Dataset schema and column statistics:

| Column | Type | Range |
|---|---|---|
| query_id | string | 32 characters |
| query | string | 5 to 5.38k characters |
| positive_passages | list | 1 to 23 items |
| negative_passages | list | 9 to 100 items |
| subset | string (categorical) | 7 distinct values |

A minimal parsing sketch follows, then sample rows with the passage lists shown as JSON.
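Read as data, each row pairs one query with its relevant (positive) and non-relevant (negative) passages. The sketch below shows one way such rows might be parsed and turned into training triples; it assumes a JSON Lines export using exactly the field names from the schema above, and the file name `rows.jsonl` is purely illustrative, not part of the dataset.

```python
import json

# Minimal sketch (assumption: the table is exported as JSON Lines, one record
# per line, with the field names shown in the schema above).
def iter_rows(path="rows.jsonl"):
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# Example: turn one row into (query, positive_text, negative_text) triples,
# a typical way retrieval/reranking data of this shape is consumed.
def triples(row):
    for pos in row["positive_passages"]:
        for neg in row["negative_passages"]:
            yield row["query"], pos["text"], neg["text"]

if __name__ == "__main__":
    for row in iter_rows():
        n_triples = sum(1 for _ in triples(row))
        print(row["query_id"], row["subset"], n_triples)
```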
query_id: fc25e2217640f637c5b9c43def7dd8d1
query: Design and MinION testing of a nanopore targeted gene sequencing panel for chronic lymphocytic leukemia
positive_passages:
[
{
"docid": "ee785105669d58052ad3b3a3954ba9fb",
"text": "Data visualization is an essential component of genomic data analysis. However, the size and diversity of the data sets produced by today's sequencing and array-based profiling methods present major challenges to visualization tools. The Integrative Genomics Viewer (IGV) is a high-performance viewer that efficiently handles large heterogeneous data sets, while providing a smooth and intuitive user experience at all levels of genome resolution. A key characteristic of IGV is its focus on the integrative nature of genomic studies, with support for both array-based and next-generation sequencing data, and the integration of clinical and phenotypic data. Although IGV is often used to view genomic data from public sources, its primary emphasis is to support researchers who wish to visualize and explore their own data sets or those from colleagues. To that end, IGV supports flexible loading of local and remote data sets, and is optimized to provide high-performance data visualization and exploration on standard desktop systems. IGV is freely available for download from http://www.broadinstitute.org/igv, under a GNU LGPL open-source license.",
"title": ""
}
]
negative_passages:
[
{
"docid": "30dffba83b24e835a083774aa91e6c59",
"text": "Wikipedia is one of the most popular sites on the Web, with millions of users relying on it to satisfy a broad range of information needs every day. Although it is crucial to understand what exactly these needs are in order to be able to meet them, little is currently known about why users visit Wikipedia. The goal of this paper is to fill this gap by combining a survey of Wikipedia readers with a log-based analysis of user activity. Based on an initial series of user surveys, we build a taxonomy of Wikipedia use cases along several dimensions, capturing users’ motivations to visit Wikipedia, the depth of knowledge they are seeking, and their knowledge of the topic of interest prior to visiting Wikipedia. Then, we quantify the prevalence of these use cases via a large-scale user survey conducted on live Wikipedia with almost 30,000 responses. Our analyses highlight the variety of factors driving users to Wikipedia, such as current events, media coverage of a topic, personal curiosity, work or school assignments, or boredom. Finally, we match survey responses to the respondents’ digital traces in Wikipedia’s server logs, enabling the discovery of behavioral patterns associated with specific use cases. For instance, we observe long and fast-paced page sequences across topics for users who are bored or exploring randomly, whereas those using Wikipedia for work or school spend more time on individual articles focused on topics such as science. Our findings advance our understanding of reader motivations and behavior on Wikipedia and can have implications for developers aiming to improve Wikipedia’s user experience, editors striving to cater to their readers’ needs, third-party services (such as search engines) providing access to Wikipedia content, and researchers aiming to build tools such as recommendation engines.",
"title": ""
},
{
"docid": "aa0f1910a52018d224dbe65b2be07a4f",
"text": "We describe a system that uses automated planning to synthesize correct and efficient parallel graph programs from high-level algorithmic specifications. Automated planning allows us to use constraints to declaratively encode program transformations such as scheduling, implementation selection, and insertion of synchronization. Each plan emitted by the planner satisfies all constraints simultaneously, and corresponds to a composition of these transformations. In this way, we obtain an integrated compilation approach for a very challenging problem domain. We have used this system to synthesize parallel programs for four graph problems: triangle counting, maximal independent set computation, preflow-push maxflow, and connected components. Experiments on a variety of inputs show that the synthesized implementations perform competitively with hand-written, highly-tuned code.",
"title": ""
},
{
"docid": "bce79146a0316fd10c6ee492ff0b5686",
"text": "Recent advances in deep learning for object recognition in natural images has prompted a surge of interest in applying a similar set of techniques to medical images. Most of the initial attempts largely focused on replacing the input to such a deep convolutional neural network from a natural image to a medical image. This, however, does not take into consideration the fundamental differences between these two types of data. More specifically, detection or recognition of an anomaly in medical images depends significantly on fine details, unlike object recognition in natural images where coarser, more global structures matter more. This difference makes it inadequate to use the existing deep convolutional neural networks architectures, which were developed for natural images, because they rely on heavily downsampling an image to a much lower resolution to reduce the memory requirements. This hides details necessary to make accurate predictions for medical images. Furthermore, a single exam in medical imaging often comes with a set of different views which must be seamlessly fused in order to reach a correct conclusion. In our work, we propose to use a multi-view deep convolutional neural network that handles a set of more than one highresolution medical image. We evaluate this network on large-scale mammography-based breast cancer screening (BI-RADS prediction) using 103 thousand images. We focus on investigating the impact of training set sizes and image sizes on the prediction accuracy. Our results highlight that performance clearly increases with the size of training set, and that the best performance can only be achieved using the images in the original resolution. This suggests the future direction of medical imaging research using deep neural networks is to utilize as much data as possible with the least amount of potentially harmful preprocessing.",
"title": ""
},
{
"docid": "351beace260a731aaf8dcf6e6870ad99",
"text": "The field of Explainable Artificial Intelligence has taken steps towards increasing transparency in the decision-making process of machine learning models for classification tasks. Understanding the reasons behind the predictions of models increases our trust in them and lowers the risks of using them. In an effort to extend this to other tasks apart from classification, this thesis explores the interpretability aspect for sequence tagging models for the task of Named Entity Recognition (NER). This work proposes two approaches for adapting LIME, an interpretation method for classification, to sequence tagging and NER. The first approach is a direct adaptation of LIME to the task, while the second includes adaptations following the idea that entities are conceived as a group of words and we would like one explanation for the whole entity. Given the challenges in the evaluation of the interpretation method, this work proposes an extensive evaluation from different angles. It includes a quantitative analysis using the AOPC metric; a qualitative analysis that studies the explanations at instance and dataset levels as well as the semantic structure of the embeddings and the explanations; and a human evaluation to validate the model's behaviour. The evaluation has discovered patterns and characteristics to take into account when explaining NER models.",
"title": ""
},
{
"docid": "03e5084a5e33205fc4deaeb69c66b460",
"text": "In this paper we present a general convex optimization approach for solving highdimensional tensor regression problems under low-dimensional structural assumptions. We consider using convex and weakly decomposable regularizers assuming that the underlying tensor lies in an unknown low-dimensional subspace. Within our framework, we derive general risk bounds of the resulting estimate under fairly general dependence structure among covariates. Our framework leads to upper bounds in terms of two very simple quantities, the Gaussian width of a convex set in tensor space and the intrinsic dimension of the low-dimensional tensor subspace. These general bounds provide useful upper bounds on rates of convergence for a number of fundamental statistical models of interest including multi-response regression, vector auto-regressive models, low-rank tensor models and pairwise interaction models. Moreover, in many of these settings we prove that the resulting estimates are minimax optimal. Departments of Statistics and Computer Science, and Optimization Group at Wisconsin Institute for Discovery, University of Wisconsin-Madison, 1300 University Avenue, Madison, WI 53706. The research of Garvesh Raskutti is supported in part by NSF Grant DMS-1407028 Department of Statistics and Morgridge Institute for Research, University of Wisconsin-Madison, 1300 University Avenue, Madison, WI 53706. The research of Ming Yuan was supported in part by NSF FRG Grant DMS-1265202, and NIH Grant 1-U54AI117924-01.",
"title": ""
},
{
"docid": "7437f0c8549cb8f73f352f8043a80d19",
"text": "Graphene is considered as one of leading candidates for gas sensor applications in the Internet of Things owing to its unique properties such as high sensitivity to gas adsorption, transparency, and flexibility. We present self-activated operation of all graphene gas sensors with high transparency and flexibility. The all-graphene gas sensors which consist of graphene for both sensor electrodes and active sensing area exhibit highly sensitive, selective, and reversible responses to NO2 without external heating. The sensors show reliable operation under high humidity conditions and bending strain. In addition to these remarkable device performances, the significantly facile fabrication process enlarges the potential of the all-graphene gas sensors for use in the Internet of Things and wearable electronics.",
"title": ""
},
{
"docid": "b759613b1eedd29d32fbbc118767b515",
"text": "Deep learning has been shown successful in a number of domains, ranging from acoustics, images to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, a significant amount of research efforts have been devoted to this area, greatly advancing graph analyzing techniques. In this survey, we comprehensively review different kinds of deep learning methods applied to graphs. We divide existing methods into three main categories: semi-supervised methods including Graph Neural Networks and Graph Convolutional Networks, unsupervised methods including Graph Autoencoders, and recent advancements including Graph Recurrent Neural Networks and Graph Reinforcement Learning. We then provide a comprehensive overview of these methods in a systematic manner following their history of developments. We also analyze the differences of these methods and how to composite different architectures. Finally, we briefly outline their applications and discuss potential future directions.",
"title": ""
},
{
"docid": "d90a66cf63abdc1d0caed64812de7043",
"text": "BACKGROUND/AIMS\nEnd-stage liver disease accounts for one in forty deaths worldwide. Chronic infections with hepatitis B virus (HBV) and hepatitis C virus (HCV) are well-recognized risk factors for cirrhosis and liver cancer, but estimates of their contributions to worldwide disease burden have been lacking.\n\n\nMETHODS\nThe prevalence of serologic markers of HBV and HCV infections among patients diagnosed with cirrhosis or hepatocellular carcinoma (HCC) was obtained from representative samples of published reports. Attributable fractions of cirrhosis and HCC due to these infections were estimated for 11 WHO-based regions.\n\n\nRESULTS\nGlobally, 57% of cirrhosis was attributable to either HBV (30%) or HCV (27%) and 78% of HCC was attributable to HBV (53%) or HCV (25%). Regionally, these infections usually accounted for >50% of HCC and cirrhosis. Applied to 2002 worldwide mortality estimates, these fractions represent 929,000 deaths due to chronic HBV and HCV infections, including 446,000 cirrhosis deaths (HBV: n=235,000; HCV: n=211,000) and 483,000 liver cancer deaths (HBV: n=328,000; HCV: n=155,000).\n\n\nCONCLUSIONS\nHBV and HCV infections account for the majority of cirrhosis and primary liver cancer throughout most of the world, highlighting the need for programs to prevent new infections and provide medical management and treatment for those already infected.",
"title": ""
},
{
"docid": "10990c819cbc6dfb88b4c2de829f27f1",
"text": "Building on the fraudulent foundation established by atheist Sigmund Freud, psychoanalyst Erik Erikson has proposed a series of eight \"life cycles,\" each with an accompanying \"life crisis,\" to explain both human behavior and man's religious tendencies. Erikson's extensive application of his theories to the life of Martin Luther reveals his contempt for the living God who has revealed Himself in Scripture. This paper will consider Erikson's view of man, sin, redemption, and religion, along with an analysis of his eight \"life cycles.\" Finally, we will critique his attempted psychoanalysis of Martin Luther.",
"title": ""
},
{
"docid": "71ae8b4cc2f4e531be95cdbb147c75eb",
"text": "This paper is to explore the possibility to use alternative data and artificial intelligence techniques to trade stocks. The efficacy of the daily Twitter sentiment on predicting the stock return is examined using machine learning methods. Reinforcement learning(Q-learning) is applied to generate the optimal trading policy based on the sentiment signal. The predicting power of the sentiment signal is more significant if the stock price is driven by the expectation on the company growth and when the company has a major event that draws the public attention. The optimal trading strategy based on reinforcement learning outperforms the trading strategy based on the machine learning prediction.",
"title": ""
},
{
"docid": "9581c692787cfef1ce2916100add4c1e",
"text": "Diabetes related eye disease is growing as a major health concern worldwide. Diabetic retinopathy is an infirmity due to higher level of glucose in the retinal capillaries, resulting in cloudy vision and blindness eventually. With regular screening, pathology can be detected in the instigating stage and if intervened with in time medication could prevent further deterioration. This paper develops an automated diagnosis system to recognize retinal blood vessels, and pathologies, such as exudates and microaneurysms together with certain texture properties using image processing techniques. These anatomical and texture features are then fed into a multiclass support vector machine (SVM) for classifying it into normal, mild, moderate, severe and proliferative categories. Advantages include, it processes quickly a large collection of fundus images obtained from mass screening which lessens cost and increases efficiency for ophthalmologists. Our method was evaluated on two publicly available databases and got encouraging results with a state of the art in this area.",
"title": ""
},
{
"docid": "746b9e9e1fdacc76d3acb4f78d824901",
"text": "This paper proposes a new method for the detection of glaucoma using fundus image which mainly affects the optic disc by increasing the cup size is proposed. The ratio of the optic cup to disc (CDR) in retinal fundus images is one of the primary physiological parameter for the diagnosis of glaucoma. The Kmeans clustering technique is recursively applied to extract the optic disc and optic cup region and an elliptical fitting technique is applied to find the CDR values. The blood vessels in the optic disc region are detected by using local entropy thresholding approach. The ratio of area of blood vessels in the inferiorsuperior side to area of blood vessels in the nasal-temporal side (ISNT) is combined with the CDR for the classification of fundus image as normal or glaucoma by using K-Nearest neighbor , Support Vector Machine and Bayes classifier. A batch of 36 retinal images obtained from the Aravind Eye Hospital, Madurai, Tamilnadu, India is used to assess the performance of the proposed system and a classification rate of 95% is achieved.",
"title": ""
},
{
"docid": "fc3aeb32f617f7a186d41d56b559a2aa",
"text": "Existing neural relation extraction (NRE) models rely on distant supervision and suffer from wrong labeling problems. In this paper, we propose a novel adversarial training mechanism over instances for relation extraction to alleviate the noise issue. As compared with previous denoising methods, our proposed method can better discriminate those informative instances from noisy ones. Our method is also efficient and flexible to be applied to various NRE architectures. As shown in the experiments on a large-scale benchmark dataset in relation extraction, our denoising method can effectively filter out noisy instances and achieve significant improvements as compared with the state-of-theart models.",
"title": ""
},
{
"docid": "86aa313233bee3f040604ffa214af4bf",
"text": "It is hypothesized that collective efficacy, defined as social cohesion among neighbors combined with their willingness to intervene on behalf of the common good, is linked to reduced violence. This hypothesis was tested on a 1995 survey of 8782 residents of 343 neighborhoods in Chicago, Illinois. Multilevel analyses showed that a measure of collective efficacy yields a high between-neighborhood reliability and is negatively associated with variations in violence, when individual-level characteristics, measurement error, and prior violence are controlled. Associations of concentrated disadvantage and residential instability with violence are largely mediated by collective efficacy.",
"title": ""
},
{
"docid": "39bf7e3a8e75353a3025e2c0f18768f9",
"text": "Ligament reconstruction is the current standard of care for active patients with an anterior cruciate ligament (ACL) rupture. Although the majority of ACL reconstruction (ACLR) surgeries successfully restore the mechanical stability of the injured knee, postsurgical outcomes remain widely varied. Less than half of athletes who undergo ACLR return to sport within the first year after surgery, and it is estimated that approximately 1 in 4 to 1 in 5 young, active athletes who undergo ACLR will go on to a second knee injury. The outcomes after a second knee injury and surgery are significantly less favorable than outcomes after primary injuries. As advances in graft reconstruction and fixation techniques have improved to consistently restore passive joint stability to the preinjury level, successful return to sport after ACLR appears to be predicated on numerous postsurgical factors. Importantly, a secondary ACL injury is most strongly related to modifiable postsurgical risk factors. Biomechanical abnormalities and movement asymmetries, which are more prevalent in this cohort than previously hypothesized, can persist despite high levels of functional performance, and also represent biomechanical and neuromuscular control deficits and imbalances that are strongly associated with secondary injury incidence. Decreased neuromuscular control and high-risk movement biomechanics, which appear to be heavily influenced by abnormal trunk and lower extremity movement patterns, not only predict first knee injury risk but also reinjury risk. These seminal findings indicate that abnormal movement biomechanics and neuromuscular control profiles are likely both residual to, and exacerbated by, the initial injury. Evidence-based medicine (EBM) strategies should be used to develop effective, efficacious interventions targeted to these impairments to optimize the safe return to high-risk activity. In this Current Concepts article, the authors present the latest evidence related to risk factors associated with ligament failure or a secondary (contralateral) injury in athletes who return to sport after ACLR. From these data, they propose an EBM paradigm shift in postoperative rehabilitation and return-to-sport training after ACLR that is focused on the resolution of neuromuscular deficits that commonly persist after surgical reconstruction and standard rehabilitation of athletes.",
"title": ""
},
{
"docid": "31dbedbcdb930ead1f8274ff2c181fcb",
"text": "This paper sums up lessons learned from a sequence of cooperative design workshops where end users were enabled to design mobile systems through scenario building, role playing, and low-fidelity prototyping. We present a resulting fixed workshop structure with well-chosen constraints that allows for end users to explore and design new technology and work practices. In these workshops, the systems developers get input to design from observing how users stage and act out current and future use scenarios and improvise new technology to fit their needs. A theoretical framework is presented to explain the creative processes involved and the workshop as a user-centered design method. Our findings encourage us to recommend the presented workshop structure for design projects involving mobility and computer-mediated communication, in particular project where the future use of the resulting products and services also needs to be designed.",
"title": ""
},
{
"docid": "c25a62b5798e7c08579efb61c35f2c66",
"text": "In this paper, we propose a new adaptive stochastic gradient Langevin dynamics (ASGLD) algorithmic framework and its two specialized versions, namely adaptive stochastic gradient (ASG) and adaptive gradient Langevin dynamics(AGLD), for non-convex optimization problems. All proposed algorithms can escape from saddle points with at most $O(\\log d)$ iterations, which is nearly dimension-free. Further, we show that ASGLD and ASG converge to a local minimum with at most $O(\\log d/\\epsilon^4)$ iterations. Also, ASGLD with full gradients or ASGLD with a slowly linearly increasing batch size converge to a local minimum with iterations bounded by $O(\\log d/\\epsilon^2)$, which outperforms existing first-order methods.",
"title": ""
},
{
"docid": "4482faa886c3216bf35265da250633c4",
"text": "Acidification of rain-water is identified as one of the most serious environmental problems of transboundary nature. Acid rain is mainly a mixture of sulphuric and nitric acids depending upon the relative quantities of oxides of sulphur and nitrogen emissions. Due to the interaction of these acids with other constituents of the atmosphere, protons are released causing increase in the soil acidity Lowering of soil pH mobilizes and leaches away nutrient cations and increases availability of toxic heavy metals. Such changes in the soil chemical characteristics reduce the soil fertility which ultimately causes the negative impact on growth and productivity of forest trees and crop plants. Acidification of water bodies causes large scale negative impact on aquatic organisms including fishes. Acidification has some indirect effects on human health also. Acid rain affects each and every components of ecosystem. Acid rain also damages man-made materials and structures. By reducing the emission of the precursors of acid rain and to some extent by liming, the problem of acidification of terrestrial and aquatic ecosystem has been reduced during last two decades.",
"title": ""
},
{
"docid": "0ebc0724a8c966e93e05fb7fce80c1ab",
"text": "Firms in the financial services industry have been faced with the dramatic and relatively recent emergence of new technology innovations, and process disruptions. The industry as a whole, and many new fintech start-ups are looking for new pathways to successful business models, the creation of enhanced customer experience, and new approaches that result in services transformation. Industry and academic observers believe this to be more of a revolution than a set of less impactful changes, with financial services as a whole due for major improvements in efficiency, in customer centricity and informedness. The long-standing dominance of leading firms that are not able to figure out how to effectively hook up with the “Fintech Revolution” is at stake. This article presents a new fintech innovation mapping approach that enables the assessment of the extent to which there are changes and transformations in four key areas of the financial services industry. We discuss: (1) operations management in financial services, and the changes that are occurring there; (2) technology innovations that have begun to leverage the execution and stakeholder value associated with payments settlement, cryptocurrencies, blockchain technologies, and cross-border payment services; (3) multiple fintech innovations that have impacted lending and deposit services, peer-to-peer (P2P) lending and the use of social media; (4) issues with respect to investments, financial markets, trading, risk management, robo-advisory and related services that are influenced by blockchain and fintech innovations.",
"title": ""
},
{
"docid": "1e53e57544d6f4250396800b5792de5f",
"text": "Several data mining algorithms use iterative optimization methods for learning predictive models. It is not easy to determine upfront which optimization method will perform best or converge fast for such tasks. In this paper, we analyze Meta Algorithms (MAs) which work by adaptively combining iterates from a pool of base optimization algorithms. We show that the performance of MAs are competitive with the best convex combination of the iterates from the base algorithms for online as well as batch convex optimization problems. We illustrate the effectiveness of MAs on the problem of portfolio selection in the stock market and use several existing ideas for portfolio selection as base algorithms. Using daily S\\&P500 data for the past 21 years and a benchmark NYSE dataset, we show that MAs outperform existing portfolio selection algorithms with provable guarantees by several orders of magnitude, and match the performance of the best heuristics in the pool.",
"title": ""
}
]
subset: scidocsrr

query_id: 8497670de42f22b2b3d4de50899958e4
query: CUDA vs OpenACC: Performance Case Studies with Kernel Benchmarks and a Memory-Bound CFD Application
positive_passages:
[
{
"docid": "6537921976c2779d1e7d921c939ec64d",
"text": "Stencil computation sweeps over a spatial grid over multiple time steps to perform nearest-neighbor computations. The bandwidth-to-compute requirement for a large class of stencil kernels is very high, and their performance is bound by the available memory bandwidth. Since memory bandwidth grows slower than compute, the performance of stencil kernels will not scale with increasing compute density. We present a novel 3.5D-blocking algorithm that performs 2.5D-spatial and temporal blocking of the input grid into on-chip memory for both CPUs and GPUs. The resultant algorithm is amenable to both thread- level and data-level parallelism, and scales near-linearly with the SIMD width and multiple-cores. Our performance numbers are faster or comparable to state-of-the-art-stencil implementations on CPUs and GPUs. Our implementation of 7-point-stencil is 1.5X-faster on CPUs, and 1.8X faster on GPUs for single- precision floating point inputs than previously reported numbers. For Lattice Boltzmann methods, the corresponding speedup number on CPUs is 2.1X.",
"title": ""
}
]
negative_passages:
[
{
"docid": "15de232c8daf22cf1a1592a21e1d9df3",
"text": "This survey discusses how recent developments in multimodal processing facilitate conceptual grounding of language. We categorize the information flow in multimodal processing with respect to cognitive models of human information processing and analyze different methods for combining multimodal representations. Based on this methodological inventory, we discuss the benefit of multimodal grounding for a variety of language processing tasks and the challenges that arise. We particularly focus on multimodal grounding of verbs which play a crucial role for the compositional power of language. Title and Abstract in German Multimodale konzeptuelle Verankerung für die automatische Sprachverarbeitung Dieser Überblick erörtert, wie aktuelle Entwicklungen in der automatischen Verarbeitung multimodaler Inhalte die konzeptuelle Verankerung sprachlicher Inhalte erleichtern können. Die automatischen Methoden zur Verarbeitung multimodaler Inhalte werden zunächst hinsichtlich der zugrundeliegenden kognitiven Modelle menschlicher Informationsverarbeitung kategorisiert. Daraus ergeben sich verschiedene Methoden um Repräsentationen unterschiedlicher Modalitäten miteinander zu kombinieren. Ausgehend von diesen methodischen Grundlagen wird diskutiert, wie verschiedene Forschungsprobleme in der automatischen Sprachverarbeitung von multimodaler Verankerung profitieren können und welche Herausforderungen sich dabei ergeben. Ein besonderer Schwerpunkt wird dabei auf die multimodale konzeptuelle Verankerung von Verben gelegt, da diese eine wichtige kompositorische Funktion erfüllen.",
"title": ""
},
{
"docid": "4aa17982590e86fea90267e4386e2ef1",
"text": "There are many promising psychological interventions on the horizon, but there is no clear methodology for preparing them to be scaled up. Drawing on design thinking, the present research formalizes a methodology for redesigning and tailoring initial interventions. We test the methodology using the case of fixed versus growth mindsets during the transition to high school. Qualitative inquiry and rapid, iterative, randomized \"A/B\" experiments were conducted with ~3,000 participants to inform intervention revisions for this population. Next, two experimental evaluations showed that the revised growth mindset intervention was an improvement over previous versions in terms of short-term proxy outcomes (Study 1, N=7,501), and it improved 9th grade core-course GPA and reduced D/F GPAs for lower achieving students when delivered via the Internet under routine conditions with ~95% of students at 10 schools (Study 2, N=3,676). Although the intervention could still be improved even further, the current research provides a model for how to improve and scale interventions that begin to address pressing educational problems. It also provides insight into how to teach a growth mindset more effectively.",
"title": ""
},
{
"docid": "e2cd2edc74d932f1632a858ac124f902",
"text": "Large writes are beneficial both on individual disks and on disk arrays, e.g., RAID-5. The presented design enables large writes of internal B-tree nodes and leaves. It supports both in-place updates and large append-only (“log-structured”) write operations within the same storage volume, within the same B-tree, and even at the same time. The essence of the proposal is to make page migration inexpensive, to migrate pages while writing them, and to make such migration optional rather than mandatory as in log-structured file systems. The inexpensive page migration also aids traditional defragmentation as well as consolidation of free space needed for future large writes. These advantages are achieved with a very limited modification to conventional B-trees that also simplifies other B-tree operations, e.g., key range locking and compression. Prior proposals and prototypes implemented transacted B-tree on top of log-structured file systems and added transaction support to log-structured file systems. Instead, the presented design adds techniques and performance characteristics of log-structured file systems to traditional B-trees and their standard transaction support, notably without adding a layer of indirection for locating B-tree nodes on disk. The result retains fine-granularity locking, full transactional ACID guarantees, fast search performance, etc. expected of a modern B-tree implementation, yet adds efficient transacted page relocation and large, high-bandwidth writes.",
"title": ""
},
{
"docid": "29c62dce09752ce0eee4ec9d1840fad0",
"text": "This paper focuses on a novel and challenging vision task, dense video captioning, which aims to automatically describe a video clip with multiple informative and diverse caption sentences. The proposed method is trained without explicit annotation of fine-grained sentence to video region-sequence correspondence, but is only based on weak video-level sentence annotations. It differs from existing video captioning systems in three technical aspects. First, we propose lexical fully convolutional neural networks (Lexical-FCN) with weakly supervised multi-instance multi-label learning to weakly link video regions with lexical labels. Second, we introduce a novel submodular maximization scheme to generate multiple informative and diverse region-sequences based on the Lexical-FCN outputs. A winner-takes-all scheme is adopted to weakly associate sentences to region-sequences in the training phase. Third, a sequence-to-sequence learning based language model is trained with the weakly supervised information obtained through the association process. We show that the proposed method can not only produce informative and diverse dense captions, but also outperform state-of-the-art single video captioning methods by a large margin.",
"title": ""
},
{
"docid": "d1f3961959f11ce553237ef8941da86a",
"text": "Inspired by recent successes of deep learning in computer vision and speech recognition, we propose a novel framework to encode time series data as different types of images, namely, Gramian Angular Fields (GAF) and Markov Transition Fields (MTF). This enables the use of techniques from computer vision for classification. Using a polar coordinate system, GAF images are represented as a Gramian matrix where each element is the trigonometric sum (i.e., superposition of directions) between different time intervals. MTF images represent the first order Markov transition probability along one dimension and temporal dependency along the other. We used Tiled Convolutional Neural Networks (tiled CNNs) on 12 standard datasets to learn high-level features from individual GAF, MTF, and GAF-MTF images that resulted from combining GAF and MTF representations into a single image. The classification results of our approach are competitive with five stateof-the-art approaches. An analysis of the features and weights learned via tiled CNNs explains why the approach works.",
"title": ""
},
{
"docid": "5e5e2d038ae29b4c79c79abe3d20ae40",
"text": "Article history: Received 28 February 2013 Accepted 26 July 2013 Available online 11 October 2013 Fault diagnosis of Discrete Event Systems has become an active research area in recent years. The research activity in this area is driven by the needs of many different application domains such as manufacturing, process control, control systems, transportation, communication networks, software engineering, and others. The aim of this paper is to review the state-of the art of methods and techniques for fault diagnosis of Discrete Event Systems based on models that include faulty behaviour. Theoretical and practical issues related to model description tools, diagnosis processing structure, sensor selection, fault representation and inference are discussed. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "99511c1267d396d3745f075a40a06507",
"text": "Problem Description: It should be well known that processors are outstripping memory performance: specifically that memory latencies are not improving as fast as processor cycle time or IPC or memory bandwidth. Thought experiment: imagine that a cache miss takes 10000 cycles to execute. For such a processor instruction level parallelism is useless, because most of the time is spent waiting for memory. Branch prediction is also less effective, since most branches can be determined with data already in registers or in the cache; branch prediction only helps for branches which depend on outstanding cache misses. At the same time, pressures for reduced power consumption mount. Given such trends, some computer architects in industry (although not Intel EPIC) are talking seriously about retreating from out-of-order superscalar processor architecture, and instead building simpler, faster, dumber, 1-wide in-order processors with high degrees of speculation. Sometimes this is proposed in combination with multiprocessing and multithreading: tolerate long memory latencies by switching to other processes or threads. I propose something different: build narrow fast machines but use intelligent logic inside the CPU to increase the number of outstanding cache misses that can be generated from a single program. By MLP I mean simply the number of outstanding cache misses that can be generated (by a single thread, task, or program) and executed in an overlapped manner. It does not matter what sort of execution engine generates the multiple outstanding cache misses. An out-of-order superscalar ILP CPU may generate multiple outstanding cache misses, but 1-wide processors can be just as effective. Change the metrics: total execution time remains the overall goal, but instead of reporting IPC as an approximation to this, we must report MLP. Limit studies should be in terms of total number of non-overlapped cache misses on critical path. Now do the research: Many present-day hot topics in computer architecture help ILP, but do not help MLP. As mentioned above, predicting branch directions for branches that can be determined from data already in the cache or in registers does not help MLP for extremely long latencies. Similarly, prefetching of data cache misses for array processing codes does not help MLP – it just moves it around. Instead, investigate microarchitectures that help MLP: (0) Trivial case – explicit multithreading, like SMT. (1) Slightly less trivial case – implicitly multithread single programs, either by compiler software on an MT machine, or by a hybrid, such as …",
"title": ""
},
{
"docid": "553eb49b292b5edb4b53953701410a7d",
"text": "We review the most important mathematical models and algorithms developed for the exact solution of the one-dimensional bin packing and cutting stock problems, and experimentally evaluate, on state-of-the art computers, the performance of the main available software tools.",
"title": ""
},
{
"docid": "f01d7df02efb2f4114d93adf0da8fbf1",
"text": "This review summarizes the different methods of preparation of polymer nanoparticles including nanospheres and nanocapsules. The first part summarizes the basic principle of each method of nanoparticle preparation. It presents the most recent innovations and progresses obtained over the last decade and which were not included in previous reviews on the subject. Strategies for the obtaining of nanoparticles with controlled in vivo fate are described in the second part of the review. A paragraph summarizing scaling up of nanoparticle production and presenting corresponding pilot set-up is considered in the third part of the review. Treatments of nanoparticles, applied after the synthesis, are described in the next part including purification, sterilization, lyophilization and concentration. Finally, methods to obtain labelled nanoparticles for in vitro and in vivo investigations are described in the last part of this review.",
"title": ""
},
{
"docid": "1e1706e1bd58a562a43cc7719f433f4f",
"text": "In this paper, we present the use of D-higraphs to perform HAZOP studies. D-higraphs is a formalism that includes in a single model the functional as well as the structural (ontological) components of any given system. A tool to perform a semi-automatic guided HAZOP study on a process plant is presented. The diagnostic system uses an expert system to predict the behavior modeled using D-higraphs. This work is applied to the study of an industrial case and its results are compared with other similar approaches proposed in previous studies. The analysis shows that the proposed methodology fits its purpose enabling causal reasoning that explains causes and consequences derived from deviations, it also fills some of the gaps and drawbacks existing in previous reported HAZOP assistant tools.",
"title": ""
},
{
"docid": "2fc7b4f4763d094462f13688b473d370",
"text": "Software-product-line engineering has gained considerable momentum in recent years, both in industry and in academia. A software product line is a family of software products that share a common set of features. Software product lines challenge traditional analysis techniques, such as type checking, model checking, and theorem proving, in their quest of ensuring correctness and reliability of software. Simply creating and analyzing all products of a product line is usually not feasible, due to the potentially exponential number of valid feature combinations. Recently, researchers began to develop analysis techniques that take the distinguishing properties of software product lines into account, for example, by checking feature-related code in isolation or by exploiting variability information during analysis. The emerging field of product-line analyses is both broad and diverse, so it is difficult for researchers and practitioners to understand their similarities and differences. We propose a classification of product-line analyses to enable systematic research and application. Based on our insights with classifying and comparing a corpus of 123 research articles, we develop a research agenda to guide future research on product-line analyses.",
"title": ""
},
{
"docid": "eb0a5d496dd9a427ab7d52416f70aab3",
"text": "Progress in habit theory can be made by distinguishing habit from frequency of occurrence, and using independent measures for these constructs. This proposition was investigated in three studies using a longitudinal, cross-sectional and experimental design on eating, mental habits and word processing, respectively. In Study 1, snacking habit and past snacking frequency independently predicted later snacking behaviour, while controlling for the theory of planned behaviour variables. Habit fully mediated the effect of past on later behaviour. In Study 2, habitual negative self-thinking and past frequency of negative self-thoughts independently predicted self-esteem and the presence of depressive and anxiety symptoms. In Study 3, habit varied as a function of experimentally manipulated task complexity, while behavioural frequency was held constant. Taken together, while repetition is necessary for habits to develop, these studies demonstrate that habit should not be equated with frequency of occurrence, but rather should be considered as a mental construct involving features of automaticity, such as lack of awareness, difficulty to control and mental efficiency.",
"title": ""
},
{
"docid": "de43054eb774df93034ffc1976a932b7",
"text": "Recent experiments in programming natural language question-answering systems are reviewed to summarize the methods that have been developed for syntactic, semantic, and logical analysis of English strings. It is concluded that at least minimally effective techniques have been devised for answering questions from natural language subsets in small scale experimental systems and that a useful paradigm has evolved to guide research efforts in the field. Current approaches to semantic analysis and logical inference are seen to be effective beginnings but of questionable generality with respect either to subtle aspects of meaning or to applications over large subsets of English. Generalizing from current small-scale experiments to language-processing systems based on dictionaries with thousands of entries—with correspondingly large grammars and semantic systems—may entail a new order of complexity and require the invention and development of entirely different approaches to semantic analysis and question answering.",
"title": ""
},
{
"docid": "4d1a448569c55f919d9ce4da0928c89a",
"text": "The hit, break and cut classes of verbs are grammatically relevant in Kimaragang, as in English. The relevance of such classes for determining how arguments are expressed suggests that the meaning of a verb is composed of (a) systematic components of meaning (the EVENT TEMPLATE); and (b) idiosyncratic properties of the individual root. Assuming this approach to be essentially correct, we compare grammatical phenomena in Kimaragang which are sensitive to verb class membership with phenomena which are not class-sensitive. The tendency that emerges is that class-sensitive alternations do not seem to be affix-dependent, and are quite restricted in their ability to introduce new arguments into the argument structure. 1. Verbs of hitting and breaking in English This paper discusses the relationship between verbal semantics and clause structure in Kimaragang Dusun, an endangered Philippine-type language of northern Borneo. It builds on a classic paper by Charles Fillmore (1970), in which he distinguishes two classes of transitive verbs in English: “surface contact” verbs (e.g., hit, slap, strike, bump, stroke) vs. “change of state” verbs (e.g., break, bend, fold, shatter, crack). Fillmore shows that the members of each class share certain syntactic and semantic properties which distinguish them from members of the other class. He further argues that the correlation between these syntactic and semantic properties supports a view of lexical semantics under which the meaning of a verb is made up of two kinds of elements: (a) systematic components of meaning that are shared by an entire class; and (b) idiosyncratic components that are specific to the individual root. Only the former are assumed to be “grammatically relevant.” This basic insight has been foundational for a large body of subsequent work in the area of lexical semantics. One syntactic test that distinguishes hit verbs from break verbs in English is the “causative alternation”, which is systematically possible with break verbs (John broke the window vs. The 1 I would like to thank Jim Johansson, Farrell Ackerman and John Beavers for helpful discussion of these issues. Thanks also to Jim Johansson for giving me access to his field dictionary (Johansson, n.d.), the source of many of the Kimaragang examples in this paper. Special thanks are due to my primary language consultant, Janama Lantubon. Part of the research for this study was supported by NEH-NSF Documenting Endangered Languages fellowship no. FN-50027-07. The grammar of hitting, breaking and cutting in Kimaragang Dusun 2 window broke) but systematically impossible with hit verbs (John hit the window vs. *The window hit). A second test involves a kind of “possessor ascension”, a paraphrase in which the possessor of a body-part noun can be expressed as direct object. This paraphrase is grammatical with hit verbs (I hit his leg vs. I hit him on the leg) but not with break verbs (I broke his leg vs. *I broke him on the leg). A third diagnostic relates to the potential ambiguity of the passive participle. Participles of both classes take a verbal-eventive reading; but participles of break verbs also allow an adjectival-stative reading (the window is still broken) which is unavailable for participles of hit verbs (*the window is still hit). Semantically, the crucial difference between the two classes is that break verbs entail a result, specifically a “separation in [the] material integrity” of the patient (Hale and Keyser 1987). 
This entailment cannot be cancelled (e.g., I broke the window with a hammer; #it didn’t faze the window, but the hammer shattered). The hit verbs, in contrast, do not have this entailment (I hit the window with a hammer; it didn’t faze the window, but the hammer shattered). A second difference is that break verbs may impose selectional restrictions based on physical properties of the object (I {folded/?bent/ *broke/*shattered} the blanket) whereas hit verbs do not (I {hit/slapped/struck/beat} the blanket). Selectional restrictions of hit verbs are more likely to be based on physical properties of the instrument. In the years since 1970, these two classes of verbs have continued to be studied and discussed in numerous publications. Additional diagnostics have been identified, including the with/against alternation (examples 1–2; cf. Fillmore 1977:75); the CONATIVE alternation (Mary hit/broke the piñata vs. Mary hit/*broke at the piñata; Guerssel et al. 1985); and the Middle alternation (This glass breaks/*hits easily; Fillmore 1977, Hale and Keyser 1987). These tests and others are summarized in Levin (1993). (1) a. I hit the fence with the stick. b. I hit the stick against the fence. (2) a. I broke the window with the stick. b. #I broke the stick against the window. (not the same meaning!!) Another verb class that has received considerable attention in recent years is the cut class (e.g., Guerssel et al. 1985, Bohnemeyer 2007, Asifa et al. 2007). In this paper I will show that these same three classes (hit, break, cut) are distinguished by a number of grammatical and semantic properties in Kimaragang as well. Section 2 briefly introduces some of the basic assumptions that we will adopt about the structure of verb meanings. Section 3 discusses criteria that distinguish hit verbs from break verbs, and section 4 discusses the properties of the cut verbs. Section 5 introduces another test, which I refer to as the instrumental alternation, which exhibits a different pattern for each of the three classes. Section 6 discusses the tests themselves, trying to identify characteristic properties of the constructions that are sensitive to verb classes, and which distinguish these constructions from those that are not class-sensitive. 2. What do verb classes tell us? Fillmore‟s approach to the study of verb meanings has inspired a large volume of subsequent research; see for example Levin (1993), Levin and Rappaport Hovav (1995, 1998, 2005; henceforth L&RH), and references cited in those works. Much of this research is concerned with exploring the following hypotheses, which were already at least partially articulated in Fillmore (1970): The grammar of hitting, breaking and cutting in Kimaragang Dusun 3 a. Verb meanings are composed of two kinds of information. Some components of meaning are systematic, forming a kind of “event template”, while others are idiosyncratic, specific to that particular root. b. Only systematic components of meaning are “grammatically relevant”, more specifically, relevant to argument realization. c. Grammatically determined verb classes are sets of verbs that share the same template. The systematic aspects of meaning distinguish one class from another, while roots belonging to the same class are distinguished by features of their idiosyncratic meaning. Levin (1993) states: “[T]here is a sense in which the notion of verb class is an artificial construct. Verb classes arise because a set of verbs with one or more shared meaning components show similar behavior... 
The important theoretical construct is the meaning component, not the verb class...” Identifying semantically determined sets of verbs is thus a first step in understanding what elements of meaning are relevant for determining how arguments will be expressed. Notice that the three prototypical verbs under consideration here (hit, beak, cut) are all transitive verbs, and all three select the same set of semantic roles: agent, patient, plus optional instrument. Thus the event template that defines each class, and allows us to account for the grammatical differences summarized above, must be more than a simple list of semantic roles. In addition to identifying grammatically relevant components of meaning, the study of verb classes is important as a means of addressing the following questions: (a) What is the nature of the “event template”, and how should it be represented? and (b) What morpho-syntactic processes or constructions are valid tests for “grammatical relevance” in the sense intended above? Clearly these three issues are closely inter-related, and cannot be fully addressed in isolation from each other. However, in this paper I will focus primarily on the third question, which I will re-state in the following way: What kinds of grammatical constructions or tests are relevant for identifying semantically-based verb classes? 3. Verbs of hitting and breaking in Kimaragang 3.1 Causative-inchoative alternation Kimaragang is structurally very similar to the languages of the central Philippines. In particular, Kimaragang exhibits the rich Philippine-type voice system in which the semantic role of the subject (i.e., the NP marked for nominative case) is indicated by the voice affixation of the verb. 2 In the Active Voice, an additional “transitivity” prefix occurs on transitive verbs; this prefix is lacking on intransitive verbs. 3 Many verbal roots occur in both transitive and intransitive forms, as illustrated in (3) with the root patay „die; kill‟. In the most productive pattern, and the one of interest to us here, the intransitive form has an inchoative (change of state) meaning while the transitive form has a causative meaning. However, it is important to note that there is no causative morpheme present in these forms (morphological causatives are marked by a different prefix, po-, as discussed in section 6.1). 2 See Kroeger (2005) for a more detailed summary with examples. 3 For details see Kroeger (1996); Kroeger & Johansson (2005). The grammar of hitting, breaking and cutting in Kimaragang Dusun 4 (3) a. Minamatay(<in>m-poN-patay) oku do tasu. 4 <PST>AV-TR-die 1sg.NOM ACC dog „I killed a dog.‟ b. Minatay(<in>m-patay) it tasu. <PST>AV-die NOM dog „The dog died.‟ Virtually all break-type roots allow both the causative and inchoative forms, as illustrated in (6– 7); but hit-type roots generally occur only in the transitive form. Thus just as in English, the causative alternation is highly productive with ",
"title": ""
},
{
"docid": "7a17ff6cbc7fcbdb2c867a23dc1be591",
"text": "Particle swarm optimization has become a common heuristic technique in the optimization community, with many researchers exploring the concepts, issues, and applications of the algorithm. In spite of this attention, there has as yet been no standard definition representing exactly what is involved in modern implementations of the technique. A standard is defined here which is designed to be a straightforward extension of the original algorithm while taking into account more recent developments that can be expected to improve performance on standard measures. This standard algorithm is intended for use both as a baseline for performance testing of improvements to the technique, as well as to represent PSO to the wider optimization community",
"title": ""
},
{
"docid": "a0b55bafeac6f681c758ccb45d54f6e5",
"text": "( 南京理工大学 计算机学院 江苏 南京 210094 ) E_mail:sj012328@163.com 摘要:基于学习分类器集成算法,设计了在动态环境下的适应度函数,在理论上推导并证明了集成算法的收敛性,为本文提出的路 径规划算法的收敛提供了理论保证。仿真实验结果也表明遗传算法和学习分类器结合用于机器人的路径规划是收敛的,遗传算法的 早熟收敛和收敛速度慢两大难题也得到很大改善。 关键词:路径规划 机器人 学习分类器 收敛性 Research on convergence of robot path planning based on LCS Jie Shao Jing yu Yang (School of computer Science Nanjing University of Science and Technology Nanjing 210094 ) Abstract: A path planning algorithm of robot is proposed based on ensemble algorithm of the learning classifier system, which design fitness function in dynamic environment. The paper derived and proved that ensemble algorithm is convergence and provided a theoretical guarantee for the path planning algorithm. Simulation results also showed that genetic algorithms and learning classifier system combination for robot path planning is effective. Two major problems of the GA premature convergence and slow convergence have been significantly improved. Keyword: Path Planning Robot Learning classifier system convergence",
"title": ""
},
{
"docid": "8966f87b2441cc2c348e25e3503e766c",
"text": "Fuzzing is a simple yet effective approach to discover software bugs utilizing randomly generated inputs. However, it is limited by coverage and cannot find bugs hidden in deep execution paths of the program because the randomly generated inputs fail complex sanity checks, e.g., checks on magic values, checksums, or hashes. To improve coverage, existing approaches rely on imprecise heuristics or complex input mutation techniques (e.g., symbolic execution or taint analysis) to bypass sanity checks. Our novel method tackles coverage from a different angle: by removing sanity checks in the target program. T-Fuzz leverages a coverage-guided fuzzer to generate inputs. Whenever the fuzzer can no longer trigger new code paths, a light-weight, dynamic tracing based technique detects the input checks that the fuzzer-generated inputs fail. These checks are then removed from the target program. Fuzzing then continues on the transformed program, allowing the code protected by the removed checks to be triggered and potential bugs discovered. Fuzzing transformed programs to find bugs poses two challenges: (1) removal of checks leads to over-approximation and false positives, and (2) even for true bugs, the crashing input on the transformed program may not trigger the bug in the original program. As an auxiliary post-processing step, T-Fuzz leverages a symbolic execution-based approach to filter out false positives and reproduce true bugs in the original program. By transforming the program as well as mutating the input, T-Fuzz covers more code and finds more true bugs than any existing technique. We have evaluated T-Fuzz on the DARPA Cyber Grand Challenge dataset, LAVA-M dataset and 4 real-world programs (pngfix, tiffinfo, magick and pdftohtml). For the CGC dataset, T-Fuzz finds bugs in 166 binaries, Driller in 121, and AFL in 105. In addition, found 3 new bugs in previously-fuzzed programs and libraries.",
"title": ""
},
{
"docid": "8296ce0143992c7513051c70758541be",
"text": "This artic,le introduces Adaptive Resonance Theor) 2-A (ART 2-A), an efjCicient algorithm that emulates the self-organizing pattern recognition and hypothesis testing properties of the ART 2 neural network architect~~rc, hut at a speed two to three orders of magnitude fbster. Analysis and simulations show how’ the ART 2-A systems correspond to ART 2 rivnamics at both the fast-learn limit and at intermediate learning rate.r. Intermediate ieurning rates permit fust commitment of category nodes hut slow recoding, analogous to properties of word frequency effects. encoding specificity ef@cts, and episodic memory. Better noise tolerunce is hereby achieved ti’ithout a loss of leurning stability. The ART 2 and ART 2-A systems are contrasted with the leader algorithm. The speed of ART 2-A makes pructical the use of ART 2 modules in large scale neural computation. Keywords-Neural networks, Pattern recognition. Category formation. Fast learning, Adaptive resonance.",
"title": ""
},
{
"docid": "65cdf37698a552944fca3b9f4cb2d6cc",
"text": "The Wechsler Adult Intelligence Scale--Third Edition (WAIS-III; D. Wechsler, 1997) permits the calculation of both traditional IQ and index scores. However, if only the subtests constituting the index scores are administered, especially those yielding the Verbal Comprehension and Perceptual Organization Indexes, there is no equivalent measure of Full Scale IQ. Following the procedure for calculating a General Ability Index (GAI; A. Prifitera, L. G. Weiss, & D. H. Saklofske, 1998) for the Wechsler Intelligence Scale for Children--Third Edition (D. Wechsler, 1991), GAI normative tables for the WAIS-III standardization sample are reported here.",
"title": ""
},
{
"docid": "16c522d458ed5df9d620e8255886e69e",
"text": "Linked Stream Data has emerged as an effort to represent dynamic, time-dependent data streams following the principles of Linked Data. Given the increasing number of available stream data sources like sensors and social network services, Linked Stream Data allows an easy and seamless integration, not only among heterogenous stream data, but also between streams and Linked Data collections, enabling a new range of real-time applications. This tutorial gives an overview about Linked Stream Data processing. It describes the basic requirements for the processing, highlighting the challenges that are faced, such as managing the temporal aspects and memory overflow. It presents the different architectures for Linked Stream Data processing engines, their advantages and disadvantages. The tutorial also reviews the state of the art Linked Stream Data processing systems, and provide a comparison among them regarding the design choices and overall performance. A short discussion of the current challenges in open problems is given at the end.",
"title": ""
}
] |
scidocsrr
|
d20444f2aeb0bcbc25835726b89a2fb1
|
Better cross company defect prediction
|
[
{
"docid": "dc66c80a5031c203c41c7b2908c941a3",
"text": "There has been a great deal of interest in defect prediction: using prediction models trained on historical data to help focus quality-control resources in ongoing development. Since most new projects don't have historical data, there is interest in cross-project prediction: using data from one project to predict defects in another. Sadly, results in this area have largely been disheartening. Most experiments in cross-project defect prediction report poor performance, using the standard measures of precision, recall and F-score. We argue that these IR-based measures, while broadly applicable, are not as well suited for the quality-control settings in which defect prediction models are used. Specifically, these measures are taken at specific threshold settings (typically thresholds of the predicted probability of defectiveness returned by a logistic regression model). However, in practice, software quality control processes choose from a range of time-and-cost vs quality tradeoffs: how many files shall we test? how many shall we inspect? Thus, we argue that measures based on a variety of tradeoffs, viz., 5%, 10% or 20% of files tested/inspected would be more suitable. We study cross-project defect prediction from this perspective. We find that cross-project prediction performance is no worse than within-project performance, and substantially better than random prediction!",
"title": ""
},
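The passage argues for evaluating defect predictors by how many defects are caught within a fixed inspection budget (e.g. the top 5%, 10% or 20% of files) rather than at a fixed probability threshold. A minimal sketch of such a budget-based measure, with assumed input shapes:

```python
def recall_at_top_fraction(scores, labels, fraction=0.2):
    """Fraction of all defective files found when inspecting only the top
    `fraction` of files ranked by predicted defect probability.
    `scores` and `labels` are parallel lists; a label of 1 means defective."""
    ranked = sorted(zip(scores, labels), key=lambda t: t[0], reverse=True)
    budget = max(1, int(len(ranked) * fraction))
    found = sum(label for _, label in ranked[:budget])
    total = sum(labels)
    return found / total if total else 0.0

# Usage with made-up predictions from a cross-project model.
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.1]
labels = [1, 0, 1, 0, 0, 1]
print(recall_at_top_fraction(scores, labels, fraction=0.2))
```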
{
"docid": "697580dda38c9847e9ad7c6a14ad6cd0",
"text": "Background: This paper describes an analysis that was conducted on newly collected repository with 92 versions of 38 proprietary, open-source and academic projects. A preliminary study perfomed before showed the need for a further in-depth analysis in order to identify project clusters.\n Aims: The goal of this research is to perform clustering on software projects in order to identify groups of software projects with similar characteristic from the defect prediction point of view. One defect prediction model should work well for all projects that belong to such group. The existence of those groups was investigated with statistical tests and by comparing the mean value of prediction efficiency.\n Method: Hierarchical and k-means clustering, as well as Kohonen's neural network was used to find groups of similar projects. The obtained clusters were investigated with the discriminant analysis. For each of the identified group a statistical analysis has been conducted in order to distinguish whether this group really exists. Two defect prediction models were created for each of the identified groups. The first one was based on the projects that belong to a given group, and the second one - on all the projects. Then, both models were applied to all versions of projects from the investigated group. If the predictions from the model based on projects that belong to the identified group are significantly better than the all-projects model (the mean values were compared and statistical tests were used), we conclude that the group really exists.\n Results: Six different clusters were identified and the existence of two of them was statistically proven: 1) cluster proprietary B -- T=19, p=0.035, r=0.40; 2) cluster proprietary/open - t(17)=3.18, p=0.05, r=0.59. The obtained effect sizes (r) represent large effects according to Cohen's benchmark, which is a substantial finding.\n Conclusions: The two identified clusters were described and compared with results obtained by other researchers. The results of this work makes next step towards defining formal methods of reuse defect prediction models by identifying groups of projects within which the same defect prediction model may be used. Furthermore, a method of clustering was suggested and applied.",
"title": ""
}
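The entry above clusters projects and then compares a per-cluster defect model against an all-projects model. As a rough sketch of the grouping step only, here is a plain k-means over assumed project-level features (the feature choice and k are illustrative, not the study's):

```python
import random

def kmeans(points, k, iters=50):
    # Plain k-means on project-level feature vectors (e.g. size, churn).
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        for j, members in enumerate(clusters):
            if members:
                centroids[j] = [sum(col) / len(members) for col in zip(*members)]
    return centroids, clusters

# Usage: group three toy "projects" described by (KLOC, churn) into 2 clusters.
projects = [[10.0, 1.2], [12.0, 1.0], [200.0, 30.0]]
print(kmeans(projects, k=2)[1])
```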
] |
[
{
"docid": "56d9b47d1860b5a80c62da9f75b6769d",
"text": "Optical see-through head-mounted displays (OSTHMDs) have many advantages in augmented reality application, but their utility in practical applications has been limited by the complexity of calibration. Because the human subject is an inseparable part of the eye-display system, previous methods for OSTHMD calibration have required extensive manual data collection using either instrumentation or manual point correspondences and are highly dependent on operator skill. This paper describes display-relative calibration (DRC) for OSTHMDs, a new two phase calibration method that minimizes the human element in the calibration process and ensures reliable calibration. Phase I of the calibration captures the parameters of the display system relative to a normalized reference frame and is performed in a jig with no human factors issues. The second phase optimizes the display for a specific user and the placement of the display on the head. Several phase II alternatives provide flexibility in a variety of applications including applications involving untrained users.",
"title": ""
},
{
"docid": "0488511dc0641993572945e98a561cc7",
"text": "Deep learning (DL) defines a new data-driven programming paradigm that constructs the internal system logic of a crafted neuron network through a set of training data. We have seen wide adoption of DL in many safety-critical scenarios. However, a plethora of studies have shown that the state-of-the-art DL systems suffer from various vulnerabilities which can lead to severe consequences when applied to real-world applications. Currently, the testing adequacy of a DL system is usually measured by the accuracy of test data. Considering the limitation of accessible high quality test data, good accuracy performance on test data can hardly provide confidence to the testing adequacy and generality of DL systems. Unlike traditional software systems that have clear and controllable logic and functionality, the lack of interpretability in a DL system makes system analysis and defect detection difficult, which could potentially hinder its real-world deployment. In this paper, we propose DeepGauge, a set of multi-granularity testing criteria for DL systems, which aims at rendering a multi-faceted portrayal of the testbed. The in-depth evaluation of our proposed testing criteria is demonstrated on two well-known datasets, five DL systems, and with four state-of-the-art adversarial attack techniques against DL. The potential usefulness of DeepGauge sheds light on the construction of more generic and robust DL systems.",
"title": ""
},
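DeepGauge's criteria are defined at several granularities; one of the simplest building blocks of such coverage measures is counting neurons that fire above a threshold on some test input. The sketch below illustrates that idea under assumed interfaces (the threshold and layer layout are not DeepGauge's exact definitions):

```python
import numpy as np

def neuron_activation_coverage(activations, threshold=0.25):
    """activations: list of 2-D arrays, one per layer, shaped (num_inputs, num_neurons).
    A neuron counts as covered if it exceeds `threshold` on at least one test input."""
    covered, total = 0, 0
    for layer in activations:
        covered += int(np.any(layer > threshold, axis=0).sum())
        total += layer.shape[1]
    return covered / total

# Usage with random stand-in activations for a 2-layer net and 100 test inputs.
acts = [np.random.rand(100, 64), np.random.rand(100, 10)]
print(neuron_activation_coverage(acts))
```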
{
"docid": "c8977fe68b265b735ad4261f5fe1ec25",
"text": "We present ACQUINE - Aesthetic Quality Inference Engine, a publicly accessible system which allows users to upload their photographs and have them rated automatically for aesthetic quality. The system integrates a support vector machine based classifier which extracts visual features on the fly and performs real-time classification and prediction. As the first publicly available tool for automatically determining the aesthetic value of an image, this work is a significant first step in recognizing human emotional reaction to visual stimulus. In this paper, we discuss fundamentals behind this system, and some of the challenges faced while creating it. We report statistics generated from over 140,000 images uploaded by Web users. The system is demonstrated at http://acquine.alipr.com.",
"title": ""
},
{
"docid": "36357f48cbc3ed4679c679dcb77bdd81",
"text": "In this paper, we review research and applications in the area of mediated or remote social touch. Whereas current communication media rely predominately on vision and hearing, mediated social touch allows people to touch each other over a distance by means of haptic feedback technology. Overall, the reviewed applications have interesting potential, such as the communication of simple ideas (e.g., through Hapticons), establishing a feeling of connectedness between distant lovers, or the recovery from stress. However, the beneficial effects of mediated social touch are usually only assumed and have not yet been submitted to empirical scrutiny. Based on social psychological literature on touch, communication, and the effects of media, we assess the current research and design efforts and propose future directions for the field of mediated social touch.",
"title": ""
},
{
"docid": "fb8201417666d992d508538583c5713f",
"text": "We analyze the I/O behavior of iBench, a new collection of productivity and multimedia application workloads. Our analysis reveals a number of differences between iBench and typical file-system workload studies, including the complex organization of modern files, the lack of pure sequential access, the influence of underlying frameworks on I/O patterns, the widespread use of file synchronization and atomic operations, and the prevalence of threads. Our results have strong ramifications for the design of next generation local and cloud-based storage systems.",
"title": ""
},
{
"docid": "1dee93ec9e8de1cf365534581fb19623",
"text": "The term “Business Model”started to gain momentum in the early rise of the new economy and it is currently used both in business practice and scientific research. Under a general point of view BMs are considered as a contact point among technology, organization and strategy used to describe how an organization gets value from technology and uses it as a source of competitive advantage. Recent contributions suggest to use ontologies to define a shareable conceptualization of BM. The aim of this study is to investigate the role of BM Ontologies as a conceptual tool for the cooperation of subjects interested in achieving a common goal and operating in complex and innovative environments. This is the case for example of those contexts characterized by the deployment of e-services from multiple service providers in cross border environments. Through an extensive literature review on BM we selected the most suitable conceptual tool and studied its application to the LD-CAST project during a participatory action research activity in order to analyse the BM design process of a new organisation based on the cooperation of service providers (the Chambers of Commerce from Italy, Romania, Poland and Bulgaria) with different needs, legal constraints and cultural background.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "a9069e2560b78e97bf8e76889041a201",
"text": "We consider the setting of an agent with a fixed body interacting with an unknown and uncertain external world. We show that models trained to predict proprioceptive information about the agent’s body come to represent objects in the external world. In spite of being trained with only internally available signals, these dynamic body models come to represent external objects through the necessity of predicting their effects on the agent’s own body. That is, the model learns holistic persistent representations of objects in the world, even though the only training signals are body signals. Our dynamics model is able to successfully predict distributions over 132 sensor readings over 100 steps into the future and we demonstrate that even when the body is no longer in contact with an object, the latent variables of the dynamics model continue to represent its shape. We show that active data collection by maximizing the entropy of predictions about the body— touch sensors, proprioception and vestibular information—leads to learning of dynamic models that show superior performance when used for control. We also collect data from a real robotic hand and show that the same models can be used to answer questions about properties of objects in the real world. Videos with qualitative results of our models are available at https://goo.gl/mZuqAV.",
"title": ""
},
{
"docid": "d04e975e48bd385a69fdf58c93103fd3",
"text": "In this paper we will present a low-phase-noise wide-tuning-range oscillator suitable for scaled CMOS processes. It switches between the two resonant modes of a high-order LC resonator that consists of two identical LC tanks coupled by capacitor and transformer. The mode switching method does not add lossy switches to the resonator and thus doubles frequency tuning range without degrading phase noise performance. Moreover, the coupled resonator leads to 3 dB lower phase noise than a single LC tank, which provides a way of achieving low phase noise in scaled CMOS process. Finally, the novel way of using inductive and capacitive coupling jointly decouples frequency separation and tank impedances of the two resonant modes, and makes it possible to achieve balanced performance. The proposed structure is verified by a prototype in a low power 65 nm CMOS process, which covers all cellular bands with a continuous tuning range of 2.5-5.6 GHz and meets all stringent phase noise specifications of cellular standards. It uses a 0.6 V power supply and achieves excellent phase noise figure-of-merit (FoM) of 192.5 dB at 3.7 GHz and >; 188 dB across the entire tuning range. This demonstrates the possibility of achieving low phase noise and wide tuning range at the same time in scaled CMOS processes.",
"title": ""
},
{
"docid": "5fd840b020b69c9588faf575f8079e83",
"text": "We demonstrate that modern image recognition methods based on artificial neural networks can recover hidden information from images protected by various forms of obfuscation. The obfuscation techniques considered in this paper are mosaicing (also known as pixelation), blurring (as used by YouTube), and P3, a recently proposed system for privacy-preserving photo sharing that encrypts the significant JPEG coefficients to make images unrecognizable by humans. We empirically show how to train artificial neural networks to successfully identify faces and recognize objects and handwritten digits even if the images are protected using any of the above obfuscation techniques.",
"title": ""
},
{
"docid": "4a1a1b3012f2ce941cc532a55b49f09b",
"text": "Gamification informally refers to making a system more game-like. More specifically, gamification denotes applying game mechanics to a non-game system. We theorize that gamification success depends on the game mechanics employed and their effects on user motivation and immersion. The proposed theory may be tested using an experiment or questionnaire study.",
"title": ""
},
{
"docid": "0c43c0dbeaff9afa0e73bddb31c7dac0",
"text": "A compact dual-band dielectric resonator antenna (DRA) using a parasitic c-slot fed by a microstrip line is proposed. In this configuration, the DR performs the functions of an effective radiator and the feeding structure of the parasitic c-slot in the ground plane. By optimizing the proposed structure parameters, the structure resonates at two different frequencies. One is from the DRA with the broadside patterns and the other from the c-slot with the dipole-like patterns. In order to determine the performance of varying design parameters on bandwidth and resonance frequency, the parametric study is carried out using simulation software High-Frequency Structure Simulator and experimental results. The measured and simulated results show excellent agreement.",
"title": ""
},
{
"docid": "46bc17ab45e11b5c9c07200a60db399f",
"text": "Locality-sensitive hashing (LSH) is a basic primitive in several large-scale data processing applications, including nearest-neighbor search, de-duplication, clustering, etc. In this paper we propose a new and simple method to speed up the widely-used Euclidean realization of LSH. At the heart of our method is a fast way to estimate the Euclidean distance between two d-dimensional vectors; this is achieved by the use of randomized Hadamard transforms in a non-linear setting. This decreases the running time of a (k, L)-parameterized LSH from O(dkL) to O(dlog d + kL). Our experiments show that using the new LSH in nearest-neighbor applications can improve their running times by significant amounts. To the best of our knowledge, this is the first running time improvement to LSH that is both provable and practical.",
"title": ""
},
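For context, the passage's O(dkL) baseline is the classical Euclidean (p-stable) LSH, whose (k, L) parameterization the Hadamard trick accelerates. A minimal sketch of that baseline, with assumed parameter values (the accelerated variant itself is not reproduced here):

```python
import numpy as np

def make_e2lsh_tables(dim, k, L, w=4.0, seed=0):
    """Classical Euclidean LSH (the O(dkL) baseline the passage speeds up):
    each of the L tables concatenates k hashes h(x) = floor((a.x + b) / w)."""
    rng = np.random.default_rng(seed)
    return [(rng.normal(size=(k, dim)), rng.uniform(0, w, size=k)) for _ in range(L)]

def hash_point(x, tables, w=4.0):
    keys = []
    for A, b in tables:
        keys.append(tuple(np.floor((A @ x + b) / w).astype(int)))
    return keys

# Usage: bucket keys for one query point under 10 tables of 8 hashes each.
tables = make_e2lsh_tables(dim=128, k=8, L=10)
print(hash_point(np.random.default_rng(1).normal(size=128), tables)[0])
```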
{
"docid": "efcf84406a2218deeb4ca33cb8574172",
"text": "Cross-site scripting attacks represent one of the major security threats in today’s Web applications. Current approaches to mitigate cross-site scripting vulnerabilities rely on either server-based or client-based defense mechanisms. Although effective for many attacks, server-side protection mechanisms may leave the client vulnerable if the server is not well patched. On the other hand, client-based mechanisms may incur a significant overhead on the client system. In this work, we present a hybrid client-server solution that combines the benefits of both architectures. Our Proxy-based solution leverages the strengths of both anomaly detection and control flow analysis to provide accurate detection. We demonstrate the feasibility and accuracy of our approach through extended testing using real-world cross-site scripting exploits.",
"title": ""
},
{
"docid": "2f88356c3a1ab60e3dd084f7d9630c70",
"text": "Recently, some E-commerce sites launch a new interaction box called Tips on their mobile apps. Users can express their experience and feelings or provide suggestions using short texts typically several words or one sentence. In essence, writing some tips and giving a numerical rating are two facets of a user's product assessment action, expressing the user experience and feelings. Jointly modeling these two facets is helpful for designing a better recommendation system. While some existing models integrate text information such as item specifications or user reviews into user and item latent factors for improving the rating prediction, no existing works consider tips for improving recommendation quality. We propose a deep learning based framework named NRT which can simultaneously predict precise ratings and generate abstractive tips with good linguistic quality simulating user experience and feelings. For abstractive tips generation, gated recurrent neural networks are employed to \"translate'' user and item latent representations into a concise sentence. Extensive experiments on benchmark datasets from different domains show that NRT achieves significant improvements over the state-of-the-art methods. Moreover, the generated tips can vividly predict the user experience and feelings.",
"title": ""
},
{
"docid": "6341eaeb32d0e25660de6be6d3943e81",
"text": "Theorists have speculated that primary psychopathy (or Factor 1 affective-interpersonal features) is prominently heritable whereas secondary psychopathy (or Factor 2 social deviance) is more environmentally determined. We tested this differential heritability hypothesis using a large adolescent twin sample. Trait-based proxies of primary and secondary psychopathic tendencies were assessed using Multidimensional Personality Questionnaire (MPQ) estimates of Fearless Dominance and Impulsive Antisociality, respectively. The environmental contexts of family, school, peers, and stressful life events were assessed using multiple raters and methods. Consistent with prior research, MPQ Impulsive Antisociality was robustly associated with each environmental risk factor, and these associations were significantly greater than those for MPQ Fearless Dominance. However, MPQ Fearless Dominance and Impulsive Antisociality exhibited similar heritability, and genetic effects mediated the associations between MPQ Impulsive Antisociality and the environmental measures. Results were largely consistent across male and female twins. We conclude that gene-environment correlations rather than main effects of genes and environments account for the differential environmental correlates of primary and secondary psychopathy.",
"title": ""
},
{
"docid": "4bce473bb65dfc545d5895c7edb6cea6",
"text": "mathematical framework of the population equations. It will turn out that the results are – of course – consistent with those derived from the population equation. We study a homogeneous network of N identical neurons which are mutually coupled with strength wij = J0/N where J0 > 0 is a positive constant. In other words, the (excitatory) interaction is scaled with one over N so that the total input to a neuron i is of order one even if the number of neurons is large (N →∞). Since we are interested in synchrony we suppose that all neurons have fired simultaneously at t̂ = 0. When will the neurons fire again? Since all neurons are identical we expect that the next firing time will also be synchronous. Let us calculate the period T between one synchronous pulse and the next. We start from the firing condition of SRM0 neurons θ = ui(t) = η(t− t̂i) + ∑",
"title": ""
},
{
"docid": "17caec370a97af736d948123f9e7be73",
"text": "Multiple-purpose forensics has been attracting increasing attention worldwide. However, most of the existing methods based on hand-crafted features often require domain knowledge and expensive human labour and their performances can be affected by factors such as image size and JPEG compression. Furthermore, many anti-forensic techniques have been applied in practice, making image authentication more difficult. Therefore, it is of great importance to develop methods that can automatically learn general and robust features for image operation detectors with the capability of countering anti-forensics. In this paper, we propose a new convolutional neural network (CNN) approach for multi-purpose detection of image manipulations under anti-forensic attacks. The dense connectivity pattern, which has better parameter efficiency than the traditional pattern, is explored to strengthen the propagation of general features related to image manipulation detection. When compared with three state-of-the-art methods, experiments demonstrate that the proposed CNN architecture can achieve a better performance (i.e., with a 11% improvement in terms of detection accuracy under anti-forensic attacks). The proposed method can also achieve better robustness against JPEG compression with maximum improvement of 13% on accuracy under low-quality JPEG compression.",
"title": ""
},
{
"docid": "36e238fa3c85b41a062d08fd9844c9be",
"text": "Building generalization is a difficult operation due to the complexity of the spatial distribution of buildings and for reasons of spatial recognition. In this study, building generalization is decomposed into two steps, i.e. building grouping and generalization execution. The neighbourhood model in urban morphology provides global constraints for guiding the global partitioning of building sets on the whole map by means of roads and rivers, by which enclaves, blocks, superblocks or neighbourhoods are formed; whereas the local constraints from Gestalt principles provide criteria for the further grouping of enclaves, blocks, superblocks and/or neighbourhoods. In the grouping process, graph theory, Delaunay triangulation and the Voronoi diagram are employed as supporting techniques. After grouping, some useful information, such as the sum of the building’s area, the mean separation and the standard deviation of the separation of buildings, is attached to each group. By means of the attached information, an appropriate operation is selected to generalize the corresponding groups. Indeed, the methodology described brings together a number of welldeveloped theories/techniques, including graph theory, Delaunay triangulation, the Voronoi diagram, urban morphology and Gestalt theory, in such a way that multiscale products can be derived.",
"title": ""
},
{
"docid": "f4fb4638bb8bc6ae551dc729b6bcea2e",
"text": "mark of facial attractiveness.1,2 Skeletal asymmetries generally require surgical intervention to improve facial esthetics and correct any associated malocclusions. The classic approach in volves a presurgical phase of orthodontics, during which dental compensations are eliminated, and a postsurgical phase to refine the occlusion. The presurgical phase can be lengthy, involving tooth decompensations that often exaggerate the existing dentofacial deformities.3 Skeletal anchorage now makes it possible to eliminate the presurgical orthodontic phase and to correct minor surgical inaccuracies and relapse tendencies after surgery. In addition to a significant reduction in treatment time, this approach offers immediate gratification in the correction of facial deformities,2 which can translate into better patient compliance with elastic wear and appointments. Another reported advantage is the elimination of soft-tissue imbalances that might interfere with ortho dontic tooth movements. This article describes a “surgery first” approach in a patient with complex dentofacial asymmetry and Class III malocclusion.",
"title": ""
}
] |
scidocsrr
|
01706b96302e253b3ec0ab8e25b13449
|
Where you Instagram?: Associating Your Instagram Photos with Points of Interest
|
[
{
"docid": "bd33ed4cde24e8ec16fb94cf543aad8e",
"text": "Users' locations are important to many applications such as targeted advertisement and news recommendation. In this paper, we focus on the problem of profiling users' home locations in the context of social network (Twitter). The problem is nontrivial, because signals, which may help to identify a user's location, are scarce and noisy. We propose a unified discriminative influence model, named as UDI, to solve the problem. To overcome the challenge of scarce signals, UDI integrates signals observed from both social network (friends) and user-centric data (tweets) in a unified probabilistic framework. To overcome the challenge of noisy signals, UDI captures how likely a user connects to a signal with respect to 1) the distance between the user and the signal, and 2) the influence scope of the signal. Based on the model, we develop local and global location prediction methods. The experiments on a large scale data set show that our methods improve the state-of-the-art methods by 13%, and achieve the best performance.",
"title": ""
}
] |
[
{
"docid": "37efaf5cbd7fb400b713db6c7c980d76",
"text": "Social media users who post bullying related tweets may later experience regret, potentially causing them to delete their posts. In this paper, we construct a corpus of bullying tweets and periodically check the existence of each tweet in order to infer if and when it becomes deleted. We then conduct exploratory analysis in order to isolate factors associated with deleted posts. Finally, we propose the construction of a regrettable posts predictor to warn users if a tweet might cause regret.",
"title": ""
},
{
"docid": "0de9ea0f7ee162f1def6ee9b95ea9ba3",
"text": "While much exciting progress is being made in mobile visual search, one important question has been left unexplored in all current systems. When the first query fails to find the right target (up to 50% likelihood), how should the user form his/her search strategy in the subsequent interaction? In this paper, we propose a novel Active Query Sensing system to suggest the best way for sensing the surrounding scenes while forming the second query for location search. We accomplish the goal by developing several unique components -- an offline process for analyzing the saliency of the views associated with each geographical location based on score distribution modeling, predicting the visual search precision of individual views and locations, estimating the view of an unseen query, and suggesting the best subsequent view change. Using a scalable visual search system implemented over a NYC street view data set (0.3 million images), we show a performance gain as high as two folds, reducing the failure rate of mobile location search to only 12% after the second query. This work may open up an exciting new direction for developing interactive mobile media applications through innovative exploitation of active sensing and query formulation.",
"title": ""
},
{
"docid": "c6054c39b9b36b5d446ff8da3716ec30",
"text": "The Web is a constantly expanding global information space that includes disparate types of data and resources. Recent trends demonstrate the urgent need to manage the large amounts of data stream, especially in specific domains of application such as critical infrastructure systems, sensor networks, log file analysis, search engines and more recently, social networks. All of these applications involve large-scale data-intensive tasks, often subject to time constraints and space complexity. Algorithms, data management and data retrieval techniques must be able to process data stream, i.e., process data as it becomes available and provide an accurate response, based solely on the data stream that has already been provided. Data retrieval techniques often require traditional data storage and processing approach, i.e., all data must be available in the storage space in order to be processed. For instance, a widely used relevance measure is Term Frequency–Inverse Document Frequency (TF–IDF), which can evaluate how important a word is in a collection of documents and requires to a priori know the whole dataset. To address this problem, we propose an approximate version of the TF–IDF measure suitable to work on continuous data stream (such as the exchange of messages, tweets and sensor-based log files). The algorithm for the calculation of this measure makes two assumptions: a fast response is required, and memory is both limited and infinitely smaller than the size of the data stream. In addition, to face the great computational power required to process massive data stream, we present also a parallel implementation of the approximate TF–IDF calculation using Graphical Processing Units (GPUs). This implementation of the algorithm was tested on generated and real data stream and was able to capture the most frequent terms. Our results demonstrate that the approximate version of the TF–IDF measure performs at a level that is comparable to the solution of the precise TF–IDF measure. 2014 Elsevier Inc. All rights reserved.",
"title": ""
},
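The passage's approximate TF-IDF works on a stream under tight memory; the exact algorithm is not given in the abstract, so the sketch below only illustrates one way such an approximation could be built, using a count-min sketch for document frequencies (structure and constants are assumptions):

```python
import hashlib, math

class CountMinSketch:
    # Bounded-memory frequency estimator, used here to approximate document frequencies.
    def __init__(self, width=2048, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _hashes(self, item):
        for i in range(self.depth):
            h = hashlib.md5(f"{i}:{item}".encode()).hexdigest()
            yield i, int(h, 16) % self.width

    def add(self, item):
        for i, j in self._hashes(item):
            self.table[i][j] += 1

    def estimate(self, item):
        return min(self.table[i][j] for i, j in self._hashes(item))

sketch, n_docs = CountMinSketch(), 0

def observe_document(tokens):
    global n_docs
    n_docs += 1
    for term in set(tokens):          # document frequency: count each term once per doc
        sketch.add(term)

def approx_tf_idf(term, tokens):
    tf = tokens.count(term) / len(tokens)
    df = max(1, sketch.estimate(term))
    return tf * math.log((n_docs + 1) / df)

# Usage on a toy stream of two "documents".
observe_document("the cat sat on the mat".split())
observe_document("the dog sat".split())
print(approx_tf_idf("cat", "the cat sat on the mat".split()))
```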
{
"docid": "fe35799be26543a90b4d834e41b492eb",
"text": "Social Web stands for the culture of participation and collaboration on the Web. Structures emerge from social interactions: social tagging enables a community of users to assign freely chosen keywords to Web resources. The structure that evolves from social tagging is called folksonomy and recent research has shown that the exploitation of folksonomy structures is beneficial to information systems. In this thesis we propose models that better capture usage context of social tagging and develop two folksonomy systems that allow for the deduction of contextual information from tagging activities. We introduce a suite of ranking algorithms that exploit contextual information embedded in folksonomy structures and prove that these contextsensitive ranking algorithms significantly improve search in Social Web systems. We setup a framework of user modeling and personalization methods for the Social Web and evaluate this framework in the scope of personalized search and social recommender systems. Extensive evaluation reveals that our context-based user modeling techniques have significant impact on the personalization quality and clearly improve regular user modeling approaches. Finally, we analyze the nature of user profiles distributed on the Social Web, implement a service that supports cross-system user modeling and investigate the impact of cross-system user modeling methods on personalization. In different experiments we prove that our cross-system user modeling strategies solve cold-start problems in social recommender systems and that intelligent re-use of external profile information improves the recommendation quality also beyond the cold-start.",
"title": ""
},
{
"docid": "a8164a657a247761147c6012fd5442c9",
"text": "Latent variable models are a powerful tool for addressing several tasks in machine learning. However, the algorithms for learning the parameters of latent variable models are prone to getting stuck in a bad local optimum. To alleviate this problem, we build on the intuition that, rather than considering all samples simultaneously, the algorithm should be presented with the training data in a meaningful order that facilitates learning. The order of the samples is determined by how easy they are. The main challenge is that typically we are not provided with a readily computable measure of the easiness of samples. We address this issue by proposing a novel, iterative self-paced learning algorithm where each iteration simultaneously selects easy samples and learns a new parameter vector. The number of samples selected is governed by a weight that is annealed until the entire training data has been considered. We empirically demonstrate that the self-paced learning algorithm outperforms the state of the art method for learning a latent structural SVM on four applications: object localization, noun phrase coreference, motif finding and handwritten digit recognition.",
"title": ""
},
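The self-paced learning loop described above alternates between selecting samples whose loss falls below a threshold and refitting, while annealing the threshold until all data is used. A toy sketch for least-squares regression (the loss, threshold schedule and constants are illustrative assumptions, not the paper's latent structural SVM setting):

```python
import numpy as np

def self_paced_least_squares(X, y, lam0=0.5, growth=1.3, rounds=10):
    """Toy self-paced learning for linear regression: at each round, keep only
    samples whose current loss is below 1/lam (the 'easy' ones), refit on them,
    then anneal lam so that the threshold grows and more samples are admitted."""
    w = np.zeros(X.shape[1])
    lam = lam0
    for _ in range(rounds):
        losses = (X @ w - y) ** 2
        easy = losses <= 1.0 / lam            # v_i = 1 for easy samples, 0 otherwise
        if easy.sum() == 0:
            easy = losses <= np.median(losses)
        w, *_ = np.linalg.lstsq(X[easy], y[easy], rcond=None)
        lam /= growth                         # decreasing lam raises the loss threshold
    return w

# Usage on synthetic data with a few gross outliers.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=200)
y[:5] += 20.0
print(self_paced_least_squares(X, y))
```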
{
"docid": "3a84567c28d6a59271334594307263a5",
"text": "Comprehension difficulty was rated for metaphors of the form Noun1-is-aNoun2; in addition, participants completed frames of the form Noun1-is-________ with their literal interpretation of the metaphor. Metaphor comprehension was simulated with a computational model based on Latent Semantic Analysis. The model matched participants’ interpretations for both easy and difficult metaphors. When interpreting easy metaphors, both the participants and the model generated highly consistent responses. When interpreting difficult metaphors, both the participants and the model generated disparate responses.",
"title": ""
},
{
"docid": "d9440b9ba13c1c5ccae80b0d513b5330",
"text": "Endogenous cannabinoids play an important role in the physiology and behavioral expression of stress responses. Activation of the hypothalamic-pituitary-adrenal (HPA) axis, including the release of glucocorticoids, is the fundamental hormonal response to stress. Endocannabinoid (eCB) signaling serves to maintain HPA-axis homeostasis, by buffering basal activity as well as by mediating glucocorticoid fast feedback mechanisms. Following chronic stressor exposure, eCBs are also involved in physiological and behavioral habituation processes. Behavioral consequences of stress include fear and stress-induced anxiety as well as memory formation in the context of stress, involving contextual fear conditioning and inhibitory avoidance learning. Chronic stress can also lead to depression-like symptoms. Prominent in these behavioral stress responses is the interaction between eCBs and the HPA-axis. Future directions may differentiate among eCB signaling within various brain structures/neuronal subpopulations as well as between the distinct roles of the endogenous cannabinoid ligands. Investigation into the role of the eCB system in allostatic states and recovery processes may give insight into possible therapeutic manipulations of the system in treating chronic stress-related conditions in humans.",
"title": ""
},
{
"docid": "3256b2050c603ca16659384a0e98a22c",
"text": "In this paper, we propose a Hough transform-based method to identify low-contrast defects in unevenly illuminated images, and especially focus on the inspection of mura defects in liquid crystal display (LCD) panels. The proposed method works on 1-D gray-level profiles in the horizontal and vertical directions of the surface image. A point distinctly deviated from the ideal line of a profile can be identified as a defect one. A 1-D gray-level profile in the unevenly illuminated image results in a nonstationary line signal. The most commonly used technique for straight line detection in a noisy image is Hough transform (HT). The standard HT requires a sufficient number of points lie exactly on the same straight line at a given parameter resolution so that the accumulator will show a distinct peak in the parameter space. It fails to detect a line in a nonstationary signal. In the proposed HT scheme, the points that contribute to the vote do not have to lie on a line. Instead, a distance tolerance to the line sought is first given. Any point with the distance to the line falls within the tolerance will be accumulated by taking the distance as the voting weight. A fast search procedure to tighten the possible ranges of line parameters is also proposed for mura detection in LCD images.",
"title": ""
},
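The proposed Hough scheme lets any point within a distance tolerance of a candidate line vote, with the vote weighted by that distance. The sketch below illustrates the tolerance-based voting idea with a closeness weight (1 - d/tol); the paper's exact weighting and its 1-D gray-level profile preprocessing are not reproduced:

```python
import numpy as np

def tolerant_hough_line(points, dist_tol=2.0, n_theta=180, rho_res=1.0):
    """Hough line detection where any point within `dist_tol` of a candidate
    line (rho, theta) votes, weighted by its closeness to that line."""
    pts = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    max_rho = np.hypot(pts[:, 0].max(), pts[:, 1].max()) + dist_tol
    rhos = np.arange(-max_rho, max_rho, rho_res)
    acc = np.zeros((len(rhos), len(thetas)))
    for x, y in pts:
        for j, t in enumerate(thetas):
            r = x * np.cos(t) + y * np.sin(t)   # rho value of the line through this point
            d = np.abs(rhos - r)                # distance from the point to each candidate line
            within = d <= dist_tol
            acc[within, j] += 1.0 - d[within] / dist_tol
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return rhos[i], thetas[j]

# Usage: noisy points roughly on the line y = 0.5 x + 3.
xs = np.arange(0, 50, 1.0)
ys = 0.5 * xs + 3 + np.random.default_rng(0).normal(0, 0.8, xs.size)
print(tolerant_hough_line(np.column_stack([xs, ys])))
```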
{
"docid": "3d4afb9ed09fbb6200175e2440b56755",
"text": "A brief account is given of the discovery of abscisic acid (ABA) in roots and root caps of higher plants as well as the techniques by which ABA may be demonstrated in these tissues. The remainder of the review is concerned with examining the rôle of ABA in the regulation of root growth. In this regard, it is well established that when ABA is supplied to roots their elongation is usually inhibited, although at low external concentrations a stimulation of growth may also be found. Fewer observations have been directed at exploring the connection between root growth and the level of naturally occurring, endogenous ABA. Nevertheless, the evidence here also suggests that ABA is an inhibitory regulator of root growth. Moreover, ABA appears to be involved in the differential growth that arises in response to a gravitational stimulus. Recent reports that deny a rôle for ABA in root gravitropism are considered inconclusive. The response of roots to osmotic stress and the changes in ABA levels which ensue, are summarised; so are the interrelations between ABA and other hormones, particularly auxin (e.g. indoleacetic acid); both are considered in the context of the root growth and development. Quantitative changes in auxin and ABA levels may together provide the root with a flexible means of regulating its growth.",
"title": ""
},
{
"docid": "557b718f65e68f3571302e955ddb74d7",
"text": "Synthetic aperture radar (SAR) has been an unparalleled tool in cloudy and rainy regions as it allows observations throughout the year because of its all-weather, all-day operation capability. In this paper, the influence of Wenchuan Earthquake on the Sichuan Giant Panda habitats was evaluated for the first time using SAR interferometry and combining data from C-band Envisat ASAR and L-band ALOS PALSAR data. Coherence analysis based on the zero-point shifting indicated that the deforestation process was significant, particularly in habitats along the Min River approaching the epicenter after the natural disaster, and as interpreted by the vegetation deterioration from landslides, avalanches and debris flows. Experiments demonstrated that C-band Envisat ASAR data were sensitive to vegetation, resulting in an underestimation of deforestation; in contrast, L-band PALSAR data were capable of evaluating the deforestation process owing to a better penetration and the significant coherence gain on damaged forest areas. The percentage of damaged forest estimated by PALSAR decreased from 20.66% to 17.34% during 2009–2010, implying an approximate 3% recovery rate of forests in the earthquake OPEN ACCESS Remote Sens. 2014, 6 6284 impacted areas. This study proves that long-wavelength SAR interferometry is promising for rapid assessment of disaster-induced deforestation, particularly in regions where the optical acquisition is constrained.",
"title": ""
},
{
"docid": "c0b30475f78acefae1c15f9f5d6dc57b",
"text": "Traditionally, autonomous cars make predictions about other drivers’ future trajectories, and plan to stay out of their way. This tends to result in defensive and opaque behaviors. Our key insight is that an autonomous car’s actions will actually affect what other cars will do in response, whether the car is aware of it or not. Our thesis is that we can leverage these responses to plan more efficient and communicative behaviors. We model the interaction between an autonomous car and a human driver as a dynamical system, in which the robot’s actions have immediate consequences on the state of the car, but also on human actions. We model these consequences by approximating the human as an optimal planner, with a reward function that we acquire through Inverse Reinforcement Learning. When the robot plans with this reward function in this dynamical system, it comes up with actions that purposefully change human state: it merges in front of a human to get them to slow down or to reach its own goal faster; it blocks two lanes to get them to switch to a third lane; or it backs up slightly at an intersection to get them to proceed first. Such behaviors arise from the optimization, without relying on hand-coded signaling strategies and without ever explicitly modeling communication. Our user study results suggest that the robot is indeed capable of eliciting desired changes in human state by planning using this dynamical system.",
"title": ""
},
{
"docid": "bbfc488e55fe2dfaff2af73a75c31edd",
"text": "This overview covers a wide range of cannabis topics, initially examining issues in dispensaries and self-administration, plus regulatory requirements for production of cannabis-based medicines, particularly the Food and Drug Administration \"Botanical Guidance.\" The remainder pertains to various cannabis controversies that certainly require closer examination if the scientific, consumer, and governmental stakeholders are ever to reach consensus on safety issues, specifically: whether botanical cannabis displays herbal synergy of its components, pharmacokinetics of cannabis and dose titration, whether cannabis medicines produce cyclo-oxygenase inhibition, cannabis-drug interactions, and cytochrome P450 issues, whether cannabis randomized clinical trials are properly blinded, combatting the placebo effect in those trials via new approaches, the drug abuse liability (DAL) of cannabis-based medicines and their regulatory scheduling, their effects on cognitive function and psychiatric sequelae, immunological effects, cannabis and driving safety, youth usage, issues related to cannabis smoking and vaporization, cannabis concentrates and vape-pens, and laboratory analysis for contamination with bacteria and heavy metals. Finally, the issue of pesticide usage on cannabis crops is addressed. New and disturbing data on pesticide residues in legal cannabis products in Washington State are presented with the observation of an 84.6% contamination rate including potentially neurotoxic and carcinogenic agents. With ongoing developments in legalization of cannabis in medical and recreational settings, numerous scientific, safety, and public health issues remain.",
"title": ""
},
{
"docid": "ceb4563a83fc49e5aceac7b56a8d63c0",
"text": "PURPOSE\nThe literature has shown that anterior cruciate ligament (ACL) tear rates vary by gender, by sport, and in response to injury-reduction training programs. However, there is no consensus as to the magnitudes of these tear rates or their variations as a function of these variables. For example, the female-male ACL tear ratio has been reported to be as high as 9:1. Our purpose was to apply meta-analysis to the entire applicable literature to generate accurate estimates of the true incidences of ACL tear as a function of gender, sport, and injury-reduction training.\n\n\nMETHODS\nA PubMed literature search was done to identify all studies dealing with ACL tear incidence. Bibliographic cross-referencing was done to identify additional articles. Meta-analytic principles were applied to generate ACL incidences as a function of gender, sport, and prior injury-reduction training.\n\n\nRESULTS\nFemale-male ACL tear incidences ratios were as follows: basketball, 3.5; soccer, 2.67; lacrosse, 1.18; and Alpine skiing, 1.0. The collegiate soccer tear rate was 0.32 for female subjects and 0.12 for male subjects. For basketball, the rates were 0.29 and 0.08, respectively. The rate for recreational Alpine skiers was 0.63, and that for experts was 0.03, with no gender variance. The two volleyball studies had no ACL tears. Training reduced the ACL tear incidence in soccer by 0.24 but did not reduce it at all in basketball.\n\n\nCONCLUSIONS\nFemale subjects had a roughly 3 times greater incidence of ACL tears in soccer and basketball versus male subjects. Injury-reduction programs were effective for soccer but not basketball. Recreational Alpine skiers had the highest incidences of ACL tear, whereas expert Alpine skiers had the lowest incidences. Volleyball may in fact be a low-risk sport rather than a high-risk sport. Alpine skiers and lacrosse players had no gender difference for ACL tear rate. Year-round female athletes who play soccer and basketball have an ACL tear rate of approximately 5%.\n\n\nLEVEL OF EVIDENCE\nLevel IV, therapeutic case series.",
"title": ""
},
{
"docid": "a6aa10b5adcf3241157919cb0e6863e9",
"text": "Current neural networks are accumulating accolades for their performance on a variety of real-world computational tasks including recognition, classification, regression, and prediction, yet there are few scalable architectures that have emerged to address the challenges posed by their computation. This paper introduces Minitaur, an event-driven neural network accelerator, which is designed for low power and high performance. As an field-programmable gate array-based system, it can be integrated into existing robotics or it can offload computationally expensive neural network tasks from the CPU. The version presented here implements a spiking deep network which achieves 19 million postsynaptic currents per second on 1.5 W of power and supports up to 65 K neurons per board. The system records 92% accuracy on the MNIST handwritten digit classification and 71% accuracy on the 20 newsgroups classification data set. Due to its event-driven nature, it allows for trading off between accuracy and latency.",
"title": ""
},
{
"docid": "eccd1b3b8acbf8426d7ccb7933e0bd0e",
"text": "We consider an architecture for a serverless distributed file system that does not assume mutual trust among the client computers. The system provides security, availability, and reliability by distributing multiple encrypted replicas of each file among the client machines. To assess the feasibility of deploying this system on an existing desktop infrastructure, we measure and analyze a large set of client machines in a commercial environment. In particular, we measure and report results on disk usage and content; file activity; and machine uptimes, lifetimes, and loads. We conclude that the measured desktop infrastructure would passably support our proposed system, providing availability on the order of one unfilled file request per user per thousand days.",
"title": ""
},
{
"docid": "5a416fb88c3f5980989f7556fb19755c",
"text": "Cloud computing helps to share data and provide many resources to users. Users pay only for those resources as much they used. Cloud computing stores the data and distributed resources in the open environment. The amount of data storage increases quickly in open environment. So, load balancing is a main challenge in cloud environment. Load balancing is helped to distribute the dynamic workload across multiple nodes to ensure that no single node is overloaded. It helps in proper utilization of resources .It also improve the performance of the system. Many existing algorithms provide load balancing and better resource utilization. There are various types load are possible in cloud computing like memory, CPU and network load. Load balancing is the process of finding overloaded nodes and then transferring the extra load to other nodes.",
"title": ""
},
{
"docid": "64e26b00bba3bba8d2ab77b44f049c58",
"text": "The transmission properties of a folded corrugated substrate integrated waveguide (FCSIW) and a proposed half-mode FCSIW is investigated. For the same cut-off frequency, these structures have similar performance to CSIW and HMCSIW respectively, but with significantly reduced width. The top wall is isolated from the bottom wall at DC thereby permitting active devices to be connected directly to, and biased through them. Arrays of quarter-wave stubs above the top wall allow TE1,0 mode conduction currents to flow between the top and side walls. Measurements and simulations of waveguides designed to have a nominal cut-off frequency of 3 GHz demonstrate the feasibility of these compact waveguides.",
"title": ""
},
{
"docid": "bd817e69a03da1a97e9c412b5e09eb33",
"text": "The emergence of carbapenemase producing bacteria, especially New Delhi metallo-β-lactamase (NDM-1) and its variants, worldwide, has raised amajor public health concern. NDM-1 hydrolyzes a wide range of β-lactam antibiotics, including carbapenems, which are the last resort of antibiotics for the treatment of infections caused by resistant strain of bacteria. In this review, we have discussed bla NDM-1variants, its genetic analysis including type of specific mutation, origin of country and spread among several type of bacterial species. Wide members of enterobacteriaceae, most commonly Escherichia coli, Klebsiella pneumoniae, Enterobacter cloacae, and gram-negative non-fermenters Pseudomonas spp. and Acinetobacter baumannii were found to carry these markers. Moreover, at least seventeen variants of bla NDM-type gene differing into one or two residues of amino acids at distinct positions have been reported so far among different species of bacteria from different countries. The genetic and structural studies of these variants are important to understand the mechanism of antibiotic hydrolysis as well as to design new molecules with inhibitory activity against antibiotics. This review provides a comprehensive view of structural differences among NDM-1 variants, which are a driving force behind their spread across the globe.",
"title": ""
},
{
"docid": "c346ddfd1247d335c1a45d094ae2bb60",
"text": "In this paper we introduce a novel approach for stereoscopic rendering of virtual environments with a wide Field-of-View (FoV) up to 360°. Handling such a wide FoV implies the use of non-planar projections and generates specific problems such as for rasterization and clipping of primitives. We propose a novel pre-clip stage specifically adapted to geometric approaches for which problems occur with polygons spanning across the projection discontinuities. Our approach integrates seamlessly with immersive virtual reality systems as it is compatible with stereoscopy, head-tracking, and multi-surface projections. The benchmarking of our approach with different hardware setups could show that it is well compliant with real-time constraint, and capable of displaying a wide range of FoVs. Thus, our geometric approach could be used in various VR applications in which the user needs to extend the FoV and apprehend more visual information.",
"title": ""
},
{
"docid": "2595c67531f0da4449f5914cac3488a7",
"text": "In this paper we present a novel interaction metaphor for handheld projectors we label MotionBeam. We detail a number of interaction techniques that utilize the physical movement of a handheld projector to better express the motion and physicality of projected objects. Finally we present the first iteration of a projected character design that uses the MotionBeam metaphor for user interaction.",
"title": ""
}
] |
scidocsrr
|
25afdb1b2b378c785549be2a014bb21a
|
Hierarchical Aligned Cluster Analysis for Temporal Clustering of Human Motion
|
[
{
"docid": "32598fba1f5e7507113d89ad1978e867",
"text": "Good motion data is costly to create. Such an expense often makes the reuse of motion data through transformation and retargetting a more attractive option than creating new motion from scratch. Reuse requires the ability to search automatically and efficiently a growing corpus of motion data, which remains a difficult open problem. We present a method for quickly searching long, unsegmented motion clips for subregions that most closely match a short query clip. Our search algorithm is based on a weighted PCA-based pose representation that allows for flexible and efficient pose-to-pose distance calculations. We present our pose representation and the details of the search algorithm. We evaluate the performance of a prototype search application using both synthetic and captured motion data. Using these results, we propose ways to improve the application's performance. The results inform a discussion of the algorithm's good scalability characteristics.",
"title": ""
}
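The passage's search relies on a weighted PCA pose representation and pose-to-pose distances computed in that space. A rough sketch of that general idea (the joint weights, component count and sliding-window matching below are assumptions, not the authors' exact formulation):

```python
import numpy as np

def fit_weighted_pca(poses, n_components=8, joint_weights=None):
    """poses: (num_frames, num_dofs) joint-angle vectors. Joint weights (assumed,
    e.g. emphasising torso over fingers) are applied before the PCA projection."""
    w = np.ones(poses.shape[1]) if joint_weights is None else joint_weights
    X = poses * w
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components], w

def project(poses, mean, basis, w):
    return (poses * w - mean) @ basis.T

def best_matching_window(query, clip, mean, basis, w):
    """Slide the query over the long clip and return the start frame whose
    summed pose-to-pose distance in PCA space is smallest."""
    q = project(query, mean, basis, w)
    c = project(clip, mean, basis, w)
    n = len(q)
    costs = [np.linalg.norm(c[s:s + n] - q) for s in range(len(c) - n + 1)]
    return int(np.argmin(costs)), min(costs)

# Usage with synthetic 30-DOF poses; the "clip" contains the query at frame 100.
rng = np.random.default_rng(0)
clip = rng.normal(size=(500, 30))
query = clip[100:130] + 0.01 * rng.normal(size=(30, 30))
mean, basis, w = fit_weighted_pca(clip)
print(best_matching_window(query, clip, mean, basis, w))
```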
] |
[
{
"docid": "f03e6476b531ca1ffc2967158faabe58",
"text": "This article may be used for research, teaching and private study purposes. Any substantial or systematic reproduction, redistribution , reselling , loan or sub-licensing, systematic supply or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material. To strive for sustainability under today's intense business competition, organisations apply technology roadmapping (TRM) as a strategic planning tool to align their technology strategies with business strategies. Many organisations desire to integrate TRM into an ongoing strategic planning process. The consequences of TRM implementation can lead to some changes in the business process, organisational structure, or even working culture. Applying a change management approach will help organisations to understand the basic elements that an individual needs so that some challenges can be addressed in advance before adopting the TRM process. This paper proposes a practical guideline to implement technology roadmapping along with a case example.",
"title": ""
},
{
"docid": "ca75798a9090810682f99400f6a8ff4e",
"text": "We present the first empirical analysis of Bitcoin-based scams: operations established with fraudulent intent. By amalgamating reports gathered by voluntary vigilantes and tracked in online forums, we identify 192 scams and categorize them into four groups: Ponzi schemes, mining scams, scam wallets and fraudulent exchanges. In 21% of the cases, we also found the associated Bitcoin addresses, which enables us to track payments into and out of the scams. We find that at least $11 million has been contributed to the scams from 13 000 distinct victims. Furthermore, we present evidence that the most successful scams depend on large contributions from a very small number of victims. Finally, we discuss ways in which the scams could be countered.",
"title": ""
},
{
"docid": "a6e35b743c2cfd2cd764e5ad83decaa7",
"text": "An e-vendor’s website inseparably embodies an interaction with the vendor and an interaction with the IT website interface. Accordingly, research has shown two sets of unrelated usage antecedents by customers: 1) customer trust in the e-vendor and 2) customer assessments of the IT itself, specifically the perceived usefulness and perceived ease-of-use of the website as depicted in the technology acceptance model (TAM). Research suggests, however, that the degree and impact of trust, perceived usefulness, and perceived ease of use change with experience. Using existing, validated scales, this study describes a free-simulation experiment that compares the degree and relative importance of customer trust in an e-vendor vis-à-vis TAM constructs of the website, between potential (i.e., new) customers and repeat (i.e., experienced) ones. The study found that repeat customers trusted the e-vendor more, perceived the website to be more useful and easier to use, and were more inclined to purchase from it. The data also show that while repeat customers’ purchase intentions were influenced by both their trust in the e-vendor and their perception that the website was useful, potential customers were not influenced by perceived usefulness, but only by their trust in the e-vendor. Implications of this apparent trust-barrier and guidelines for practice are discussed.",
"title": ""
},
{
"docid": "cf8dfff6a026fc3bb4248cd813af9947",
"text": "We consider a multi agent optimization problem where a network of agents collectively solves a global optimization problem with the objective function given by the sum of locally known convex functions. We propose a fully distributed broadcast-based Alternating Direction Method of Multipliers (ADMM), in which each agent broadcasts the outcome of his local processing to all his neighbors. We show that both the objective function values and the feasibility violation converge with rate O(1/T), where T is the number of iterations. This improves upon the O(1/√T) convergence rate of subgradient-based methods. We also characterize the effect of network structure and the choice of communication matrix on the convergence speed. Because of its broadcast nature, the storage requirements of our algorithm are much more modest compared to the distributed algorithms that use pairwise communication between agents.",
"title": ""
},
{
"docid": "a96209a2f6774062537baff5d072f72f",
"text": "In recent years, extensive research has been conducted in the area of Service Level Agreement (SLA) for utility computing systems. An SLA is a formal contract used to guarantee that consumers’ service quality expectation can be achieved. In utility computing systems, the level of customer satisfaction is crucial, making SLAs significantly important in these environments. Fundamental issue is the management of SLAs, including SLA autonomy management or trade off among multiple Quality of Service (QoS) parameters. Many SLA languages and frameworks have been developed as solutions; however, there is no overall classification for these extensive works. Therefore, the aim of this chapter is to present a comprehensive survey of how SLAs are created, managed and used in utility computing environment. We discuss existing use cases from Grid and Cloud computing systems to identify the level of SLA realization in state-of-art systems and emerging challenges for future research.",
"title": ""
},
{
"docid": "e2718438e96defc0d6e07ccb50c5c089",
"text": "In this paper, rotor slits on the rotor's outer circumference is adopted to reduce certain harmonic components of radial forces and, hence, acoustic noise and vibration in an interior permanent magnet synchronous machine (IPMSM). The 48th order of harmonic component for the radial force is found to be responsible for the noise and vibration problem in the studied motor. For this purpose, the influential natural frequencies, speed range, and order of harmonic components for radial force are analyzed in a systematic way. A set of design procedures is formulated to find the proper locations for the slits circumferentially. Two base designs have been identified in electromagnetic analysis to reduce the radial force component and, hence, vibration. The features for both base models are combined to create a hybridized model. Then, the operating conditions, such as speed, current, and excitation angle are investigated on the hybridized model, in the high-dimensional analysis. At influential speed region, the hybridized model achieved up to 70% drop of 48th order harmonic for radial force in a wide operating range, and the highest drop goes up to 82.5%. Torque drop in the influential speed ranges from 2.5% to 5%.",
"title": ""
},
{
"docid": "b1e039673d60defd9b8699074235cf1b",
"text": "Sentiment classification has undergone significant development in recent years. However, most existing studies assume the balance between negative and positive samples, which may not be true in reality. In this paper, we investigate imbalanced sentiment classification instead. In particular, a novel clustering-based stratified under-sampling framework and a centroid-directed smoothing strategy are proposed to address the imbalanced class and feature distribution problems respectively. Evaluation across different datasets shows the effectiveness of both the under-sampling framework and the smoothing strategy in handling the imbalanced problems in real sentiment classification applications.",
"title": ""
},
{
"docid": "096acc73f3f7801e518711a58c2ee5f5",
"text": "In this article, the authors present a life-course perspective on crime and a critique of the developmental criminology paradigm. Their fundamental argument is that persistent offending and desistance—or trajectories of crime—can be meaningfully understood within the same theoretical framework, namely, a revised agegraded theory of informal social control. The authors examine three major issues. First, they analyze data that undermine the idea that developmentally distinct groups of offenders can be explained by unique causal processes. Second, they revisit the concept of turning points from a time-varying view of key life events. Third, they stress the overlooked importance of human agency in the development of crime. The authors’ life-course theory envisions development as the constant interaction between individuals and their environment, coupled with random developmental noise and a purposeful human agency that they distinguish from rational choice. Contrary to influential developmental theories in criminology, the authors thus conceptualize crime as an emergent process reducible neither to the individual nor the environment.",
"title": ""
},
{
"docid": "f700b168c98d235a7fb76581cc24717f",
"text": "It is becoming increasingly easy to automatically replace a face of one person in a video with the face of another person by using a pre-trained generative adversarial network (GAN). Recent public scandals, e.g., the faces of celebrities being swapped onto pornographic videos, call for automated ways to detect these Deepfake videos. To help developing such methods, in this paper, we present the first publicly available set of Deepfake videos generated from videos of VidTIMIT database. We used open source software based on GANs to create the Deepfakes, and we emphasize that training and blending parameters can significantly impact the quality of the resulted videos. To demonstrate this impact, we generated videos with low and high visual quality (320 videos each) using differently tuned parameter sets. We showed that the state of the art face recognition systems based on VGG and Facenet neural networks are vulnerable to Deepfake videos, with 85.62% and 95.00% false acceptance rates (on high quality versions) respectively, which means methods for detecting Deepfake videos are necessary. By considering several baseline approaches, we found that audio-visual approach based on lipsync inconsistency detection was not able to distinguish Deepfake videos. The best performing method, which is based on visual quality metrics and is often used in presentation attack detection domain, resulted in 8.97% equal error rate on high quality Deepfakes. Our experiments demonstrate that GAN-generated Deepfake videos are challenging for both face recognition systems and existing detection methods, and the further development of face swapping technology will make it even more so.",
"title": ""
},
{
"docid": "8c221ad31eda07f1628c3003a8c12724",
"text": "This paper presents a novel unsupervised domain adaptation method for cross-domain visual recognition. We propose a unified framework that reduces the shift between domains both statistically and geometrically, referred to as Joint Geometrical and Statistical Alignment (JGSA). Specifically, we learn two coupled projections that project the source domain and target domain data into low-dimensional subspaces where the geometrical shift and distribution shift are reduced simultaneously. The objective function can be solved efficiently in a closed form. Extensive experiments have verified that the proposed method significantly outperforms several state-of-the-art domain adaptation methods on a synthetic dataset and three different real world cross-domain visual recognition tasks.",
"title": ""
},
{
"docid": "4e8c39eaa7444158a79573481b80a77f",
"text": "Image patch classification is an important task in many different medical imaging applications. In this work, we have designed a customized Convolutional Neural Networks (CNN) with shallow convolution layer to classify lung image patches with interstitial lung disease (ILD). While many feature descriptors have been proposed over the past years, they can be quite complicated and domain-specific. Our customized CNN framework can, on the other hand, automatically and efficiently learn the intrinsic image features from lung image patches that are most suitable for the classification purpose. The same architecture can be generalized to perform other medical image or texture classification tasks.",
"title": ""
},
{
"docid": "d18d4780cc259da28da90485bd3f0974",
"text": "L'ostéogenèse imparfaite (OI) est un groupe hétérogène de maladies affectant le collagène de type I et caractérisées par une fragilité osseuse. Les formes létales sont rares et se caractérisent par une micromélie avec déformation des membres. Un diagnostic anténatal d'OI létale a été fait dans deux cas, par échographie à 17 et à 25 semaines d'aménorrhée, complélées par un scanner du squelette fœtal dans un cas. Une interruption thérapeutique de grossesse a été indiquée dans les deux cas. Pan African Medical Journal. 2016; 25:88 doi:10.11604/pamj.2016.25.88.5871 This article is available online at: http://www.panafrican-med-journal.com/content/article/25/88/full/ © Houda EL Mhabrech et al. The Pan African Medical Journal ISSN 1937-8688. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Pan African Medical Journal – ISSN: 19378688 (www.panafrican-med-journal.com) Published in partnership with the African Field Epidemiology Network (AFENET). (www.afenet.net) Case report Open Access",
"title": ""
},
{
"docid": "32c405ebed87b4e1ca47cd15b7b9b61b",
"text": "Video cameras are pervasively deployed for security and smart city scenarios, with millions of them in large cities worldwide. Achieving the potential of these cameras requires efficiently analyzing the live videos in realtime. We describe VideoStorm, a video analytics system that processes thousands of video analytics queries on live video streams over large clusters. Given the high costs of vision processing, resource management is crucial. We consider two key characteristics of video analytics: resource-quality tradeoff with multi-dimensional configurations, and variety in quality and lag goals. VideoStorm’s offline profiler generates query resourcequality profile, while its online scheduler allocates resources to queries to maximize performance on quality and lag, in contrast to the commonly used fair sharing of resources in clusters. Deployment on an Azure cluster of 101 machines shows improvement by as much as 80% in quality of real-world queries and 7× better lag, processing video from operational traffic cameras.",
"title": ""
},
{
"docid": "ff345d732a273577ca0f965b92e1bbbd",
"text": "Integrated circuit (IC) testing for quality assurance is approaching 50% of the manufacturing costs for some complex mixed-signal IC’s. For many years the market growth and technology advancements in digital IC’s were driving the developments in testing. The increasing trend to integrate information acquisition and digital processing on the same chip has spawned increasing attention to the test needs of mixed-signal IC’s. The recent advances in wireless communications indicate a trend toward the integration of the RF and baseband mixed signal technologies. In this paper we examine the developments in IC testing form the historic, current status and future view points. In separate sections we address the testing developments for digital, mixed signal and RF IC’s. With these reviews as context, we relate new test paradigms that have the potential to fundamentally alter the methods used to test mixed-signal and RF parts.",
"title": ""
},
{
"docid": "a4957c88aee24ee9223afea8b01a8a62",
"text": "This study examined smartphone user behaviors and their relation to self-reported smartphone addiction. Thirty-four users who did not own smartphones were given instrumented iPhones that logged all phone use over the course of the year-long study. At the conclusion of the study, users were asked to rate their level of addiction to the device. Sixty-two percent agreed or strongly agreed that they were addicted to their iPhones. These users showed differentiated smartphone use as compared to those users who did not indicate an addiction. Addicted users spent twice as much time on their phone and launched applications much more frequently (nearly twice as often) as compared to the non-addicted user. Mail, Messaging, Facebook and the Web drove this use. Surprisingly, Games did not show any difference between addicted and nonaddicted users. Addicted users showed significantly lower time-per-interaction than did non-addicted users for Mail, Facebook and Messaging applications. One addicted user reported that his addiction was problematic, and his use data was beyond three standard deviations from the upper hinge. The implications of the relationship between the logged and self-report data are discussed.",
"title": ""
},
{
"docid": "b16992ec2416b420b2115037c78cfd4b",
"text": "Dictionary learning algorithms or supervised deep convolution networks have considerably improved the efficiency of predefined feature representations such as SIFT. We introduce a deep scattering convolution network, with complex wavelet filters over spatial and angular variables. This representation brings an important improvement to results previously obtained with predefined features over object image databases such as Caltech and CIFAR. The resulting accuracy is comparable to results obtained with unsupervised deep learning and dictionary based representations. This shows that refining image representations by using geometric priors is a promising direction to improve image classification and its understanding.",
"title": ""
},
{
"docid": "45719c2127204b4eb169fccd2af0bf82",
"text": "A face hallucination algorithm is proposed to generate high-resolution images from JPEG compressed low-resolution inputs by decomposing a deblocked face image into structural regions such as facial components and non-structural regions like the background. For structural regions, landmarks are used to retrieve adequate high-resolution component exemplars in a large dataset based on the estimated head pose and illumination condition. For non-structural regions, an efficient generic super resolution algorithm is applied to generate high-resolution counterparts. Two sets of gradient maps extracted from these two regions are combined to guide an optimization process of generating the hallucination image. Numerous experimental results demonstrate that the proposed algorithm performs favorably against the state-of-the-art hallucination methods on JPEG compressed face images with different poses, expressions, and illumination conditions.",
"title": ""
},
{
"docid": "fef5e04bf8ddb05dfd02f10c7862ce6b",
"text": "With the rise of computer networks in the past decades, the sp read of distributed applications with components across multiple machines, and with new notions such as mobile code, there has been a need for formal methods to model and reason about concurrency and mobility. The study of sequ ential computations has been based on notions such as Turing machines, recursive functions, the -calculus, all equivalent formalisms capturing the essenc e of sequential computations. Unfortunately, for concurrent programs, th eories for sequential computation are not enough. Many programs are not simply programs that compute a result and re turn it to the user, but rather interact with other programs, and even move from machine to machine. Process calculi are an attempt at getting a formal foundatio n based on such ideas. They emerged from the work of Hoare [4] and Milner [6] on models of concurrency. These calc uli are meant to model systems made up of processes communicating by exchanging values across channels. They a llow for the dynamic creation and removal of processes, allowing the modelling of dynamic systems. A typical proces s calculus in that vein is CCS [6, 7]. The -calculus extends CCS with the ability to create and remove communicat ion links between processes, a new form of dynamic behaviour. By allowing links to be created and deleted, it is po sible to model a form of mobility, by identifying the position of a process by its communication links. This book, “The -calculus: A Theory of Mobile Processes”, by Davide Sangior gi and David Walker, is a in-depth study of the properties of the -calculus and its variants. In a sense, it is the logical foll owup to the recent introduction to concurrency and the -calculus by Milner [8], reviewed in SIGACT News, 31(4), Dec ember 2000. What follows is a whirlwind introduction to CCS and the -calculus. It is meant as a way to introduce the notions discussed in much more depth by the book under review. Let us s tart with the basics. CCS provides a syntax for writing processes. The syntax is minimalist, in the grand tradition of foundational calculi such as the -calculus. Processes perform actions, which can be of three forms: the sending of a message over channel x (written x), the receiving of a message over channel x (written x), and internal actions (written ), the details of which are unobservable. Send and receive actions are called synchronizationactions, since communication occurs when the correspondin g processes synchronize. Let stand for actions, including the internal action , while we reserve ; ; : : : for synchronization actions. 1 Processes are written using the following syntax: P ::= Ahx1; : : : ; xki jXi2I i:Pi j P1jP2 j x:P We write 0 for the empty summation (when I = ;). The idea behind process expressions is simple. The proces s 0 represents the process that does nothing and simply termina tes. A process of the form :P awaits to synchronize with a process of the form :Q, after which the processes continue as process P andQ respectively. A generalization 1In the literature, the actions of CCS are often given a much mo re abstract interpretation, as simply names and co-names. T he send/receive interpretation is useful when one moves to the -calculus.",
"title": ""
},
{
"docid": "e9ba4e76a3232e25233a4f5fe206e8ba",
"text": "Systems code is often written in low-level languages like C/C++, which offer many benefits but also delegate memory management to programmers. This invites memory safety bugs that attackers can exploit to divert control flow and compromise the system. Deployed defense mechanisms (e.g., ASLR, DEP) are incomplete, and stronger defense mechanisms (e.g., CFI) often have high overhead and limited guarantees [19, 15, 9]. We introduce code-pointer integrity (CPI), a new design point that guarantees the integrity of all code pointers in a program (e.g., function pointers, saved return addresses) and thereby prevents all control-flow hijack attacks, including return-oriented programming. We also introduce code-pointer separation (CPS), a relaxation of CPI with better performance properties. CPI and CPS offer substantially better security-to-overhead ratios than the state of the art, they are practical (we protect a complete FreeBSD system and over 100 packages like apache and postgresql), effective (prevent all attacks in the RIPE benchmark), and efficient: on SPEC CPU2006, CPS averages 1.2% overhead for C and 1.9% for C/C++, while CPI’s overhead is 2.9% for C and 8.4% for C/C++. A prototype implementation of CPI and CPS can be obtained from http://levee.epfl.ch.",
"title": ""
},
{
"docid": "a0594bdeeafdbcc6e2e936cd025407e0",
"text": "[Purpose] The aim of this study was to compare the effects of \"McGill stabilization exercises\" and \"conventional physiotherapy\" on pain, functional disability and active back flexion and extension range of motion in patients with chronic non-specific low back pain. [Subjects and Methods] Thirty four patients with chronic non-specific low back pain were randomly assigned to McGill stabilization exercises group (n=17) and conventional physiotherapy group (n=17). In both groups, patients performed the corresponding exercises for six weeks. The visual analog scale (VAS), Quebec Low Back Pain Disability Scale Questionnaire and inclinometer were used to measure pain, functional disability, and active back flexion and extension range of motion, respectively. [Results] Statistically significant improvements were observed in pain, functional disability, and active back extension range of motion in McGill stabilization exercises group. However, active back flexion range of motion was the only clinical symptom that statistically increased in patients who performed conventional physiotherapy. There was no significant difference between the clinical characteristics while compared these two groups of patients. [Conclusion] The results of this study indicated that McGill stabilization exercises and conventional physiotherapy provided approximately similar improvement in pain, functional disability, and active back range of motion in patients with chronic non-specific low back pain. However, it appears that McGill stabilization exercises provide an additional benefit to patients with chronic non-specific low back, especially in pain and functional disability improvement.",
"title": ""
}
] |
scidocsrr
|
cf1573327854c91a71912e9f8b5a2366
|
Visual attention analysis and prediction on human faces with mole
|
[
{
"docid": "0999a01e947019409c75150f85058728",
"text": "We present a robot localization system using biologically inspired vision. Our system models two extensively studied human visual capabilities: (1) extracting the ldquogistrdquo of a scene to produce a coarse localization hypothesis and (2) refining it by locating salient landmark points in the scene. Gist is computed here as a holistic statistical signature of the image, thereby yielding abstract scene classification and layout. Saliency is computed as a measure of interest at every image location, which efficiently directs the time-consuming landmark-identification process toward the most likely candidate locations in the image. The gist features and salient regions are then further processed using a Monte Carlo localization algorithm to allow the robot to generate its position. We test the system in three different outdoor environments-building complex (38.4 m times 54.86 m area, 13 966 testing images), vegetation-filled park (82.3 m times 109.73 m area, 26 397 testing images), and open-field park (137.16 m times 178.31 m area, 34 711 testing images)-each with its own challenges. The system is able to localize, on average, within 0.98, 2.63, and 3.46 m, respectively, even with multiple kidnapped-robot instances.",
"title": ""
}
] |
[
{
"docid": "ee5c8e8c4f2964510604d1ef4a452372",
"text": "Learning customer preferences from an observed behaviour is an important topic in the marketing literature. Structural models typically model forward-looking customers or firms as utility-maximizing agents whose utility is estimated using methods of Stochastic Optimal Control. We suggest an alternative approach to study dynamic consumer demand, based on Inverse Reinforcement Learning (IRL). We develop a version of the Maximum Entropy IRL that leads to a highly tractable model formulation that amounts to low-dimensional convex optimization in the search for optimal model parameters. Using simulations of consumer demand, we show that observational noise for identical customers can be easily confused with an apparent consumer heterogeneity.",
"title": ""
},
{
"docid": "66d35e0f9d725475d9d1e61a724cf5ea",
"text": "As data-driven methods are becoming pervasive in a wide variety of disciplines, there is an urgent need to develop scalable and sustainable tools to simplify the process of data science, to make it easier for the users to keep track of the analyses being performed and datasets being generated, and to enable the users to understand and analyze the workflows. In this paper, we describe our vision of a unified provenance and metadata management system to support lifecycle management of complex collaborative data science workflows. We argue that the information about the analysis processes and data artifacts can, and should be, captured in a semi-passive manner; and we show that querying and analyzing this information can not only simplify bookkeeping and debugging tasks but also enable a rich new set of capabilities like identifying flaws in the data science process itself. It can also significantly reduce the user time spent in fixing post-deployment problems through automated analysis and monitoring. We have implemented a prototype system, PROVDB, on top of git and Neo4j, and we describe its key features and capabilities.",
"title": ""
},
{
"docid": "281c64b492a1aff7707dbbb5128799c8",
"text": "Internet business models have been widely discussed in literature and applied within the last decade. Nevertheless, a clear understanding of some e-commerce concepts does not exist yet. The classification of business models in e-commerce is one of these areas. The current research tries to fill this gap through a conceptual and qualitative study. Nine main e-commerce business model types are selected from literature and analyzed to define the criteria and their sub-criteria (characteristics). As a result three different classifications for business models are determined. This study can be used to improve the understanding of essential functions, relations and mechanisms of existing e-commerce business models.",
"title": ""
},
{
"docid": "88a15c0efdfeba3e791ea88862aee0c3",
"text": "Logic-based approaches to legal problem solving model the rule-governed nature of legal argumentation, justification, and other legal discourse but suffer from two key obstacles: the absence of efficient, scalable techniques for creating authoritative representations of legal texts as logical expressions; and the difficulty of evaluating legal terms and concepts in terms of the language of ordinary discourse. Data-centric techniques can be used to finesse the challenges of formalizing legal rules and matching legal predicates with the language of ordinary parlance by exploiting knowledge latent in legal corpora. However, these techniques typically are opaque and unable to support the rule-governed discourse needed for persuasive argumentation and justification. This paper distinguishes representative legal tasks to which each approach appears to be particularly well suited and proposes a hybrid model that exploits the complementarity of each.",
"title": ""
},
{
"docid": "55772e55adb83d4fd383ddebcf564a71",
"text": "The generation of multi-functional drug delivery systems, namely solid dosage forms loaded with nano-sized carriers, remains little explored and is still a challenge for formulators. For the first time, the coupling of two important technologies, 3D printing and nanotechnology, to produce innovative solid dosage forms containing drug-loaded nanocapsules was evaluated here. Drug delivery devices were prepared by fused deposition modelling (FDM) from poly(ε-caprolactone) (PCL) and Eudragit® RL100 (ERL) filaments with or without a channelling agent (mannitol). They were soaked in deflazacort-loaded nanocapsules (particle size: 138nm) to produce 3D printed tablets (printlets) loaded with them, as observed by SEM. Drug loading was improved by the presence of the channelling agent and a linear correlation was obtained between the soaking time and the drug loading (r2=0.9739). Moreover, drug release profiles were dependent on the polymeric material of tablets and the presence of the channelling agent. In particular, tablets prepared with a partially hollow core (50% infill) had a higher drug loading (0.27% w/w) and faster drug release rate. This study represents an original approach to convert nanocapsules suspensions into solid dosage forms as well as an efficient 3D printing method to produce novel drug delivery systems, as personalised nanomedicines.",
"title": ""
},
{
"docid": "52a6319c28c6c889101d9b2b6d4a76d3",
"text": "A method is developed for imputing missing values when the probability of response depends upon the variable being imputed. The missing data problem is viewed as one of parameter estimation in a regression model with stochastic ensoring of the dependent variable. The prediction approach to imputation is used to solve this estimation problem. Wages and salaries are imputed to nonrespondents in the Current Population Survey and the results are compared to the nonrespondents' IRS wage and salary data. The stochastic ensoring approach gives improved results relative to a prediction approach that ignores the response mechanism.",
"title": ""
},
{
"docid": "fc3d4b4ac0d13b34aeadf5806013689d",
"text": "Internet of Things (IoT) is one of the emerging technologies of this century and its various aspects, such as the Infrastructure, Security, Architecture and Privacy, play an important role in shaping the future of the digitalised world. Internet of Things devices are connected through sensors which have significant impacts on the data and its security. In this research, we used IoT five layered architecture of the Internet of Things to address the security and private issues of IoT enabled services and applications. Furthermore, a detailed survey on Internet of Things infrastructure, architecture, security, and privacy of the heterogeneous objects were presented. The paper identifies the major challenge in the field of IoT; one of them is to secure the data while accessing the objects through sensing machines. This research advocates the importance of securing the IoT ecosystem at each layer resulting in an enhanced overall security of the connected devices as well as the data generated. Thus, this paper put forwards a security model to be utilised by the researchers, manufacturers and developers of IoT devices, applications and services.",
"title": ""
},
{
"docid": "b36058bcfcb5f5f4084fe131c42b13d9",
"text": "We present regular linear temporal logic (RLTL), a logic that generalizes linear temporal logic with the ability to use regular expressions arbitrarily as sub-expressions. Every LTL operator can be defined as a context in regular linear temporal logic. This implies that there is a (linear) translation from LTL to RLTL. Unlike LTL, regular linear temporal logic can define all ω-regular languages, while still keeping the satisfiability problem in PSPACE. Unlike the extended temporal logics ETL∗, RLTL is defined with an algebraic signature. In contrast to the linear time μ-calculus, RLTL does not depend on fix-points in its syntax.",
"title": ""
},
{
"docid": "4ee84cfdef31d4814837ad2811e59cd4",
"text": "In this paper, we propose the MIML (Multi-Instance Multi-Label learning) framework where an example is described by multiple instances and associated with multiple class labels. Compared to traditional learning frameworks, the MIML framework is more convenient and natural for representing complicated objects which have multiple semantic meanings. To learn from MIML examples, we propose the MimlBoost and MimlSvm algorithms based on a simple degeneration strategy, and experiments show that solving problems involving complicated objects with multiple semantic meanings in the MIML framework can lead to good performance. Considering that the degeneration process may lose information, we propose the D-MimlSvm algorithm which tackles MIML problems directly in a regularization framework. Moreover, we show that even when we do not have access to the real objects and thus cannot capture more information from real objects by using the MIML representation, MIML is still useful. We propose the InsDif and SubCod algorithms. InsDif works by transforming single-instances into the MIML representation for learning, while SubCod works by transforming single-label examples into the MIML representation for learning. Experiments show that in some tasks they are able to achieve better performance than learning the single-instances or single-label examples directly.",
"title": ""
},
{
"docid": "7b7ae905f8695dcac4ed1231c76ced69",
"text": "In this paper the intelligent control of full automatic car wash using a programmable logic controller (PLC) has been investigated and designed to do all steps of carwashing. The Intelligent control of full automatic carwash has the ability to identify and profile the geometrical dimensions of the vehicle chassis. Vehicle dimension identification is an important point in this control system to adjust the washing brushes position and time duration. The study also tries to design a control set for simulating and building the automatic carwash. The main purpose of the simulation is to develop criteria for designing and building this type of carwash in actual size to overcome challenges of automation. The results of this research indicate that the proposed method in process control not only increases productivity, speed, accuracy and safety but also reduce the time and cost of washing based on dynamic model of the vehicle. A laboratory prototype based on an advanced intelligent control has been built to study the validity of the design and simulation which it’s appropriate performance confirms the validity of this study. Keywords—Automatic Carwash, Dimension, PLC.",
"title": ""
},
{
"docid": "c6485365e8ce550ea8c507aa963a00c2",
"text": "Consensus molecular subtypes and the evolution of precision medicine in colorectal cancer Rodrigo Dienstmann, Louis Vermeulen, Justin Guinney, Scott Kopetz, Sabine Tejpar and Josep Tabernero Nature Reviews Cancer 17, 79–92 (2017) In this article a source of grant funding for one of the authors was omitted from the Acknowledgements section. The online version of the article has been corrected to include: “The work of R.D. was supported by the Grant for Oncology Innovation under the project ‘Next generation of clinical trials with matched targeted therapies in colorectal cancer’”. C O R R E C T I O N",
"title": ""
},
{
"docid": "c219b930c571a7429dc5c4edc92022f2",
"text": "Manually labeling documents for training a text classifier is expensive and time-consuming. Moreover, a classifier trained on labeled documents may suffer from overfitting and adaptability problems. Dataless text classification (DLTC) has been proposed as a solution to these problems, since it does not require labeled documents. Previous research in DLTC has used explicit semantic analysis of Wikipedia content to measure semantic distance between documents, which is in turn used to classify test documents based on nearest neighbours. The semantic-based DLTC method has a major drawback in that it relies on a large-scale, finely-compiled semantic knowledge base, which is difficult to obtain in many scenarios. In this paper we propose a novel kind of model, descriptive LDA (DescLDA), which performs DLTC with only category description words and unlabeled documents. In DescLDA, the LDA model is assembled with a describing device to infer Dirichlet priors from prior descriptive documents created with category description words. The Dirichlet priors are then used by LDA to induce category-aware latent topics from unlabeled documents. Experimental results with the 20Newsgroups and RCV1 datasets show that: (1) our DLTC method is more effective than the semantic-based DLTC baseline method; and (2) the accuracy of our DLTC method is very close to state-of-the-art supervised text classification methods. As neither external knowledge resources nor labeled documents are required, our DLTC method is applicable to a wider range of scenarios.",
"title": ""
},
{
"docid": "e244cbd076ea62b4d720378c2adf4438",
"text": "This paper introduces flash organizations: crowds structured like organizations to achieve complex and open-ended goals. Microtask workflows, the dominant crowdsourcing structures today, only enable goals that are so simple and modular that their path can be entirely pre-defined. We present a system that organizes crowd workers into computationally-represented structures inspired by those used in organizations - roles, teams, and hierarchies - which support emergent and adaptive coordination toward open-ended goals. Our system introduces two technical contributions: 1) encoding the crowd's division of labor into de-individualized roles, much as movie crews or disaster response teams use roles to support coordination between on-demand workers who have not worked together before; and 2) reconfiguring these structures through a model inspired by version control, enabling continuous adaptation of the work and the division of labor. We report a deployment in which flash organizations successfully carried out open-ended and complex goals previously out of reach for crowdsourcing, including product design, software development, and game production. This research demonstrates digitally networked organizations that flexibly assemble and reassemble themselves from a globally distributed online workforce to accomplish complex work.",
"title": ""
},
{
"docid": "1f44c8d792b961649903eb1ab2612f3c",
"text": "Teeth segmentation is an important step in human identification and Content Based Image Retrieval (CBIR) systems. This paper proposes a new approach for teeth segmentation using morphological operations and watershed algorithm. In Cone Beam Computer Tomography (CBCT) and Multi Slice Computer Tomography (MSCT) each tooth is an elliptic shape region that cannot be separated only by considering their pixels' intensity values. For segmenting a tooth from the image, some enhancement is necessary. We use morphological operators such as image filling and image opening to enhance the image. In the proposed algorithm, a Maximum Intensity Projection (MIP) mask is used to separate teeth regions from black and bony areas. Then each tooth is separated using the watershed algorithm. Anatomical constraints are used to overcome the over segmentation problem in watershed method. The results show a high accuracy for the proposed algorithm in segmenting teeth. Proposed method decreases time consuming by considering only one image of CBCT and MSCT for segmenting teeth instead of using all slices.",
"title": ""
},
{
"docid": "f005ebceeac067ffae197fee603ed8c7",
"text": "The extended Kalman filter (EKF) is one of the most widely used methods for state estimation with communication and aerospace applications based on its apparent simplicity and tractability (Shi et al., 2002; Bolognani et al., 2003; Wu et al., 2004). However, for an EKF to guarantee satisfactory performance, the system model should be known exactly. Unknown external disturbances may result in the inaccuracy of the state estimate, even cause divergence. This difficulty has been recognized in the literature (Reif & Unbehauen, 1999; Reif et al., 2000), and several schemes have been developed to overcome it. A traditional approach to improve the performance of the filter is the 'covariance setting' technique, where a positive definite estimation error covariance matrix is chosen by the filter designer (Einicke et al., 2003; Bolognani et al., 2003). As it is difficult to manually tune the covariance matrix for dynamic system, adaptive extended Kalman filter (AEKF) approaches for online estimation of the covariance matrix have been adopted (Kim & ILTIS, 2004; Yu et al., 2005; Ahn & Won, 2006). However, only in some special cases, the optimal estimation of the covariance matrix can be obtained. And inaccurate approximation of the covariance matrix may blur the state estimate. Recently, the robust H∞ filter has received considerable attention (Theodor et al., 1994; Shen & Deng, 1999; Zhang et al., 2005; Tseng & Chen, 2001). The robust filters take different forms depending on what kind of disturbances are accounted for, while the general performance criterion of the filters is to guarantee a bounded energy gain from the worst possible disturbance to the estimation error. Although the robust extended Kalman filter (REKF) has been deeply investigated (Einicke & White, 1999; Reif et al., 1999; Seo et al., 2006), how to prescribe the level of disturbances attenuation is still an open problem. In general, the selection of the attenuation level can be seen as a tradeoff between the optimality and the robustness. In other words, the robustness of the REKF is obtained at the expense of optimality. This chapter reviews the adaptive robust extended Kalman filter (AREKF), an effective algorithm which will remain stable in the presence of unknown disturbances, and yield accurate estimates in the absence of disturbances (Xiong et al., 2008). The key idea of the AREKF is to design the estimator based on the stability analysis, and determine whether the error covariance matrix should be reset according to the magnitude of the innovation. O pe n A cc es s D at ab as e w w w .in te ch w eb .o rg",
"title": ""
},
{
"docid": "bde9e26746ddcc6e53f442a0e400a57e",
"text": "Aljebreen, Mohammed, \"Implementing a dynamic scaling of web applications in a virtualized cloud computing environment\" (2013). Abstract Cloud computing is becoming more essential day by day. The allure of the cloud is the significant value and benefits that people gain from it, such as reduced costs, increased storage, flexibility, and more mobility. Flexibility is one of the major benefits that cloud computing can provide in terms of scaling up and down the infrastructure of a network. Once traffic has increased on one server within the network, a load balancer instance will route incoming requests to a healthy instance, which is less busy and less burdened. When the full complement of instances cannot handle any more requests, past research has been done by Chieu et. al. that presented a scaling algorithm to address a dynamic scalability of web applications on a virtualized cloud computing environment based on relevant indicators that can increase or decrease servers, as needed. In this project, I implemented the proposed algorithm, but based on CPU Utilization threshold. In addition, two tests were run exploring the capabilities of different metrics when faced with ideal or challenging conditions. The results did find a superior metric that was able to perform successfully under both tests. 3 Dedication I lovingly dedicate this thesis to my gracious and devoted mother for her unwavering love and for always believing in me. 4 Acknowledgments This thesis would not have been possible without the support of many people. My wish is to express humble gratitude to the committee chair, Prof. Sharon Mason, who was perpetually generous in offering her invaluable assistance, support, and guidance. Deepest gratitude is also due to the members of my supervisory committee, Prof. Lawrence Hill and Prof. Jim Leone, without whose knowledge and direction this study would not have been successful. Special thanks also to Prof. Charles Border for his financial support of this thesis and priceless assistance. Profound gratitude to my mother, Moneerah, who has been there from the very beginning, for her support and endless love. I would also like to convey thanks to my wife for her patient and unending encouragement and support throughout the duration of my studies; without my wife's encouragement, I would not have completed this degree. I wish to express my gratitude to my beloved sister and brothers for their kind understanding throughout my studies. Special thanks to my friend, Mohammed Almathami, for his …",
"title": ""
},
{
"docid": "c85bd1c2ffb6b53bfeec1ec69f871360",
"text": "In this paper, we present a new design of a compact power divider based on the modification of the conventional Wilkinson power divider. In this new configuration, length reduction of the high-impedance arms is achieved through capacitive loading using open stubs. Radial configuration was adopted for bandwidth enhancement. Additionally, by insertion of the complex isolation network between the high-impedance transmission lines at an arbitrary phase angle other than 90 degrees, both electrical and physical isolation were achieved. Design equations as well as the synthesis procedure of the isolation network are demonstrated using an example centred at 1 GHz. The measurement results revealed a reduction of 60% in electrical length compared to the conventional Wilkinson power divider with a total length of only 30 degrees at the centre frequency of operation.",
"title": ""
},
{
"docid": "15f6b6be4eec813fb08cb3dd8b9c97f2",
"text": "ACKNOWLEDGEMENTS First, I would like to thank my supervisor Professor H. Levent Akın for his guidance. This thesis would not have been possible without his encouragement and enthusiastic support. I would also like to thank all the staff at the Artificial Intelligence Laboratory for their encouragement throughout the year. Their success in RoboCup is always a good motivation. Sharing their precious ideas during the weekly seminars have always guided me to the right direction. Finally I am deeply grateful to my family and to my wife Derya. They always give me endless love and support, which has helped me to overcome the various challenges along the way. Thank you for your patience... The field of Intelligent Transport Systems (ITS) is improving rapidly in the world. Ultimate aim of such systems is to realize fully autonomous vehicle. The researches in the field offer the potential for significant enhancements in safety and operational efficiency. Lane tracking is an important topic in autonomous navigation because the navigable region usually stands between the lanes, especially in urban environments. Several approaches have been proposed, but Hough transform seems to be the dominant among all. A robust lane tracking method is also required for reducing the effect of the noise and achieving the required processing time. In this study, we present a new lane tracking method which uses a partitioning technique for obtaining Multiresolution Hough Transform (MHT) of the acquired vision data. After the detection process, a Hidden Markov Model (HMM) based method is proposed for tracking the detected lanes. Traffic signs are important instruments to indicate the rules on roads. This makes them an essential part of the ITS researches. It is clear that leaving traffic signs out of concern will cause serious consequences. Although the car manufacturers have started to deploy intelligent sign detection systems on their latest models, the road conditions and variations of actual signs on the roads require much more robust and fast detection and tracking methods. Localization of such systems is also necessary because traffic signs differ slightly between countries. This study also presents a fast and robust sign detection and tracking method based on geometric transformation and genetic algorithms (GA). Detection is done by a genetic algorithm (GA) approach supported by a radial symmetry check so that false alerts are considerably reduced. Classification v is achieved by a combination of SURF features with NN or SVM classifiers. A heuristic …",
"title": ""
},
{
"docid": "1733a6f167e7e13bc816b7fc546e19e3",
"text": "As many other machine learning driven medical image analysis tasks, skin image analysis suffers from a chronic lack of labeled data and skewed class distributions, which poses problems for the training of robust and well-generalizing models. The ability to synthesize realistic looking images of skin lesions could act as a reliever for the aforementioned problems. Generative Adversarial Networks (GANs) have been successfully used to synthesize realistically looking medical images, however limited to low resolution, whereas machine learning models for challenging tasks such as skin lesion segmentation or classification benefit from much higher resolution data. In this work, we successfully synthesize realistically looking images of skin lesions with GANs at such high resolution. Therefore, we utilize the concept of progressive growing, which we both quantitatively and qualitatively compare to other GAN architectures such as the DCGAN and the LAPGAN. Our results show that with the help of progressive growing, we can synthesize highly realistic dermoscopic images of skin lesions that even expert dermatologists find hard to distinguish from real ones.",
"title": ""
},
{
"docid": "27316b23e7a7cd163abd40f804caf61b",
"text": "Attention based recurrent neural networks (RNN) have shown a great success for question answering (QA) in recent years. Although significant improvements have been achieved over the non-attentive models, the position information is not well studied within the attention-based framework. Motivated by the effectiveness of using the word positional context to enhance information retrieval, we assume that if a word in the question (i.e., question word) occurs in an answer sentence, the neighboring words should be given more attention since they intuitively contain more valuable information for question answering than those far away. Based on this assumption, we propose a positional attention based RNN model, which incorporates the positional context of the question words into the answers' attentive representations. Experiments on two benchmark datasets show the great advantages of our proposed model. Specifically, we achieve a maximum improvement of 8.83% over the classical attention based RNN model in terms of mean average precision. Furthermore, our model is comparable to if not better than the state-of-the-art approaches for question answering.",
"title": ""
}
] |
scidocsrr
|
8b595113ad1fab654a06f9bb218b5da4
|
SentiGAN: Generating Sentimental Texts via Mixture Adversarial Networks
|
[
{
"docid": "89f157fd5c42ba827b7d613f80770992",
"text": "Generating emotional language is a key step towards building empathetic natural language processing agents. However, a major challenge for this line of research is the lack of large-scale labeled training data, and previous studies are limited to only small sets of human annotated sentiment labels. Additionally, explicitly controlling the emotion and sentiment of generated text is also difficult. In this paper, we take a more radical approach: we exploit the idea of leveraging Twitter data that are naturally labeled with emojis. We collect a large corpus of Twitter conversations that include emojis in the response and assume the emojis convey the underlying emotions of the sentence. We investigate several conditional variational autoencoders training on these conversations, which allow us to use emojis to control the emotion of the generated text. Experimentally, we show in our quantitative and qualitative analyses that the proposed models can successfully generate highquality abstractive conversation responses in accordance with designated emotions.",
"title": ""
}
] |
[
{
"docid": "ab6d4dbaf92c142dfce0c8133e7ae669",
"text": "This paper presents a high-performance substrate-integrated-waveguide RF microelectromechanical systems (MEMS) tunable filter for 1.2-1.6-GHz frequency range. The proposed filter is developed using packaged RF MEMS switches and utilizes a two-layer structure that effectively isolates the cavity filter from the RF MEMS switch circuitry. The two-pole filter implemented on RT/Duroid 6010LM exhibits an insertion loss of 2.2-4.1 dB and a return loss better than 15 dB for all tuning states. The relative bandwidth of the filter is 3.7 ± 0.5% over the tuning range. The measured Qu of the filter is 93-132 over the tuning range, which is the best reported Q in filters using off-the-shelf RF MEMS switches on conventional printed circuit board substrates. In addition, an upper stopband rejection better than 28 dB is obtained up to 4.0 GHz by employing low-pass filters at the bandpass filter terminals at the cost of 0.7-1.0-dB increase in the insertion loss.",
"title": ""
},
{
"docid": "6dc4e4949d4f37f884a23ac397624922",
"text": "Research indicates that maladaptive patterns of Internet use constitute behavioral addiction. This article explores the research on the social effects of Internet addiction. There are four major sections. The Introduction section overviews the field and introduces definitions, terminology, and assessments. The second section reviews research findings and focuses on several key factors related to Internet addiction, including Internet use and time, identifiable problems, gender differences, psychosocial variables, and computer attitudes. The third section considers the addictive potential of the Internet in terms of the Internet, its users, and the interaction of the two. The fourth section addresses current and projected treatments of Internet addiction, suggests future research agendas, and provides implications for educational psychologists.",
"title": ""
},
{
"docid": "1b4e3dcd8f94c3f6e3451ced417655e3",
"text": "The serverless paradigm has been rapidly adopted by developers of cloud-native applications, mainly because it relieves them from the burden of provisioning, scaling and operating the underlying infrastructure. In this paper, we propose a novel computing paradigm - Deviceless Edge Computing that extends the serverless paradigm to the edge of the network, enabling IoT and Edge devices to be seamlessly integrated as application execution infrastructure. We also discuss open challenges to realize Deviceless Edge Computing, based on our experience in prototyping a deviceless platform.",
"title": ""
},
{
"docid": "ca307225e8ab0e7876446cf17d659fc8",
"text": "This paper presents a novel class of substrate integrated waveguide (SIW) filters, based on periodic perforations of the dielectric layer. The perforations allow to reduce the local effective dielectric permittivity, thus creating waveguide sections below cutoff. This effect is exploited to implement immittance inverters through analytical formulas, providing simple design rules for the direct synthesis of the filters. The proposed solution is demonstrated through the design and testing of several filters with different topologies (including half-mode SIW and folded structures). The comparison with classical iris-type SIW filters demonstrates that the proposed filters exhibit better performance in terms of sensitivity to fabrication inaccuracies and rejection bandwidth, at the cost of a slightly larger size.",
"title": ""
},
{
"docid": "12866e003093bc7d89d751697f2be93c",
"text": "We argue that the right way to understand distributed protocols is by considering how messages change the state of knowledge of a system. We present a hierarchy of knowledge states that a system may be in, and discuss how communication can move the system's state of knowledge of a fact up the hierarchy. Of special interest is the notion of common knowledge. Common knowledge is an essential state of knowledge for reaching agreements and coordinating action. We show that in practical distributed systems, common knowledge is not attainable. We introduce various relaxations of common knowledge that are attainable in many cases of interest. We describe in what sense these notions are appropriate, and discuss their relationship to each other. We conclude with a discussion of the role of knowledge in distributed systems.",
"title": ""
},
{
"docid": "d94f4df63ac621d9a8dec1c22b720abb",
"text": "Automatically selecting an appropriate set of materialized views and indexes for SQL databases is a non-trivial task. A judicious choice must be cost-driven and influenced by the workload experienced by the system. Although there has been work in materialized view selection in the context of multidimensional (OLAP) databases, no past work has looked at the problem of building an industry-strength tool for automated selection of materialized views and indexes for SQL workloads. In this paper, we present an end-to-end solution to the problem of selecting materialized views and indexes. We describe results of extensive experimental evaluation that demonstrate the effectiveness of our techniques. Our solution is implemented as part of a tuning wizard that ships with Microsoft SQL Server 2000.",
"title": ""
},
{
"docid": "36b6c222587948357c275155b085ae6e",
"text": "Deep Neural Networks (DNNs) require very large amounts of computation, and many different algorithms have been proposed to implement their most expensive layers, each of which has a large number of variants with different trade-offs of parallelism, locality, memory footprint, and execution time. In addition, specific algorithms operate much more efficiently on specialized data layouts. \n We state the problem of optimal primitive selection in the presence of data layout transformations, and show that it is NP-hard by demonstrating an embedding in the Partitioned Boolean Quadratic Assignment problem (PBQP). We propose an analytic solution via a PBQP solver, and evaluate our approach experimentally by optimizing several popular DNNs using a library of more than 70 DNN primitives, on an embedded platform and a general purpose platform. We show experimentally that significant gains are possible versus the state of the art vendor libraries by using a principled analytic solution to the problem of primitive selection in the presence of data layout transformations.",
"title": ""
},
{
"docid": "d5d55ca4eaa5c4ee129ddfcd7b5ddf87",
"text": "Person re-identification (re-id) aims to match pedestrians observed by disjoint camera views. It attracts increasing attention in computer vision due to its importance to surveillance system. To combat the major challenge of cross-view visual variations, deep embedding approaches are proposed by learning a compact feature space from images such that the Euclidean distances correspond to their cross-view similarity metric. However, the global Euclidean distance cannot faithfully characterize the ideal similarity in a complex visual feature space because features of pedestrian images exhibit unknown distributions due to large variations in poses, illumination and occlusion. Moreover, intra-personal training samples within a local range are robust to guide deep embedding against uncontrolled variations, which however, cannot be captured by a global Euclidean distance. In this paper, we study the problem of person re-id by proposing a novel sampling to mine suitable positives (i.e., intra-class) within a local range to improve the deep embedding in the context of large intra-class variations. Our method is capable of learning a deep similarity metric adaptive to local sample structure by minimizing each sample’s local distances while propagating through the relationship between samples to attain the whole intra-class minimization. To this end, a novel objective function is proposed to jointly optimize ∗Corresponding author. Email addresses: lin.wu@uq.edu.au (Lin Wu ), wangy@cse.unsw.edu.au (Yang Wang), junbin.gao@sydney.edu.au (Junbin Gao), xueli@itee.uq.edu.au (Xue Li) Preprint submitted to Elsevier 8·9·2017 ar X iv :1 70 6. 03 16 0v 2 [ cs .C V ] 7 S ep 2 01 7 similarity metric learning, local positive mining and robust deep embedding. This yields local discriminations by selecting local-ranged positive samples, and the learned features are robust to dramatic intra-class variations. Experiments on benchmarks show state-of-the-art results achieved by our method.",
"title": ""
},
{
"docid": "d9c189cbf2695fa9ac032b8c6210a070",
"text": "The increasing of aspect ratio in DRAM capacitors causes structural instabilities and device failures as the generation evolves. Conventionally, two-dimensional and three-dimensional models are used to solve these problems by optimizing thin film thickness, material properties and structure parameters; however, it is not enough to analyze the latest failures associated with large-scale DRAM capacitor arrays. Therefore, beam-shell model based on classical beam and shell theories is developed in this study to simulate diverse failures. It enables us to solve multiple failure modes concurrently such as supporter crack, capacitor bending, and storage-poly fracture.",
"title": ""
},
{
"docid": "43a57d9ad5a4ea7cb446adf8cb91f640",
"text": "It is widely acknowledged that the value of a house is the mixture of a large number of characteristics. House price prediction thus presents a unique set of challenges in practice. While a large body of works are dedicated to this task, their performance and applications have been limited by the shortage of long time span of transaction data, the absence of real-world settings and the insufficiency of housing features. To this end, a time-aware latent hierarchical model is introduced to capture underlying spatiotemporal interactions behind the evolution of house prices. The hierarchical perspective obviates the need for historical transaction data of exactly same houses when temporal effects are considered. The proposed framework is examined on a large-scale dataset of the property transaction in Beijing. The whole experimental procedure strictly complies with the real-world scenario. The empirical evaluation results demonstrate the outperformance of our approach over alternative competitive methods.",
"title": ""
},
{
"docid": "9e5144241a78ad34045d23d137c84596",
"text": "The conventional approach to sampling signals or images follows the celebrated Shannon sampling theorem: the sampling rate must be at least twice the maximum frequency present in the signal (the so-called Nyquist rate). In fact, this principle underlies nearly all signal acquisition protocols used in consumer audio and visual electronics, medical imaging devices, radio receivers, and so on. In the field of data conversion, for example, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation: the signal is uniformly sampled at or above the Nyquist rate. This paper surveys the theory of compressive sampling also known as compressed sensing, or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. The CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use. To make this possible, compressive sampling relies on two tenets, namely, sparsity, which pertains to the signals of interest, and incoherence, which pertains to the sensing modality. • Sparsity expresses the idea that the “information rate” of a continuous time signal may be much smaller than suggested by its bandwidth, or that a discrete-time signal depends on a number of degrees of freedom which is comparably much smaller than its (finite) length. More precisely, compressive sampling exploits the fact that many natural signals are sparse or compressible in the sense that they have concise representations when expressed in the proper basis Ψ. • Incoherence extends the duality between time and frequency and expresses the idea that objects having a sparse representation in Ψ must be spread out in the domain in which they are acquired, just as a Dirac or a spike in the time domain is spread out in the frequency domain. Put differently, incoherence says that unlike the signal of interest, the sampling/sensing waveforms have an extremely dense representation in Ψ.",
"title": ""
},
{
"docid": "46cc515d0d41e0027cc975f37d9e1f7b",
"text": "A distributed data-stream architecture finds application in sensor networks for monitoring environment and activities. In such a network, large numbers of sensors deliver continuous data to a central server. The rate at which the data is sampled at each sensor affects the communication resource and the computational load at the central server. In this paper, we propose a novel adaptive sampling technique where the sampling rate at each sensor adapts to the streaming-data characteristics. Our approach employs a Kalman-Filter (KF)-based estimation technique wherein the sensor can use the KF estimation error to adaptively adjust its sampling rate within a given range, autonomously. When the desired sampling rate violates the range, a new sampling rate is requested from the server. The server allocates new sampling rates under the constraint of available resources such that KF estimation error over all the active streaming sensors is minimized. Through empirical studies, we demonstrate the flexibility and effectiveness of our model.",
"title": ""
},
{
"docid": "6c41dc25f8d63da094732fd54a8497ff",
"text": "Robotics systems are complex, often consisted of basic services including SLAM for localization and mapping, Convolution Neural Networks for scene understanding, and Speech Recognition for user interaction, etc. Meanwhile, robots are mobile and usually have tight energy constraints, integrating these services onto an embedded platform with around 10 W of power consumption is critical to the proliferation of mobile robots. In this paper, we present a case study on integrating real-time localization, vision, and speech recognition services on a mobile SoC, Nvidia Jetson TX1, within about 10 W of power envelope. In addition, we explore whether offloading some of the services to cloud platform can lead to further energy efficiency while meeting the real-time requirements.",
"title": ""
},
{
"docid": "e6021e334415240dd813fa2baae36773",
"text": "In this study, we propose a discriminative training algorithm to jointly minimize mispronunciation detection errors (i.e., false rejections and false acceptances) and diagnosis errors (i.e., correctly pinpointing mispronunciations but incorrectly stating how they are wrong). An optimization procedure, similar to Minimum Word Error (MWE) discriminative training, is developed to refine the ML-trained HMMs. The errors to be minimized are obtained by comparing transcribed training utterances (including mispronunciations) with Extended Recognition Networks [3] which contain both canonical pronunciations and explicitly modeled mispronunciations. The ERN is compiled by handcrafted rules, or data-driven rules. Several conclusions can be drawn from the experiments: (1) data-driven rules are more effective than hand-crafted ones in capturing mispronunciations; (2) compared with the ML training baseline, discriminative training can reduce false rejections and diagnostic errors, though false acceptances increase slightly due to a small number of false-acceptance samples in the training set.",
"title": ""
},
{
"docid": "609651c6c87b634814a81f38d9bfbc67",
"text": "Resistance training (RT) has shown the most promise in reducing/reversing effects of sarcopenia, although the optimum regime specific for older adults remains unclear. We hypothesized myofiber hypertrophy resulting from frequent (3 days/wk, 16 wk) RT would be impaired in older (O; 60-75 yr; 12 women, 13 men), sarcopenic adults compared with young (Y; 20-35 yr; 11 women, 13 men) due to slowed repair/regeneration processes. Myofiber-type distribution and cross-sectional area (CSA) were determined at 0 and 16 wk. Transcript and protein levels of myogenic regulatory factors (MRFs) were assessed as markers of regeneration at 0 and 24 h postexercise, and after 16 wk. Only Y increased type I CSA 18% (P < 0.001). O showed smaller type IIa (-16%) and type IIx (-24%) myofibers before training (P < 0.05), with differences most notable in women. Both age groups increased type IIa (O, 16%; Y, 25%) and mean type II (O, 23%; Y, 32%) size (P < 0.05). Growth was generally most favorable in young men. Percent change scores on fiber size revealed an age x gender interaction for type I fibers (P < 0.05) as growth among Y (25%) exceeded that of O (4%) men. Myogenin and myogenic differentiation factor D (MyoD) mRNAs increased (P < 0.05) in Y and O, whereas myogenic factor (myf)-5 mRNA increased in Y only (P < 0.05). Myf-6 protein increased (P < 0.05) in both Y and O. The results generally support our hypothesis as 3 days/wk training led to more robust hypertrophy in Y vs. O, particularly among men. However, this differential hypertrophy adaptation was not explained by age variation in MRF expression.",
"title": ""
},
{
"docid": "d3107e466c5c8e84b578d0563f5c5644",
"text": "The recent popularity of mobile camera phones allows for new opportunities to gather important metadata at the point of capture. This paper describes a method for generating metadata for photos using spatial, temporal, and social context. We describe a system we implemented for inferring location information for pictures taken with camera phones and its performance evaluation. We propose that leveraging contextual metadata at the point of capture can address the problems of the semantic and sensory gaps. In particular, combining and sharing spatial, temporal, and social contextual metadata from a given user and across users allows us to make inferences about media content.",
"title": ""
},
{
"docid": "4d959fc84483618a1ea6648b16d2e4d2",
"text": "In this themed issue of the Journal of Sport & Exercise Psychology, we bring together an eclectic mix of papers focusing on how expert performers learn the skills needed to compete at the highest level in sport. In the preface, we highlight the value of adopting the expert performance approach as a systematic framework for the evaluation and development of expertise and expert performance in sport. We then place each of the empirical papers published in this issue into context and briefly outline their unique contributions to knowledge in this area. Finally, we highlight several potential avenues for future research in the hope of encouraging others to scientifically study how experts acquire the mechanisms mediating superior performance in sport and how coaches can draw on this knowledge to guide their athletes toward the most effective training activities.",
"title": ""
},
{
"docid": "19b5ec2f1347b458bccc79eb18b5bc39",
"text": "Objective: Cyber bullying is a combination of the word cyber and bullying where cyber basically means the Internet or on-line. In this case, cyber bullying will focus on getting in action with bullying by using the Internet or modern technologies such as on-line chats, online media and short messaging texts through social media. The current review aims to compile and summarize the results of relevant publications related to “cyber bullying.\" The review also includes discussing on relevant variables related to cyber bullying. Methods: Information from relevant publications addresses the demographics, prevalence, differences between cyber bullying and traditional bullying, bullying motivation, avenues to overcome it, preventions, coping mechanisms in relation to “cyber bullying” were retrieved and summarized. Results: The prevalence of cyber bullying ranges from 30% 55% and the contributing risk factors include positive association with perpetration, non-supportive school environment, and Internet risky behaviors. Both males and females have been equal weigh on being perpetrators and victims. The older groups with more technology exposures are more prone to be exposed to cyber bullying. With respect to individual components of bullying, repetition is less evident in cyber bullying and power imbalance is not measured by physicality but in terms of popularity and technical knowledge of the perpetrator. Conclusion: Due to the limited efforts centralized on the intervention, future researchers should focus on testing the efficacy of possible interventional programs and the effects of different roles in the intervention in order to curb the problem and prevent more deleterious effects of cyber bullying. ASEAN Journal of Psychiatry, Vol. 17 (1): January – June 2016: XX XX.",
"title": ""
},
{
"docid": "187127dd1ab5f97b1158a77a25ddce91",
"text": "We introduce stochastic variational inference for Gaussian process models. This enables the application of Gaussian process (GP) models to data sets containing millions of data points. We show how GPs can be variationally decomposed to depend on a set of globally relevant inducing variables which factorize the model in the necessary manner to perform variational inference. Our approach is readily extended to models with non-Gaussian likelihoods and latent variable models based around Gaussian processes. We demonstrate the approach on a simple toy problem and two real world data sets.",
"title": ""
},
{
"docid": "70bed43cdfd50586e803bf1a9c8b3c0a",
"text": "We design a way to model apps as vectors, inspired by the recent deep learning approach to vectorization of words called word2vec. Our method relies on how users use apps. In particular, we visualize the time series of how each user uses mobile apps as a “document”, and apply the recent word2vec modeling on these documents, but the novelty is that the training context is carefully weighted by the time interval between the usage of successive apps. This gives us the app2vec vectorization of apps. We apply this to industrial scale data from Yahoo! and (a) show examples that app2vec captures semantic relationships between apps, much as word2vec does with words, (b) show using Yahoo!'s extensive human evaluation system that 82% of the retrieved top similar apps are semantically relevant, achieving 37% lift over bag-of-word approach and 140% lift over matrix factorization approach to vectorizing apps, and (c) finally, we use app2vec to predict app-install conversion and improve ad conversion prediction accuracy by almost 5%. This is the first industry scale design, training and use of app vectorization.",
"title": ""
}
] |
scidocsrr
|
b5020ade07db21e3632fd1853a4be31a
|
Multi-dimensional trade-off considerations of the 750V micro pattern trench IGBT for electric drive train applications
|
[
{
"docid": "8b6758fdd357384c2032afd405bf2c6a",
"text": "A novel 1200 V Insulated Gate Bipolar Transistor (IGBT) for high-speed switching that combines Shorted Dummy-cell (SD) to control carrier extraction at the emitter side and P/P- collector to reduce hole injection from the backside is proposed. The SD-IGBT with P/P- collector has achieved 37 % reduction of turn-off power dissipation compared with a conventional Floating Dummy-cell (FD) IGBT. The SD-IGBT with P/P- collector also has high turn-off current capability because it extracts carriers uniformly from the dummy-cell. These results show the proposed device has a ideal carrier profile for high-speed switching.",
"title": ""
}
] |
[
{
"docid": "b150c18332645bf46e7f2e8ababbcfc4",
"text": "Wilkinson Power Dividers/Combiners The in-phase power combiners and dividers are important components of the RF and microwave transmitters when it is necessary to deliver a high level of the output power to antenna, especially in phased-array systems. In this case, it is also required to provide a high degree of isolation between output ports over some frequency range for identical in-phase signals with equal amplitudes. Figure 19(a) shows a planar structure of the basic parallel beam N-way divider/combiner, which provides a combination of powers from the N signal sources. Here, the input impedance of the N transmission lines (connected in parallel) with the characteristic impedance of Z0 each is equal to Z0/N. Consequently, an additional quarterwave transmission line with the characteristic impedance",
"title": ""
},
{
"docid": "ce3e480e50ffc7a79c3dbc71b07ec9f7",
"text": "A relatively recent advance in cognitive neuroscience has been multi-voxel pattern analysis (MVPA), which enables researchers to decode brain states and/or the type of information represented in the brain during a cognitive operation. MVPA methods utilize machine learning algorithms to distinguish among types of information or cognitive states represented in the brain, based on distributed patterns of neural activity. In the current investigation, we propose a new approach for representation of neural data for pattern analysis, namely a Mesh Learning Model. In this approach, at each time instant, a star mesh is formed around each voxel, such that the voxel corresponding to the center node is surrounded by its p-nearest neighbors. The arc weights of each mesh are estimated from the voxel intensity values by least squares method. The estimated arc weights of all the meshes, called Mesh Arc Descriptors (MADs), are then used to train a classifier, such as Neural Networks, k-Nearest Neighbor, Naïve Bayes and Support Vector Machines. The proposed Mesh Model was tested on neuroimaging data acquired via functional magnetic resonance imaging (fMRI) during a recognition memory experiment using categorized word lists, employing a previously established experimental paradigm (Öztekin & Badre, 2011). Results suggest that the proposed Mesh Learning approach can provide an effective algorithm for pattern analysis of brain activity during cognitive processing.",
"title": ""
},
{
"docid": "fb162c94248297f35825ff1022ad2c59",
"text": "This article traces the evolution of ambulance location and relocation models proposed over the past 30 years. The models are classified in two main categories. Deterministic models are used at the planning stage and ignore stochastic considerations regarding the availability of ambulances. Probabilistic models reflect the fact that ambulances operate as servers in a queueing system and cannot always answer a call. In addition, dynamic models have been developed to repeatedly relocate ambulances throughout the day. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "1e8e4364427d18406594af9ad3a73a28",
"text": "The Internet Addiction Scale (IAS) is a self-report instrument based on the 7 Diagnostic and Statistical Manual of Mental Disorders (4th ed.; American Psychiatric Association, 1994) substance dependence criteria and 2 additional criteria recommended by Griffiths (1998). The IAS was administered to 233 undergraduates along with 4 measures pertaining to loneliness and boredom proneness. An item reliability analysis reduced the initial scale from 36 to 31 items (with a Cronbach's alpha of .95). A principal-components analysis indicated that the IAS consisted mainly of one factor. Multiple regression analyses revealed that Family and Social Loneliness and Boredom Proneness were significantly correlated with the IAS; Family and Social Loneliness uniquely predicted IAS scores. No evidence for widespread Internet addiction was found.",
"title": ""
},
{
"docid": "2547e6e8138c49b76062e241391dfc1d",
"text": "Methods of deep neural networks (DNNs) have recently demonstrated superior performance on a number of natural language processing tasks. However, in most previous work, the models are learned based on either unsupervised objectives, which does not directly optimize the desired task, or singletask supervised objectives, which often suffer from insufficient training data. We develop a multi-task DNN for learning representations across multiple tasks, not only leveraging large amounts of cross-task data, but also benefiting from a regularization effect that leads to more general representations to help tasks in new domains. Our multi-task DNN approach combines tasks of multiple-domain classification (for query classification) and information retrieval (ranking for web search), and demonstrates significant gains over strong baselines in a comprehensive set of domain adaptation.",
"title": ""
},
{
"docid": "b49cc6cc439e153650c858f65f97b3d7",
"text": "The evolution of mobile malware poses a serious threat to smartphone security. Today, sophisticated attackers can adapt by maximally sabotaging machine-learning classifiers via polluting training data, rendering most recent machine learning-based malware detection tools (such as Drebin and DroidAPIMiner) ineffective. In this paper, we explore the feasibility of constructing crafted malware samples; examine how machine-learning classifiers can be misled under three different threat models; then conclude that injecting carefully crafted data into training data can significantly reduce detection accuracy. To tackle the problem, we propose KuafuDet, a two-phase learning enhancing approach that learns mobile malware by adversarial detection. KuafuDet includes an offline training phase that selects and extracts features from the training set, and an online detection phase that utilizes the classifier trained by the first phase. To further address the adversarial environment, these two phases are intertwined through a self-adaptive learning scheme, wherein an automated camouflage detector is introduced to filter the suspicious false negatives and feed them back into the training phase. We finally show KuafuDet significantly reduces false negatives and boosts the detection accuracy by at least 15%. Experiments on more than 250,000 mobile applications demonstrate that KuafuDet is scalable and can be highly effective as a standalone system.",
"title": ""
},
{
"docid": "abbdc23d1c8833abda16f477dddb45fd",
"text": "Recently introduced generative adversarial networks (GANs) have been shown numerous promising results to generate realistic samples. In the last couple of years, it has been studied to control features in synthetic samples generated by the GAN. Auxiliary classifier GAN (ACGAN), a conventional method to generate conditional samples, employs a classification layer in discriminator to solve the problem. However, in this paper, we demonstrate that the auxiliary classifier can hardly provide good guidance for training of the generator, where the classifier suffers from overfitting. Since the generator learns from classification loss, such a problem has a chance to hinder the training. To overcome this limitation, here, we propose a controllable GAN (ControlGAN) structure. By separating a feature classifier from the discriminator, the classifier can be trained with data augmentation technique, which can support to make a fine classifier. Evaluated with the CIFAR-10 dataset, ControlGAN outperforms AC-WGAN-GP which is an improved version of the ACGAN, where Inception score of the ControlGAN is 8.61 ± 0.10. Furthermore, we demonstrate that the ControlGAN can generate intermediate features and opposite features for interpolated input and extrapolated input labels that are not used in the training process. It implies that the ControlGAN can significantly contribute to the variety of generated samples.",
"title": ""
},
{
"docid": "591327371e942690a88265233fefc548",
"text": "The comb fingers of high aspect ratio structures fabricated by micromachining technology are usually not parallel. Effects of the inclination of the fingers and edge effect on the capacitance, driving electrostatic force, and electrostatic spring constant are studied. The complex nonlinear air damping in the 3-D resonators is also determined accurately. The governing equations are presented to describe the complex dynamic problem by taking both linear and nonlinear mechanical spring stiffness constants into account. The dynamic responses of the micro-resonator driven by electrostatic combs are investigated using the multiscale method. Stability analysis is presented using the maximum Lyapunov index map, and effects of vacuum pressure on the frequency tuning and stability are also discussed. The comparisons show that the numerical results agree well with the experimental data reported in the literature, and it verified the validity of the presented dynamic model. The results also demonstrate that the inclination of the fingers causes the resonance frequency to increase and the electrostatic spring to harden under applied dc voltage. Therefore, it can provide an effective approach to balance the traditional resonance frequency decreasing and stiffness softening from driving electrostatic force. The inclination of the fingers can be helpful for strengthening the stability of the MEMS resonators, and avoiding the occurrence of pull-in.",
"title": ""
},
{
"docid": "48d1f79cd3b887cced3d3a2913a25db3",
"text": "Children's use of electronic media, including Internet and video gaming, has increased dramatically to an average in the general population of roughly 3 h per day. Some children cannot control their Internet use leading to increasing research on \"internet addiction.\" The objective of this article is to review the research on ADHD as a risk factor for Internet addiction and gaming, its complications, and what research and methodological questions remain to be addressed. The literature search was done in PubMed and Psychinfo, as well as by hand. Previous research has demonstrated rates of Internet addiction as high as 25% in the population and that it is addiction more than time of use that is best correlated with psychopathology. Various studies confirm that psychiatric disorders, and ADHD in particular, are associated with overuse, with severity of ADHD specifically correlated with the amount of use. ADHD children may be vulnerable since these games operate in brief segments that are not attention demanding. In addition, they offer immediate rewards with a strong incentive to increase the reward by trying the next level. The time spent on these games may also exacerbate ADHD symptoms, if not directly then through the loss of time spent on more developmentally challenging tasks. While this is a major issue for many parents, there is no empirical research on effective treatment. Internet and off-line gaming overuse and addiction are serious concerns for ADHD youth. Research is limited by the lack of measures for youth or parents, studies of children at risk, and studies of impact and treatment.",
"title": ""
},
{
"docid": "ae5497a11458851438d6cc86daec189a",
"text": "Automated activity recognition enables a wide variety of applications related to child and elderly care, disease diagnosis and treatment, personal health or sports training, for which it is key to seamlessly determine and log the user’s motion. This work focuses on exploring the use of smartphones to perform activity recognition without interfering in the user’s lifestyle. Thus, we study how to build an activity recognition system to be continuously executed in a mobile device in background mode. The system relies on device’s sensing, processing and storing capabilities to estimate significant movements/postures (walking at different paces—slow, normal, rush, running, sitting, standing). In order to evaluate the combinations of sensors, features and algorithms, an activity dataset of 16 individuals has been gathered. The performance of a set of lightweight classifiers (Naïve Bayes, Decision Table and Decision Tree) working on different sensor data has been fully evaluated and optimized in terms of accuracy, computational cost and memory fingerprint. Results have pointed out that a priori information on the relative position of the mobile device with respect to the user’s body enhances the estimation accuracy. Results show that computational low-cost Decision Tables using the best set of features among mean and variance and considering all the sensors (acceleration, gravity, linear acceleration, magnetometer, gyroscope) may be enough to get an activity estimation accuracy of around 88 % (78 % is the accuracy of the Naïve Bayes algorithm with the same characteristics used as a baseline). To demonstrate its applicability, the activity recognition system has been used to enable a mobile application to promote active lifestyles.",
"title": ""
},
{
"docid": "4b544bb34c55e663cdc5f0a05201e595",
"text": "BACKGROUND\nThis study seeks to examine a multidimensional model of student motivation and engagement using within- and between-network construct validation approaches.\n\n\nAIMS\nThe study tests the first- and higher-order factor structure of the motivation and engagement wheel and its corresponding measurement tool, the Motivation and Engagement Scale - High School (MES-HS; formerly the Student Motivation and Engagement Scale).\n\n\nSAMPLE\nThe study draws upon data from 12,237 high school students from 38 Australian high schools.\n\n\nMETHODS\nThe hypothesized 11-factor first-order structure and the four-factor higher-order structure, their relationship with a set of between-network measures (class participation, enjoyment of school, educational aspirations), factor invariance across gender and year-level, and the effects of age and gender are examined using confirmatory factor analysis and structural equation modelling.\n\n\nRESULTS\nIn terms of within-network validity, (1) the data confirm that the 11-factor and higher-order factor models of motivation and engagement are good fitting and (2) multigroup tests showed invariance across gender and year levels. In terms of between-network validity, (3) correlations with enjoyment of school, class participation and educational aspirations are in the hypothesized directions, and (4) girls reflect a more adaptive pattern of motivation and engagement, and year-level findings broadly confirm hypotheses that middle high school students seem to reflect a less adaptive pattern of motivation and engagement.\n\n\nCONCLUSION\nThe first- and higher-order structures hold direct implications for educational practice and directions for future motivation and engagement research.",
"title": ""
},
{
"docid": "af2ef011b7636d12a83003e32755f840",
"text": "This paper deals with the impact of fault prediction techniques on checkpointing strategies. We extend the classical first-order analysis of Young and Daly in the presence of a fault prediction system, characterized by its recall and its precision. In this framework, we provide an optimal algorithm to decide when to take predictions into account, and we derive the optimal value of the checkpointing period. These results allow to analytically assess the key parameters that impact the performance of fault predictors at very large scale. Key-words: Fault-tolerance, checkpointing, prediction, algorithms, model, exascale ∗ LIP, École Normale Supérieure de Lyon, France † University of Tennessee Knoxville, USA ‡ Institut Universitaire de France § INRIA ha l-0 07 88 31 3, v er si on 1 14 F eb 2 01 3 Étude de l’impact de la prédiction de fautes sur les stratégies de protocoles de checkpoint Résumé : Ce travail considère l’impact des techniques de prédiction de fautes sur les stratégies de protocoles de sauvegarde de points de reprise (checkpoints) et de redémarrage. Nous étendons l’analyse classique de Young en présence d’un système de prédiction de fautes, qui est caractérisé par son rappel (taux de pannes prévues sur nombre total de pannes) et par sa précision (taux de vraies pannes parmi le nombre total de pannes annoncées). Dans ce travail, nous avons pu obtenir la valeur optimale de la période de checkpoint (minimisant ainsi le gaspillage de l’utilisation des ressources dû au coût de prise de ces points de sauvegarde) dans différents scénarios. Ce papier pose les fondations théoriques pour de futures expériences et une validation du modèle. Mots-clés : Tolérance aux pannes, checkpoint, prédiction, algorithmes, modèle, exascale ha l-0 07 88 31 3, v er si on 1 14 F eb 2 01 3 Checkpointing algorithms and fault prediction 3",
"title": ""
},
{
"docid": "ba4df2305d4f292a6ee0f033e58d7a16",
"text": "Reliable and real-time 3D reconstruction and localization functionality is a crucial prerequisite for the navigation of actively controlled capsule endoscopic robots as an emerging, minimally invasive diagnostic and therapeutic technology for use in the gastrointestinal (GI) tract. In this study, we propose a fully dense, non-rigidly deformable, strictly real-time, intraoperative map fusion approach for actively controlled endoscopic capsule robot applications which combines magnetic and vision-based localization, with non-rigid deformations based frame-to-model map fusion. The performance of the proposed method is evaluated using four different ex-vivo porcine stomach models. Across different trajectories of varying speed and complexity, and four different endoscopic cameras, the root mean square surface reconstruction errors vary from 1.58 to 2.17 cm.",
"title": ""
},
{
"docid": "16a384727d6a323437a0b6ed3cdcc230",
"text": "The ability to learn from a small number of examples has been a difficult problem in machine learning since its inception. While methods have succeeded with large amounts of training data, research has been underway in how to accomplish similar performance with fewer examples, known as one-shot or more generally few-shot learning. This technique has been shown to have promising performance, but in practice requires fixed-size inputs making it impractical for production systems where class sizes can vary. This impedes training and the final utility of few-shot learning systems. This paper describes an approach to constructing and training a network that can handle arbitrary example sizes dynamically as the system is used.",
"title": ""
},
{
"docid": "9327a13308cd713bcfb3b4717eaafef0",
"text": "A review of both laboratory and field studies on the effects of setting goals when performing a task found that in 90% of the studies, specific and challenging goals lead to higher performance than easy goals, \"do your best\" goals, or no goals. Goals affect performance by directing attention, mobilizing effort, increasing persistence, and motivating strategy development. Goal setting is most likely to improve task performance when the goals are specific and sufficiently challenging, the subjects have sufficient ability (and ability differences are controlled), feedback is provided to show progress in relation to the goal, rewards such as money are given for goal attainment, the experimenter or manager is supportive, and assigned goals are accepted by the individual. No reliable individual differences have emerged in goal-setting studies, probably because the goals were typically assigned rather than self-set. Need for achievement and self-esteem may be the most promising individual difference variables.",
"title": ""
},
{
"docid": "79f10f0b7da7710ce68d9df6212579b6",
"text": "The Internet is probably the most successful distributed computing system ever. However, our capabilities for data querying and manipulation on the internet are primordial at best. The user expectations are enhancing over the period of time along with increased amount of operational data past few decades. The data-user expects more deep, exact, and detailed results. Result retrieval for the user query is always relative o the pattern of data storage and index. In Information retrieval systems, tokenization is an integrals part whose prime objective is to identifying the token and their count. In this paper, we have proposed an effective tokenization approach which is based on training vector and result shows that efficiency/ effectiveness of proposed algorithm. Tokenization on documents helps to satisfy user’s information need more precisely and reduced search sharply, is believed to be a part of information retrieval. Pre-processing of input document is an integral part of Tokenization, which involves preprocessing of documents and generates its respective tokens which is the basis of these tokens probabilistic IR generate its scoring and gives reduced search space. The comparative analysis is based on the two parameters; Number of Token generated, Pre-processing time.",
"title": ""
},
{
"docid": "de408de1915d43c4db35702b403d0602",
"text": "real-time population health assessment and monitoring D. L. Buckeridge M. Izadi A. Shaban-Nejad L. Mondor C. Jauvin L. Dubé Y. Jang R. Tamblyn The fragmented nature of population health information is a barrier to public health practice. Despite repeated demands by policymakers, administrators, and practitioners to develop information systems that provide a coherent view of population health status, there has been limited progress toward developing such an infrastructure. We are creating an informatics platform for describing and monitoring the health status of a defined population by integrating multiple clinical and administrative data sources. This infrastructure, which involves a population health record, is designed to enable development of detailed portraits of population health, facilitate monitoring of population health indicators, enable evaluation of interventions, and provide clinicians and patients with population context to assist diagnostic and therapeutic decision-making. In addition to supporting public health professionals, clinicians, and the public, we are designing the infrastructure to provide a platform for public health informatics research. This early report presents the requirements and architecture for the infrastructure and describes the initial implementation of the population health record, focusing on indicators of chronic diseases related to obesity.",
"title": ""
},
{
"docid": "bb12f0a1ecace2493b83c664bdfb7d9b",
"text": "Information retrieval is concerned with representing content in a form that can be easily accessed by users with information needs [61, 65]. A definition at this level of generality applies equally well to any index-based retrieval system or database application; so let us focus the topic a little more carefully. Information retrieval, as a field, works primarily with highly unstructured content, such as text documents written in natural language; it deals with information needs that are generally not formulated according to precise specifications; and its criteria for success are based in large part on the demands of a diverse set of human users. Our purpose in this short article is not to provide a survey of the field of information retrieval — for this we refer the reader to texts and surveys such as [25, 29, 51, 60, 61, 62, 63, 65, 70]. Rather, we wish to discuss some specific applications of techniques from linear algebra to information retrieval and hypertext analysis. In particular, we focus on spectral methods — the use of eigenvectors and singular vectors of matrices — and their role in these areas. After briefly introducing the use of vector-space models in information retrieval [52, 65], we describe the application of the singular value decomposition to dimensionreduction, through the Latent Semantic Indexing technique [14]. We contrast this with several other approaches to clustering and dimension-reduction based on vector-space models.",
"title": ""
},
{
"docid": "17813a603f0c56c95c96f5b2e0229026",
"text": "Geographic ranges are estimated for brachiopod and bivalve species during the late Middle (mid-Givetian) to the middle Late (terminal Frasnian) Devonian to investigate range changes during the time leading up to and including the Late Devonian biodiversity crisis. Species ranges were predicted using GARP (Genetic Algorithm using Rule-set Prediction), a modeling program developed to predict fundamental niches of modern species. This method was applied to fossil species to examine changing ranges during a critical period of Earth’s history. Comparisons of GARP species distribution predictions with historical understanding of species occurrences indicate that GARP models predict accurately the presence of common species in some depositional settings. In addition, comparison of GARP distribution predictions with species-range reconstructions from geographic information systems (GIS) analysis suggests that GARP modeling has the potential to predict species ranges more completely and tailor ranges more specifically to environmental parameters than GIS methods alone. Thus, GARP modeling is a potentially useful tool for predicting fossil species ranges and can be used to address a wide array of palaeontological problems. The use of GARP models allows a statistical examination of the relationship of geographic range size with species survival during the Late Devonian. Large geographic range was statistically associated with species survivorship across the crisis interval for species examined in the linguiformis Zone but not for species modeled in the preceding Lower varcus or punctata zones. The enhanced survival benefit of having a large geographic range, therefore, appears to be restricted to the biodiversity crisis interval.",
"title": ""
},
{
"docid": "95974e6e910799e478a1d0c9cda86bcd",
"text": "Recently, there has been an explosion of cloud-based services that enable developers to include a spectrum of recognition services, such as emotion recognition, in their applications. The recognition of emotions is a challenging problem, and research has been done on building classifiers to recognize emotion in the open world. Often, learned emotion models are trained on data sets that may not sufficiently represent a target population of interest. For example, many of these on-line services have focused on training and testing using a majority representation of adults and thus are tuned to the dynamics of mature faces. For applications designed to serve an older or younger age demographic, using the outputs from these pre-defined models may result in lower performance rates than when using a specialized classifier. Similar challenges with biases in performance arise in other situations where datasets in these large-scale on-line services have a non-representative ratio of the desired class of interest. We consider the challenge of providing application developers with the power to utilize pre-constructed cloud-based services in their applications while still ensuring satisfactory performance for their unique workload of cases. We focus on biases in emotion recognition as a representative scenario to evaluate an approach to improving recognition rates when an on-line pre-trained classifier is used for recognition of a class that may have a minority representation in the training set. We discuss a hierarchical classification approach to address this challenge and show that the average recognition rate associated with the most difficult emotion for the minority class increases by 41.5% and the overall recognition rate for all classes increases by 17.3% when using this approach.",
"title": ""
}
] |
scidocsrr
|
a254189588a62d5bcead728bfa07c8bc
|
How the relationship between the crisis life cycle and mass media content can better inform crisis communication.
|
[
{
"docid": "aaebd4defcc22d6b1e8e617ab7f3ec70",
"text": "In the American political process, news discourse concerning public policy issues is carefully constructed. This occurs in part because both politicians and interest groups take an increasingly proactive approach to amplify their views of what an issue is about. However, news media also play an active role in framing public policy issues. Thus, in this article, news discourse is conceived as a sociocognitive process involving all three players: sources, journalists, and audience members operating in the universe of shared culture and on the basis of socially defined roles. Framing analysis is presented as a constructivist approach to examine news discourse with the primary focus on conceptualizing news texts into empirically operationalizable dimensions—syntactical, script, thematic, and rhetorical structures—so that evidence of the news media's framing of issues in news texts may be gathered. This is considered an initial step toward analyzing the news discourse process as a whole. Finally, an extended empirical example is provided to illustrate the applications of this conceptual framework of news texts.",
"title": ""
}
] |
[
{
"docid": "ff1f503123ce012b478a3772fa9568b5",
"text": "Cementoblastoma is a rare odontogenic tumor that has distinct clinical and radiographical features normally suggesting the correct diagnosis. The clinicians and oral pathologists must have in mind several possible differential diagnoses that can lead to a misdiagnosed lesion, especially when unusual clinical features are present. A 21-year-old male presented with dull pain in lower jaw on right side. The clinical inspection of the region was non-contributory to the diagnosis but the lesion could be appreciated on palpation. A swelling was felt in the alveolar region of mandibular premolar-molar on right side. Radiographic examination was suggestive of benign cementoblastoma and the tumor was removed surgically along with tooth. The diagnosis was confirmed by histopathologic study. Although this neoplasm is rare, the dental practitioner should be aware of the clinical, radiographical and histopathological features that will lead to its early diagnosis and treatment.",
"title": ""
},
{
"docid": "d4e22e73965bcd9fdb1628711d6beb44",
"text": "This project is designed to measure heart beat (pulse count), by using embedded technology. In this project simultaneously it can measure and monitor the patient’s condition. This project describes the design of a simple, low-cost controller based wireless patient monitoring system. Heart rate of the patient is measured from the thumb finger using IRD (Infra Red Device sensor).Pulse counting sensor is arranged to check whether the heart rate is normal or not. So that a SMS is sent to the mobile number using GSM module interfaced to the controller in case of abnormal condition. A buzzer alert is also given. The heart rate can be measured by monitoring one's pulse using specialized medical devices such as an electrocardiograph (ECG), portable device e.g. The patient heart beat monitoring systems is one of the major wrist strap watch, or any other commercial heart rate monitors which normally consisting of a chest strap with electrodes. Despite of its accuracy, somehow it is costly, involve many clinical settings and patient must be attended by medical experts for continuous monitoring.",
"title": ""
},
{
"docid": "02effa562af44c07076b4ab853642945",
"text": "Purpose – The purpose of this paper is to explore the impact of corporate social responsibility (CSR) engagement on employee motivation, job satisfaction and organizational identification as well as employee citizenship in voluntary community activities. Design/methodology/approach – Employees (n 1⁄4 224) of a major airline carrier participated in the study based on a 54-item questionnaire, containing four different sets of items related to volunteering, motivation, job satisfaction and organizational identification. The employee sample consisted of two sub-samples drawn randomly from the company pool of employees, differentiating between active participants in the company’s CSR programs (APs) and non participants (NAPs). Findings – Significant differences were found between APs and NAPs on organizational identification and motivation, but not for job satisfaction. In addition, positive significant correlations between organizational identification, volunteering, job satisfaction, and motivation were obtained. These results are interpreted within the broader context that ties social identity theory (SIT) and organizational identification increase. Practical implications – The paper contributes to the understanding of the interrelations between CSR and other organizational behavior constructs. Practitioners can learn from this study how to increase job satisfaction and organizational identification. Both are extremely important for an organization’s sustainability. Originality/value – This is a first attempt to investigate the relationship between CSR, organizational identification and motivation, comparing two groups from the same organization. The paper discusses the questions: ‘‘Are there potential gains at the intra-organizational level in terms of enhanced motivation and organizational attitudes on the part of employees?’’ and ‘‘Does volunteering or active participation in CSR yield greater benefits for involved employees in terms of their motivation, job satisfaction and identification?’’.",
"title": ""
},
{
"docid": "5cf444f83a8b4b3f9482e18cea796348",
"text": "This paper investigates L-shaped iris (LSI) embedded in substrate integrated waveguide (SIW) structures. A lumped element equivalent circuit is utilized to thoroughly discuss the iris behavior in a wide frequency band. This structure has one more degree of freedom and design parameter compared with the conventional iris structures; therefore, it enables design flexibility with enhanced performance. The LSI is utilized to realize a two-pole evanescent-mode filter with an enhanced stopband and a dual-band filter combining evanescent and ordinary modes excitation. Moreover, a prescribed filtering function is demonstrated using the lumped element analysis not only including evanescent-mode pole, but also close-in transmission zero. The proposed LSI promises to substitute the conventional posts in (SIW) filter design.",
"title": ""
},
{
"docid": "09c19ae7eea50f269ee767ac6e67827b",
"text": "In the last years Python has gained more and more traction in the scientific community. Projects like NumPy, SciPy, and Matplotlib have created a strong foundation for scientific computing in Python and machine learning packages like scikit-learn or packages for data analysis like Pandas are building on top of it. In this paper we present Wyrm ( https://github.com/bbci/wyrm ), an open source BCI toolbox in Python. Wyrm is applicable to a broad range of neuroscientific problems. It can be used as a toolbox for analysis and visualization of neurophysiological data and in real-time settings, like an online BCI application. In order to prevent software defects, Wyrm makes extensive use of unit testing. We will explain the key aspects of Wyrm’s software architecture and design decisions for its data structure, and demonstrate and validate the use of our toolbox by presenting our approach to the classification tasks of two different data sets from the BCI Competition III. Furthermore, we will give a brief analysis of the data sets using our toolbox, and demonstrate how we implemented an online experiment using Wyrm. With Wyrm we add the final piece to our ongoing effort to provide a complete, free and open source BCI system in Python.",
"title": ""
},
{
"docid": "a3b919ee9780c92668c0963f23983f82",
"text": "A terrified woman called police because her ex-boyfriend was breaking into her home. Upon arrival, police heard screams coming from the basement. They stopped halfway down the stairs and found the ex-boyfriend pointing a rifle at the floor. Officers observed a strange look on the subject’s face as he slowly raised the rifle in their direction. Both officers fired their weapons, killing the suspect. The rifle was not loaded.",
"title": ""
},
{
"docid": "b90ec3edc349a98c41d1106b3c6628ba",
"text": "Conventional speech recognition system is constructed by unfolding the spectral-temporal input matrices into one-way vectors and using these vectors to estimate the affine parameters of neural network according to the vector-based error backpropagation algorithm. System performance is constrained because the contextual correlations in frequency and time horizons are disregarded and the spectral and temporal factors are excluded. This paper proposes a spectral-temporal factorized neural network (STFNN) to tackle this weakness. The spectral-temporal structure is preserved and factorized in hidden layers through two ways of factor matrices which are trained by using the factorized error backpropagation. Affine transformation in standard neural network is generalized to the spectro-temporal factorization in STFNN. The structural features or patterns are extracted and forwarded towards the softmax outputs. A deep neural factorization is built by cascading a number of factorization layers with fully-connected layers for speech recognition. An orthogonal constraint is imposed in factor matrices for redundancy reduction. Experimental results show the merit of integrating the factorized features in deep feedforward and recurrent neural networks for speech recognition.",
"title": ""
},
{
"docid": "2802d66dfa1956bf83649614b76d470e",
"text": "Given a classification task, what is the best way to teach the resulting boundary to a human? While machine learning techniques can provide excellent methods for finding the boundary, including the selection of examples in an online setting, they tell us little about how we would teach a human the same task. We propose to investigate the problem of example selection and presentation in the context of teaching humans, and explore a variety of mechanisms in the interests of finding what may work best. In particular, we begin with the baseline of random presentation and then examine combinations of several mechanisms: the indication of an example’s relative difficulty, the use of the shaping heuristic from the cognitive science literature (moving from easier examples to harder ones), and a novel kernel-based “coverage model” of the subject’s mastery of the task. From our experiments on 54 human subjects learning and performing a pair of synthetic classification tasks via our teaching system, we found that we can achieve the greatest gains with a combination of shaping and the coverage model.",
"title": ""
},
{
"docid": "26bc2aa9b371e183500e9c979c1fff65",
"text": "Complex regional pain syndrome (CRPS) is clinically characterized by pain, abnormal regulation of blood flow and sweating, edema of skin and subcutaneous tissues, trophic changes of skin, appendages of skin and subcutaneous tissues, and active and passive movement disorders. It is classified into type I (previously reflex sympathetic dystrophy) and type II (previously causalgia). Based on multiple evidence from clinical observations, experimentation on humans, and experimentation on animals, the hypothesis has been put forward that CRPS is primarily a disease of the central nervous system. CRPS patients exhibit changes which occur in somatosensory systems processing noxious, tactile and thermal information, in sympathetic systems innervating skin (blood vessels, sweat glands), and in the somatomotor system. This indicates that the central representations of these systems are changed and data show that CRPS, in particular type I, is a systemic disease involving these neuronal systems. This way of looking at CRPS shifts the attention away from interpreting the syndrome conceptually in a narrow manner and to reduce it to one system or to one mechanism only, e. g., to sympathetic-afferent coupling. It will further our understanding why CRPS type I may develop after a trivial trauma, after a trauma being remote from the affected extremity exhibiting CRPS, and possibly after immobilization of an extremity. It will explain why, in CRPS patients with sympathetically maintained pain, a few temporary blocks of the sympathetic innervation of the affected extremity sometimes lead to long-lasting (even permanent) pain relief and to resolution of the other changes observed in CRPS. This changed view will bring about a diagnostic reclassification and redefinition of CRPS and will have bearings on the therapeutic approaches. Finally it will shift the focus of research efforts.",
"title": ""
},
{
"docid": "4c39ff8119ddc75213251e7321c7e795",
"text": "Building and debugging distributed software remains extremely difficult. We conjecture that by adopting a data-centric approach to system design and by employing declarative programming languages, a broad range of distributed software can be recast naturally in a data-parallel programming model. Our hope is that this model can significantly raise the level of abstraction for programmers, improving code simplicity, speed of development, ease of software evolution, and program correctness.\n This paper presents our experience with an initial large-scale experiment in this direction. First, we used the Overlog language to implement a \"Big Data\" analytics stack that is API-compatible with Hadoop and HDFS and provides comparable performance. Second, we extended the system with complex distributed features not yet available in Hadoop, including high availability, scalability, and unique monitoring and debugging facilities. We present both quantitative and anecdotal results from our experience, providing some concrete evidence that both data-centric design and declarative languages can substantially simplify distributed systems programming.",
"title": ""
},
{
"docid": "ccc70871f57f25da6141a7083bdf5174",
"text": "This paper outlines and tests two agency models of dividends. According to the “outcome” model, dividends are the result of effective pressure by minority shareholders to force corporate insiders to disgorge cash. According to the “substitute” model, insiders interested in issuing equity in the future choose to pay dividends to establish a reputation for decent treatment of minority shareholders. The first model predicts that stronger minority shareholder rights should be associated with higher dividend payouts; the second model predicts the opposite. Tests on a cross-section of 4,000 companies from 33 countries with different levels of minority shareholder rights support the outcome agency model of dividends. The authors are from Harvard University, Harvard University, Harvard University and University of Chicago, respectively. They are grateful to Alexander Aganin for excellent research assistance, and to Lucian Bebchuk, Mihir Desai, Edward Glaeser, Denis Gromb, Oliver Hart, James Hines, Kose John, James Poterba, Roberta Romano, Raghu Rajan, Lemma Senbet, René Stulz, Daniel Wolfenzohn, Luigi Zingales, and two anonymous referees for helpful comments. 2 The so-called dividend puzzle (Black 1976) has preoccupied the attention of financial economists at least since Modigliani and Miller’s (1958, 1961) seminal work. This work established that, in a frictionless world, when the investment policy of a firm is held constant, its dividend payout policy has no consequences for shareholder wealth. Higher dividend payouts lead to lower retained earnings and capital gains, and vice versa, leaving total wealth of the shareholders unchanged. Contrary to this prediction, however, corporations follow extremely deliberate dividend payout strategies (Lintner (1956)). This evidence raises a puzzle: how do firms choose their dividend policies? In the United States and other countries, the puzzle is even deeper since many shareholders are taxed more heavily on their dividend receipts than on capital gains. The actual magnitude of this tax burden is debated (see Poterba and Summers (1985) and Allen and Michaely (1997)), but taxes generally make it even harder to explain dividend policies of firms. Economists have proposed a number of explanations of the dividend puzzle. Of these, particularly popular is the idea that firms can signal future profitability by paying dividends (Bhattacharya (1979), John and Williams (1985), Miller and Rock (1985), Ambarish, John, and Williams (1987)). Empirically, this theory had considerable initial success, since firms that initiate (or raise) dividends experience share price increases, and the converse is true for firms that eliminate (or cut) dividends (Aharony and Swary (1980), Asquith and Mullins (1983)). Recent results are more mixed, since current dividend changes do not help predict firms’ future earnings growth (DeAngelo, DeAngelo, and Skinner (1996) and Benartzi, Michaely, and Thaler (1997)). Another idea, which has received only limited attention until recently (e.g., Easterbrook (1984), Jensen (1986), Fluck (1998a, 1998b), Myers (1998), Gomes (1998), Zwiebel (1996)), is 3 that dividend policies address agency problems between corporate insiders and outside shareholders. According to these theories, unless profits are paid out to shareholders, they may be diverted by the insiders for personal use or committed to unprofitable projects that provide private benefits for the insiders. 
As a consequence, outside shareholders have a preference for dividends over retained earnings. Theories differ on how outside shareholders actually get firms to disgorge cash. The key point, however, is that failure to disgorge cash leads to its diversion or waste, which is detrimental to outside shareholders’ interest. The agency approach moves away from the assumptions of the Modigliani-Miller theorem by recognizing two points. First, the investment policy of the firm cannot be taken as independent of its dividend policy, and, in particular, paying out dividends may reduce the inefficiency of marginal investments. Second, and more subtly, the allocation of all the profits of the firm to shareholders on a pro-rata basis cannot be taken for granted, and in particular the insiders may get preferential treatment through asset diversion, transfer prices and theft, even holding the investment policy constant. In so far as dividends are paid on a pro-rata basis, they benefit outside shareholders relative to the alternative of expropriation of retained earnings. In this paper, we attempt to identify some of the basic elements of the agency approach to dividends, to understand its key implications, and to evaluate them on a cross-section of over 4,000 firms from 33 countries around the world. The reason for looking around the world is that the severity of agency problems to which minority shareholders are exposed differs greatly across countries, in part because legal protection of these shareholders vary (La Porta et al. (1997, 1998)). Empirically, we find that dividend policies vary across legal regimes in ways consistent with a particular version of the agency theory of dividends. Specifically, firms in common law 4 countries, where investor protection is typically better, make higher dividend payouts than firms in civil law countries do. Moreover, in common but not civil law countries, high growth firms make lower dividend payouts than low growth firms. These results support the version of the agency theory in which investors in good legal protection countries use their legal powers to extract dividends from firms, especially when reinvestment opportunities are poor. Section I of the paper summarizes some of the theoretical arguments. Section II describes the data. Section III presents our empirical findings. Section IV concludes. I. Theoretical Issues. A. Agency Problems and Legal Regimes Conflicts of interest between corporate insiders, such as managers and controlling shareholders, on the one hand, and outside investors, such as minority shareholders, on the other hand, are central to the analysis of the modern corporation (Berle and Means (1932), Jensen and Meckling (1976)). The insiders who control corporate assets can use these assets for a range of purposes that are detrimental to the interests of the outside investors. Most simply, they can divert corporate assets to themselves, through outright theft, dilution of outside investors through share issues to the insiders, excessive salaries, asset sales to themselves or other corporations they control at favorable prices, or transfer pricing with other entities they control (see Shleifer and Vishny (1997) for a discussion). Alternatively, insiders can use corporate assets to pursue investment strategies that yield them personal benefits of control, such as growth or diversification, without benefitting outside investors (e.g., Baumol (1959), Jensen (1986)). What is meant by insiders varies from country to country. 
In the United States, U.K., 5 Canada, and Australia, where ownership in large corporations is relatively dispersed, most large corporations are to a significant extent controlled by their managers. In most other countries, large firms typically have shareholders that own a significant fraction of equity, such as the founding families (La Porta, Lopez-de-Silanes, and Shleifer (1999)). The controlling shareholders can effectively determine the decisions of the managers (indeed, managers typically come from the controlling family), and hence the problem of managerial control per se is not as severe as it is in the rich common law countries. On the other hand, the controlling shareholders can implement policies that benefit themselves at the expense of minority shareholders. Regardless of the identity of the insiders, the victims of insider control are minority shareholders. It is these minority shareholders that would typically have a taste for dividends. One of the principal remedies to agency problems is the law. Corporate and other law gives outside investors, including shareholders, certain powers to protect their investment against expropriation by insiders. These powers in the case of shareholders range from the right to receive the same per share dividends as the insiders, to the right to vote on important corporate matters, including the election of directors, to the right to sue the company for damages. The very fact that this legal protection exists probably explains why becoming a minority shareholder is a viable investment strategy, as opposed to just being an outright giveaway of money to strangers who are under few if any obligations to give it back. As pointed out by La Porta et al. (1998), the extent of legal protection of outside investors differs enormously across countries. Legal protection consists of both the content of the laws and the quality of their enforcement. Some countries, including most notably the wealthy common law countries such as the U.S. and the U.K., provide effective protection of minority shareholders 6 so that the outright expropriation of corporate assets by the insiders is rare. Agency problems manifest themselves primarily through non-value-maximizing investment choices. In many other countries, the condition of outside investors is a good deal more precarious, but even there some protection does exist. La Porta et al. (1998) show in particular that common law countries appear to have the best legal protection of minority shareholders, whereas civil law countries, and most conspicuously the French civil law countries, have the weakest protection. The quality of investor protection, viewed as a proxy for lower agency costs, has been shown to matter for a number of important issues in corporate finance. For example, corporate ownership is more concentrated in countries with inferior shareholder protection (La Porta et al. (1998), La Porta, Lopez-de-Silanes, and Shleifer (1999)). The valuation and breadth of cap",
"title": ""
},
{
"docid": "a346607a5e2e6c48e07e3e34a2ec7b0d",
"text": "The development and professionalization of a video game requires tools for analyzing the practice of the players and teams, their tactics and strategies. These games are very popular and by nature numerical, they provide many tracks that we analyzed in terms of team play. We studied Defense of the Ancients (DotA), a Multiplayer Online Battle Arena (MOBA), where two teams battle in a game very similar to rugby or American football. Through topological measures – area of polygon described by the players, inertia, diameter, distance to the base – that are independent of the exact nature of the game, we show that the outcome of the match can be relevantly predicted. Mining e-sport’s tracks is opening interest in further application of these tools for analyzing real time sport. © 2014. Published by Elsevier B.V. Selection and/or peer review under responsibility of American Applied Science Research Institute",
"title": ""
},
{
"docid": "616b6db46d3a01730c3ea468b0a03fc5",
"text": "We demonstrate the surprising strength of unimodal baselines in multimodal domains, and make concrete recommendations for best practices in future research. Where existing work often compares against random or majority class baselines, we argue that unimodal approaches better capture and reflect dataset biases and therefore provide an important comparison when assessing the performance of multimodal techniques. We present unimodal ablations on three recent datasets in visual navigation and QA, seeing an up to 29% absolute gain in performance over published baselines.",
"title": ""
},
{
"docid": "119c20c537f833731965e0d8aeba0964",
"text": "The literature on Inverse Reinforcement Learning (IRL) typically assumes that humans take actions in order to minimize the expected value of a cost function, i.e., that humans are risk neutral. Yet, in practice, humans are often far from being risk neutral. To fill this gap, the objective of this paper is to devise a framework for risk-sensitive IRL in order to explicitly account for a human’s risk sensitivity. To this end, we propose a flexible class of models based on coherent risk measures, which allow us to capture an entire spectrum of risk preferences from risk-neutral to worst-case. We propose efficient non-parametric algorithms based on linear programming and semi-parametric algorithms based on maximum likelihood for inferring a human’s underlying risk measure and cost function for a rich class of static and dynamic decision-making settings. The resulting approach is demonstrated on a simulated driving game with ten human participants. Our method is able to infer and mimic a wide range of qualitatively different driving styles from highly risk-averse to risk-neutral in a data-efficient manner. Moreover, comparisons of the Risk-Sensitive (RS) IRL approach with a risk-neutral model show that the RS-IRL framework more accurately captures observed participant behavior both qualitatively and quantitatively, especially in scenarios where catastrophic outcomes such as collisions can occur.",
"title": ""
},
{
"docid": "bb815929889d93e19c6581c3f9a0b491",
"text": "This paper presents an HMM-MLP hybrid system to recognize complex date images written on Brazilian bank cheques. The system first segments implicitly a date image into sub-fields through the recognition process based on an HMM-based approach. Afterwards, the three obligatory date sub-fields are processed by the system (day, month and year). A neural approach has been adopted to work with strings of digits and a Markovian strategy to recognize and verify words. We also introduce the concept of meta-classes of digits, which is used to reduce the lexicon size of the day and year and improve the precision of their segmentation and recognition. Experiments show interesting results on date recognition.",
"title": ""
},
{
"docid": "2f2e5d62475918dc9cfd54522f480a11",
"text": "In this paper, we propose a very simple and elegant patch-based, machine learning technique for image denoising using the higher order singular value decomposition (HOSVD). The technique simply groups together similar patches from a noisy image (with similarity defined by a statistically motivated criterion) into a 3D stack, computes the HOSVD coefficients of this stack, manipulates these coefficients by hard thresholding, and inverts the HOSVD transform to produce the final filtered image. Our technique chooses all required parameters in a principled way, relating them to the noise model. We also discuss our motivation for adopting the HOSVD as an appropriate transform for image denoising. We experimentally demonstrate the excellent performance of the technique on grayscale as well as color images. On color images, our method produces state-of-the-art results, outperforming other color image denoising algorithms at moderately high noise levels. A criterion for optimal patch-size selection and noise variance estimation from the residual images (after denoising) is also presented.",
"title": ""
},
{
"docid": "b84d8b711738bbd889a3a88ba82f45c0",
"text": "Transmission over wireless channel is challenging. As such, different application required different signal processing approach of radio system. So, a highly reconfigurable radio system is on great demand as the traditional fixed and embedded radio system are not viable to cater the needs for frequently change requirements of wireless communication. A software defined radio or better known as an SDR, is a software-based radio platform that offers flexibility to deliver the highly reconfigurable system requirements. This approach allows a different type of communication system requirements such as standard, protocol, or signal processing method, to be deployed by using the same set of hardware and software such as USRP and GNU Radio respectively. For researchers, this approach has opened the door to extend their studies in simulation domain into experimental domain. However, the realization of SDR concept is inherently limited by the analog components of the hardware being used. Despite that, the implementation of SDR is still new yet progressing, thus, this paper intends to provide an insight about its viability as a high re-configurable platform for communication system. This paper presents the SDR-based transceiver of common digital modulation system by means of GNU Radio and USRP.",
"title": ""
},
{
"docid": "60a655d6b6d79f55151e871d2f0d4d34",
"text": "The clinical characteristics of drug hypersensitivity reactions are very heterogeneous as drugs can actually elicit all types of immune reactions. The majority of allergic reactions involve either drug-specific IgE or T cells. Their stimulation leads to quite distinct immune responses, which are classified according to Gell and Coombs. Here, an extension of this subclassification, which considers the distinct T-cell functions and immunopathologies, is presented. These subclassifications are clinically useful, as they require different treatment and diagnostic steps. Copyright © 2007 S. Karger AG, Basel",
"title": ""
},
{
"docid": "d80d52806cbbdd6148e3db094eabeed7",
"text": "We decided to test a surprisingly simple hypothesis; namely, that the relationship between an image of a scene and the chromaticity of scene illumination could be learned by a neural network. The thought was that if this relationship could be extracted by a neural network, then the trained network would be able to determine a scene's illumination from its image, which would then allow correction of the image colors to those relative to a standard illuminant, thereby providing color constancy. Using a database of surface reflectances and illuminants, along with the spectral sensitivity functions of our camera, we generated thousands of images of randomly selected illuminants lighting `scenes' of 1 to 60 randomly selected reflectances. During the learning phase the network is provided the image data along with the chromaticity of its illuminant. After training, the network outputs (very quickly) the chromaticity of the illumination given only the image data. We obtained surprisingly good estimates of he ambient illumination lighting from the network even when applied to scenes in our lab that were completely unrelated to the training data.",
"title": ""
},
{
"docid": "3e0f74c880165b5147864dfaa6a75c11",
"text": "Traditional hollow metallic waveguide manufacturing techniques are readily capable of producing components with high-precision geometric tolerances, yet generally lack the ability to customize individual parts on demand or to deliver finished components with low lead times. This paper proposes a Rapid-Prototyping (RP) method for relatively low-loss millimeter-wave hollow waveguides produced using consumer-grade stere-olithographic (SLA) Additive Manufacturing (AM) technology, in conjunction with an electroless metallization process optimized for acrylate-based photopolymer substrates. To demonstrate the capabilities of this particular AM process, waveguide prototypes are fabricated for the W- and D-bands. The measured insertion loss at W-band is between 0.12 dB/in to 0.25 dB/in, corresponding to a mean value of 0.16 dB/in. To our knowledge, this is the lowest insertion loss figure presented to date, when compared to other W-Band AM waveguide designs reported in the literature. Printed D-band waveguide prototypes exhibit a transducer loss of 0.26 dB/in to 1.01 dB/in, with a corresponding mean value of 0.65 dB/in, which is similar performance to a commercial metal waveguide.",
"title": ""
}
] |
scidocsrr
|
206bd53c2f28475975d72bf44504f279
|
Learning Clip Representations for Skeleton-Based 3D Action Recognition
|
[
{
"docid": "afee419227629f8044b5eb0addd65ce3",
"text": "Both Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) have shown improvements over Deep Neural Networks (DNNs) across a wide variety of speech recognition tasks. CNNs, LSTMs and DNNs are complementary in their modeling capabilities, as CNNs are good at reducing frequency variations, LSTMs are good at temporal modeling, and DNNs are appropriate for mapping features to a more separable space. In this paper, we take advantage of the complementarity of CNNs, LSTMs and DNNs by combining them into one unified architecture. We explore the proposed architecture, which we call CLDNN, on a variety of large vocabulary tasks, varying from 200 to 2,000 hours. We find that the CLDNN provides a 4-6% relative improvement in WER over an LSTM, the strongest of the three individual models.",
"title": ""
},
{
"docid": "4d69284c25e1a9a503dd1c12fde23faa",
"text": "Human pose estimation has been actively studied for decades. While traditional approaches rely on 2d data like images or videos, the development of Time-of-Flight cameras and other depth sensors created new opportunities to advance the field. We give an overview of recent approaches that perform human motion analysis which includes depthbased and skeleton-based activity recognition, head pose estimation, facial feature detection, facial performance capture, hand pose estimation and hand gesture recognition. While the focus is on approaches using depth data, we also discuss traditional image based methods to provide a broad overview of recent developments in these areas.",
"title": ""
},
{
"docid": "0d16b2f41e4285a5b89b31ed16f378a8",
"text": "Recognizing human actions from unknown and unseen (novel) views is a challenging problem. We propose a Robust Non-Linear Knowledge Transfer Model (R-NKTM) for human action recognition from novel views. The proposed R-NKTM is a deep fully-connected neural network that transfers knowledge of human actions from any unknown view to a shared high-level virtual view by finding a set of non-linear transformations that connects the views. The R-NKTM is learned from 2D projections of dense trajectories of synthetic 3D human models fitted to real motion capture data and generalizes to real videos of human actions. The strength of our technique is that we learn a single R-NKTM for all actions and all viewpoints for knowledge transfer of any real human action video without the need for re-training or fine-tuning the model. Thus, R-NKTM can efficiently scale to incorporate new action classes. R-NKTM is learned with dummy labels and does not require knowledge of the camera viewpoint at any stage. Experiments on three benchmark cross-view human action datasets show that our method outperforms existing state-of-the-art.",
"title": ""
}
] |
[
{
"docid": "743aeaa668ba32e6561e9e62015e24cd",
"text": "A smart city enables the effective utilization of resources and better quality of services to the citizens. To provide services such as air quality management, weather monitoring and automation of homes and buildings in a smart city, the basic parameters are temperature, humidity and CO2. This paper presents a customised design of an Internet of Things (IoT) enabled environment monitoring system to monitor temperature, humidity and CO2. In developed system, data is sent from the transmitter node to the receiver node. The data received at the receiver node is monitored and recorded in an excel sheet in a personal computer (PC) through a Graphical User Interface (GUI), made in LabVIEW. An Android application has also been developed through which data is transferred from LabVIEW to a smartphone, for monitoring data remotely. The results and the performance of the proposed system is discussed.",
"title": ""
},
{
"docid": "8d91b88e9f57181e9c5427b8578bc322",
"text": "AIM\n This paper reports on a study that looked at the characteristics of exemplary nurse leaders in times of change from the perspective of frontline nurses.\n\n\nBACKGROUND\n Large-scale changes in the health care system and their associated challenges have highlighted the need for strong leadership at the front line.\n\n\nMETHODS\n In-depth personal interviews with open-ended questions were the primary means of data collection. The study identified and explored six frontline nurses' perceptions of the qualities of nursing leaders through qualitative content analysis. This study was validated by results from the current literature.\n\n\nRESULTS\n The frontline nurses described several common characteristics of exemplary nurse leaders, including: a passion for nursing; a sense of optimism; the ability to form personal connections with their staff; excellent role modelling and mentorship; and the ability to manage crisis while guided by a set of moral principles. All of these characteristics pervade the current literature regarding frontline nurses' perspectives on nurse leaders.\n\n\nCONCLUSION\n This study identified characteristics of nurse leaders that allowed them to effectively assist and support frontline nurses in the clinical setting.\n\n\nIMPLICATIONS FOR NURSING MANAGEMENT\n The findings are of significance to leaders in the health care system and in the nursing profession who are in a position to foster development of leaders to mentor and encourage frontline nurses.",
"title": ""
},
{
"docid": "f4fc99eebfea1d5c899b956430ee896e",
"text": "Searchable Encryption (SE) has been extensively examined by both academic and industry researchers. While many academic SE schemes show provable security, they usually expose some query information (e.g., search and access patterns) to achieve high efficiency. However, several inference attacks have exploited such leakage, e.g., a query recovery attack can convert opaque query trapdoors to their corresponding keywords based on some prior knowledge. On the other hand, many proposed SE schemes require significant modification of existing applications, which makes them less practical, weak in usability, and difficult to deploy. In this paper, we introduce a secure and practical searchable symmetric encryption scheme with provable security strength for cloud applications, called IDCrypt, which improves the search efficiency, and enhances the security strength of SE using symmetric cryptography. We further point out the main challenges in securely searching on multiple indexes and sharing encrypted data between multiple users. To address the above issues, we propose a token-adjustment search scheme to preserve the search functionality among multi-indexes, and a key sharing scheme which combines identity-based encryption and public-key encryption. Our experimental results show that the overhead of the key sharing scheme is fairly low.",
"title": ""
},
{
"docid": "67995490350c68f286029d8b401d78d8",
"text": "OBJECTIVE\nModifiable risk factors for dementia were recently identified and compiled in a systematic review. The 'Lifestyle for Brain Health' (LIBRA) score, reflecting someone's potential for dementia prevention, was studied in a large longitudinal population-based sample with respect to predicting cognitive change over an observation period of up to 16 years.\n\n\nMETHODS\nLifestyle for Brain Health was calculated at baseline for 949 participants aged 50-81 years from the Maastricht Ageing Study. The predictive value of LIBRA for incident dementia and cognitive impairment was examined by using Cox proportional hazard models and by testing its relation with cognitive decline.\n\n\nRESULTS\nLifestyle for Brain Health predicted future risk of dementia, as well as risk of cognitive impairment. A one-point increase in LIBRA score related to 19% higher risk for dementia and 9% higher risk for cognitive impairment. LIBRA predicted rate of decline in processing speed, but not memory or executive functioning.\n\n\nCONCLUSIONS\nLifestyle for Brain Health (LIBRA) may help in identifying and monitoring risk status in dementia-prevention programmes, by targeting modifiable, lifestyle-related risk factors. Copyright © 2017 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "22629b96f1172328e654ea6ed6dccd92",
"text": "This paper uses the case of contract manufacturing in the electronics industry to illustrate an emergent American model of industrial organization, the modular production network. Lead firms in the modular production network concentrate on the creation, penetration, and defense of markets for end products—and increasingly the provision of services to go with them—while manufacturing capacity is shifted out-of-house to globally-operating turn-key suppliers. The modular production network relies on codified inter-firm links and the generic manufacturing capacity residing in turn-key suppliers to reduce transaction costs, build large external economies of scale, and reduce risk for network actors. I test the modular production network model against some of the key theoretical tools that have been developed to predict and explain industry structure: Joseph Schumpeter's notion of innovation in the giant firm, Alfred Chandler's ideas about economies of speed and the rise of the modern corporation, Oliver Williamson's transaction cost framework, and a range of other production network models that appear in the literature. I argue that the modular production network yields better economic performance in the context of globalization than more spatially and socially embedded network models. I view the emergence of the modular production network as part of a historical process of industrial transformation in which nationally-specific models of industrial organization co-evolve in intensifying rounds of competition, diffusion, and adaptation.",
"title": ""
},
{
"docid": "6a1da115f887498370b400efa6e57ed0",
"text": "Local search heuristics for non-convex optimizations are popular in applied machine learning. However, in general it is hard to guarantee that such algorithms even converge to a local minimum, due to the existence of complicated saddle point structures in high dimensions. Many functions have degenerate saddle points such that the first and second order derivatives cannot distinguish them with local optima. In this paper we use higher order derivatives to escape these saddle points: we design the first efficient algorithm guaranteed to converge to a third order local optimum (while existing techniques are at most second order). We also show that it is NP-hard to extend this further to finding fourth order local optima.",
"title": ""
},
{
"docid": "9c799b4d771c724969be7b392697ebee",
"text": "Search engines need to model user satisfaction to improve their services. Since it is not practical to request feedback on searchers' perceptions and search outcomes directly from users, search engines must estimate satisfaction from behavioral signals such as query refinement, result clicks, and dwell times. This analysis of behavior in the aggregate leads to the development of global metrics such as satisfied result clickthrough (typically operationalized as result-page clicks with dwell time exceeding a particular threshold) that are then applied to all searchers' behavior to estimate satisfac-tion levels. However, satisfaction is a personal belief and how users behave when they are satisfied can also differ. In this paper we verify that searcher behavior when satisfied and dissatisfied is indeed different among individual searchers along a number of dimensions. As a result, we introduce and evaluate learned models of satisfaction for individual searchers and searcher cohorts. Through experimentation via logs from a large commercial Web search engine, we show that our proposed models can predict search satisfaction more accurately than a global baseline that applies the same satisfaction model across all users. Our findings have implications for the study and application of user satisfaction in search systems.",
"title": ""
},
{
"docid": "cb47cc2effac1404dd60a91a099699d1",
"text": "We survey recent trends in practical algorithms for balanced graph partitioning, point to applications and discuss future research directions.",
"title": ""
},
{
"docid": "040d39a7bf861a05cbd10fda9c0a1576",
"text": "Skin laceration repair is an important skill in family medicine. Sutures, tissue adhesives, staples, and skin-closure tapes are options in the outpatient setting. Physicians should be familiar with various suturing techniques, including simple, running, and half-buried mattress (corner) sutures. Although suturing is the preferred method for laceration repair, tissue adhesives are similar in patient satisfaction, infection rates, and scarring risk in low skin-tension areas and may be more cost-effective. The tissue adhesive hair apposition technique also is effective in repairing scalp lacerations. The sting of local anesthesia injections can be lessened by using smaller gauge needles, administering the injection slowly, and warming or buffering the solution. Studies have shown that tap water is safe to use for irrigation, that white petrolatum ointment is as effective as antibiotic ointment in postprocedure care, and that wetting the wound as early as 12 hours after repair does not increase the risk of infection. Patient education and appropriate procedural coding are important after the repair.",
"title": ""
},
{
"docid": "8c46f24d8e710c5fb4e25be76fc5b060",
"text": "This paper presents the novel design of a wideband circularly polarized (CP) Radio Frequency Identification (RFID) reader microstrip patch antenna for worldwide Ultra High Frequency (UHF) band which covers 840–960 MHz. The proposed antenna, which consists of a microstrip patch with truncated corners and a cross slot, is placed on a foam substrate (εr = 1.06) above a ground plane and is fed through vias through ground plane holes that extend from the quadrature 3 dB branch line hybrid coupler placed below the ground plane. This helps to separate feed network radiation, from the patch antenna and keeping the CP purity. The prototype antenna was fabricated with a total size of 225 × 250 × 12.8 mm3 which shows a measured impedance matching band of 840–1150MHz (31.2%) as well as measured rotating linear based circularly polarized radiation patterns. The simulated and measured 3 dB Axial Ratio (AR) bandwidth is better than 23% from 840–1050 MHz meeting and exceeding the target worldwide RFID UHF band.",
"title": ""
},
{
"docid": "245c02139f875fac756dc17d1a2fc6c2",
"text": "This paper tries to answer two questions. First, how to infer real-time air quality of any arbitrary location given environmental data and historical air quality data from very sparse monitoring locations. Second, if one needs to establish few new monitoring stations to improve the inference quality, how to determine the best locations for such purpose? The problems are challenging since for most of the locations (>99%) in a city we do not have any air quality data to train a model from. We design a semi-supervised inference model utilizing existing monitoring data together with heterogeneous city dynamics, including meteorology, human mobility, structure of road networks, and point of interests (POIs). We also propose an entropy-minimization model to suggest the best locations to establish new monitoring stations. We evaluate the proposed approach using Beijing air quality data, resulting in clear advantages over a series of state-of-the-art and commonly used methods.",
"title": ""
},
{
"docid": "989a16f498eaaa62d5578cc1bcc8bc04",
"text": "UML activity diagram is widely used to describe the behavior of the software system. Unfortunately, there is still no practical tool to verify the UML diagrams automatically. This paper proposes an alternative to translate UML activity diagram into a colored petri nets with inscription. The model translation rules are proposed to guide the automatic translation of the activity diagram with atomic action into a CPN model. Moreover, the relevant basic arc inscriptions are generated without manual elaboration. The resulting CPN with inscription is correctly verified as expected.",
"title": ""
},
{
"docid": "d394d5d1872bbb6a38c28ecdc0e24f06",
"text": "An ever increasing number of configuration parameters are provided to system users. But many users have used one configuration setting across different workloads, leaving untapped the performance potential of systems. A good configuration setting can greatly improve the performance of a deployed system under certain workloads. But with tens or hundreds of parameters, it becomes a highly costly task to decide which configuration setting leads to the best performance. While such task requires the strong expertise in both the system and the application, users commonly lack such expertise.\n To help users tap the performance potential of systems, we present Best Config, a system for automatically finding a best configuration setting within a resource limit for a deployed system under a given application workload. BestConfig is designed with an extensible architecture to automate the configuration tuning for general systems. To tune system configurations within a resource limit, we propose the divide-and-diverge sampling method and the recursive bound-and-search algorithm. BestConfig can improve the throughput of Tomcat by 75%, that of Cassandra by 63%, that of MySQL by 430%, and reduce the running time of Hive join job by about 50% and that of Spark join job by about 80%, solely by configuration adjustment.",
"title": ""
},
{
"docid": "72ddcb7a55918a328576a811a89d245b",
"text": "Among all new emerging RNA species, microRNAs (miRNAs) have attracted the interest of the scientific community due to their implications as biomarkers of prognostic value, disease progression, or diagnosis, because of defining features as robust association with the disease, or stable presence in easily accessible human biofluids. This field of research has been established twenty years ago, and the development has been considerable. The regulatory nature of miRNAs makes them great candidates for the treatment of infectious diseases, and a successful example in the field is currently being translated to clinical practice. This review will present a general outline of miRNAmolecules, as well as successful stories of translational significance which are getting us closer from the basic bench studies into clinical practice.",
"title": ""
},
{
"docid": "2d8f92f752bd1b4756e991a1f7e70926",
"text": "We present a new method to auto-adjust camera exposure for outdoor robotics. In outdoor environments, scene dynamic range may be wider than the dynamic range of the cameras due to sunlight and skylight. This can results in failures of vision-based algorithms because important image features are missing due to under-/over-saturation. To solve the problem, we adjust camera exposure to maximize image features in the gradient domain. By exploiting the gradient domain, our method naturally determines the proper exposure needed to capture important image features in a manner that is robust against illumination conditions. The proposed method is implemented using an off-the-shelf machine vision camera and is evaluated using outdoor robotics applications. Experimental results demonstrate the effectiveness of our method, which improves the performance of robot vision algorithms.",
"title": ""
},
{
"docid": "fedcb2bd51b9fd147681ae23e03c7336",
"text": "Epidemiological studies have revealed the important role that foodstuffs of vegetable origin have to play in the prevention of numerous illnesses. The natural antioxidants present in such foodstuffs, among which the fl avonoids are widely present, may be responsible for such an activity. Flavonoids are compounds that are low in molecular weight and widely distributed throughout the vegetable kingdom. They may be of great utility in states of accute or chronic diarrhoea through the inhibition of intestinal secretion and motility, and may also be benefi cial in the reduction of chronic infl ammatory damage in the intestine, by affording protection against oxidative stress and by preserving mucosal function. For this reason, the use of these agents is recommended in the treatment of infl ammatory bowel disease, in which various factors are involved in extreme immunological reactions, which lead to chronic intestinal infl ammation.",
"title": ""
},
{
"docid": "3630c575bf7b5250930c7c54d8a1c6d0",
"text": "The RCSB Protein Data Bank (RCSB PDB, http://www.rcsb.org) provides access to 3D structures of biological macromolecules and is one of the leading resources in biology and biomedicine worldwide. Our efforts over the past 2 years focused on enabling a deeper understanding of structural biology and providing new structural views of biology that support both basic and applied research and education. Herein, we describe recently introduced data annotations including integration with external biological resources, such as gene and drug databases, new visualization tools and improved support for the mobile web. We also describe access to data files, web services and open access software components to enable software developers to more effectively mine the PDB archive and related annotations. Our efforts are aimed at expanding the role of 3D structure in understanding biology and medicine.",
"title": ""
},
{
"docid": "53598a996f31476b32871cf99f6b84f0",
"text": "The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track included three tasks involving: (1A) identifying relationships between citing documents and the referred document, (1B) classifying the discourse facets, and (2) generating the abstractive summary. The dataset comprised 30 annotated sets of citing and reference papers from the open access research papers in the CL domain. This overview paper describes the participation and the official results of the second CL-SciSumm Shared Task, organized as a part of the Joint Workshop onBibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries (BIRNDL 2016), held in New Jersey,USA in June, 2016. The annotated dataset used for this shared task and the scripts used for evaluation can be accessed and used by the community at: https://github.com/WING-NUS/scisumm-corpus.",
"title": ""
},
{
"docid": "daf2c30e059694007c2ba84cab916e07",
"text": "The field of multi-agent system (MAS) is an active area of research within artificial intelligence, with an increasingly important impact in industrial and other real-world applications. In a MAS, autonomous agents interact to pursue personal interests and/or to achieve common objectives. Distributed Constraint Optimization Problems (DCOPs) have emerged as a prominent agent model to govern the agents’ autonomous behavior, where both algorithms and communication models are driven by the structure of the specific problem. During the last decade, several extensions to the DCOP model have been proposed to enable support of MAS in complex, real-time, and uncertain environments. This survey provides an overview of the DCOP model, offering a classification of its multiple extensions and addressing both resolution methods and applications that find a natural mapping within each class of DCOPs. The proposed classification suggests several future perspectives for DCOP extensions, and identifies challenges in the design of efficient resolution algorithms, possibly through the adaptation of strategies from different areas.",
"title": ""
},
{
"docid": "7b8fc21d27c9eb7c8e1df46eec7d6b6d",
"text": "This paper examines two methods - magnet shifting and optimizing the magnet pole arc - for reducing cogging torque in permanent magnet machines. The methods were applied to existing machine designs and their performance was calculated using finite-element analysis (FEA). Prototypes of the machine designs were constructed and experimental results obtained. It is shown that the FEA predicted the cogging torque to be nearly eliminated using the two methods. However, there was some residual cogging in the prototypes due to manufacturing difficulties. In both methods, the back electromotive force was improved by reducing harmonics while preserving the magnitude.",
"title": ""
}
] |
scidocsrr
|
654b317f6f6a4ed8f2ab415c90d71dac
|
Deep Compositional Cross-modal Learning to Rank via Local-Global Alignment
|
[
{
"docid": "f2603a583b63c1c8f350b3ddabe16642",
"text": "We consider the problem of modeling annotated data---data with multiple types where the instance of one type (such as a caption) serves as a description of the other type (such as an image). We describe three hierarchical probabilistic mixture models which aim to describe such data, culminating in correspondence latent Dirichlet allocation, a latent variable model that is effective at modeling the joint distribution of both types and the conditional distribution of the annotation given the primary type. We conduct experiments on the Corel database of images and captions, assessing performance in terms of held-out likelihood, automatic annotation, and text-based image retrieval.",
"title": ""
}
] |
[
{
"docid": "4aaea3737b3331f3e016018367c3040c",
"text": "BACKGROUND\nPedestrian-friendly neighborhoods with proximal destinations and services encourage walking and decrease car dependence, thereby contributing to more active and healthier communities. Proximity to key destinations and services is an important aspect of the urban design decision making process, particularly in areas adopting a transit-oriented development (TOD) approach to urban planning, whereby densification occurs within walking distance of transit nodes. Modeling destination access within neighborhoods has been limited to circular catchment buffers or more sophisticated network-buffers generated using geoprocessing routines within geographical information systems (GIS). Both circular and network-buffer catchment methods are problematic. Circular catchment models do not account for street networks, thus do not allow exploratory 'what-if' scenario modeling; and network-buffering functionality typically exists within proprietary GIS software, which can be costly and requires a high level of expertise to operate.\n\n\nMETHODS\nThis study sought to overcome these limitations by developing an open-source simple agent-based walkable catchment tool that can be used by researchers, urban designers, planners, and policy makers to test scenarios for improving neighborhood walkable catchments. A simplified version of an agent-based model was ported to a vector-based open source GIS web tool using data derived from the Australian Urban Research Infrastructure Network (AURIN). The tool was developed and tested with end-user stakeholder working group input.\n\n\nRESULTS\nThe resulting model has proven to be effective and flexible, allowing stakeholders to assess and optimize the walkability of neighborhood catchments around actual or potential nodes of interest (e.g., schools, public transport stops). Users can derive a range of metrics to compare different scenarios modeled. These include: catchment area versus circular buffer ratios; mean number of streets crossed; and modeling of different walking speeds and wait time at intersections.\n\n\nCONCLUSIONS\nThe tool has the capacity to influence planning and public health advocacy and practice, and by using open-access source software, it is available for use locally and internationally. There is also scope to extend this version of the tool from a simple to a complex model, which includes agents (i.e., simulated pedestrians) 'learning' and incorporating other environmental attributes that enhance walkability (e.g., residential density, mixed land use, traffic volume).",
"title": ""
},
{
"docid": "36cd44e476c59791acf37c7570232cfb",
"text": "In this paper, we show that it is feasible for a mobile phone to be used as an SOS beacon in an aerial search and rescue operation. We show with various experiments that we can reliably detect WiFi-enabled mobile phones from the air at distances up to 200 m. By using a custom mobile application that triggers WiFi scanning with the display off, we can simultaneously extend battery life and increase WiFi scanning frequency, compared to keeping the phone in the default scanning mode. Even if an application is not installed or used, our measurement study suggests that it may be possible to detect mobile devices from their background WiFi emissions alone.",
"title": ""
},
{
"docid": "82be3cafe24185b1f3c58199031e41ef",
"text": "UNLABELLED\nFamily-based therapy (FBT) is regarded as best practice for the treatment of eating disorders in children and adolescents. In FBT, parents play a vital role in bringing their child or adolescent to health; however, a significant minority of families do not respond to this treatment. This paper introduces a new model whereby FBT is enhanced by integrating emotion-focused therapy (EFT) principles and techniques with the aims of helping parents to support their child's refeeding and interruption of symptoms. Parents are also supported to become their child's 'emotion coach'; and to process any emotional 'blocks' that may interfere with their ability to take charge of recovery. A parent testimonial is presented to illustrate the integration of the theory and techniques of EFT in the FBT model. EFFT (Emotion-Focused Family Therapy) is a promising model of therapy for those families who require a more intense treatment to bring about recovery of an eating disorder.\n\n\nKEY PRACTITIONER MESSAGE\nMore intense therapeutic models exist for treatment-resistant eating disorders in children and adolescents. Emotion is a powerful healing tool in families struggling with an eating disorder. Working with parent's emotions and emotional reactions to their child's struggles has the potential to improve child outcomes.",
"title": ""
},
{
"docid": "adc310c02471d8be579b3bfd32c33225",
"text": "In this work, we put forward the notion of Worry-Free Encryption. This allows Alice to encrypt confidential information under Bob's public key and send it to him, without having to worry about whether Bob has the authority to actually access this information. This is done by encrypting the message under a hidden access policy that only allows Bob to decrypt if his credentials satisfy the policy. Our notion can be seen as a functional encryption scheme but in a public-key setting. As such, we are able to insist that even if the credential authority is corrupted, it should not be able to compromise the security of any honest user.\n We put forward the notion of Worry-Free Encryption and show how to achieve it for any polynomial-time computable policy, under only the assumption that IND-CPA public-key encryption schemes exist. Furthermore, we construct CCA-secure Worry-Free Encryption, efficiently in the random oracle model, and generally (but inefficiently) using simulation-sound non-interactive zero-knowledge proofs.",
"title": ""
},
{
"docid": "be89ea7764b6a22ce518bac03a8c7540",
"text": "In remote, rugged or sensitive environments ground based mapping for condition assessment of species is both time consuming and potentially destructive. The application of photogrammetric methods to generate multispectral imagery and surface models based on UAV imagery at appropriate temporal and spatial resolutions is described. This paper describes a novel method to combine processing of NIR and visible image sets to produce multiband orthoimages and DEM models from UAV imagery with traditional image location and orientation uncertainties. This work extends the capabilities of recently developed commercial software (Pix4UAV from Pix4D) to show that image sets of different modalities (visible and NIR) can be automatically combined to generate a 4 band orthoimage. Reconstruction initially uses all imagery sets (NIR and visible) to ensure all images are in the same reference frame such that a 4-band orthoimage can be created. We analyse the accuracy of this automatic process by using ground control points and an evaluation on the matching performance between images of different modalities is shown. By combining sub-decimetre multispectral imagery with high spatial resolution surface models and ground based observation it is possible to generate detailed maps of vegetation assemblages at the species level. Potential uses with other conservation monitoring are discussed.",
"title": ""
},
{
"docid": "2915218bc86d049d6b8e3a844a9768fd",
"text": "Power and energy systems are on the verge of a profound change where Smart Grid solutions will enhance their efficiency and flexibility. Advanced ICT and control systems are key elements of the Smart Grid to enable efficient integration of a high amount of renewable energy resources which in turn are seen as key elements of the future energy system. The corresponding distribution grids have to become more flexible and adaptable as the current ones in order to cope with the upcoming high share of energy from distributed renewable sources. The complexity of Smart Grids requires to consider and imply many components when a new application is designed. However, a holistic ICT-based approach for modelling, designing and validating Smart Grid developments is missing today. The goal of this paper therefore is to discuss an advanced design approach and the corresponding information model, covering system, application, control and communication aspects of Smart Grids.",
"title": ""
},
{
"docid": "b549ed594246ee9251488d73b8bf9b88",
"text": "Web classification is used in many security devices for preventing users to access selected web sites that are not allowed by the current security policy, as well for improving web search and for implementing contextual advertising. There are many commercial web classification services available on the market and a few publicly available web directory services. Unfortunately they mostly focus on English-speaking web sites, making them unsuitable for other languages in terms of classification reliability and coverage. This paper covers the design and implementation of a web-based classification tool for TLDs (Top Level Domain). Each domain is classified by analysing the main domain web site, and classifying it in categories according to its content. The tool has been successfully validated by classifying all the registered it. Internet domains, whose results are presented in this paper.",
"title": ""
},
{
"docid": "dc1c602709691d96edea1e64c4afa114",
"text": "The authors propose an integration of person-centered therapy, with its focus on the here and now of client awareness of self, and solution-focused therapy, with its future-oriented techniques that also raise awareness of client potentials. Although the two theories hold different assumptions regarding the therapist's role in facilitating client change, it is suggested that solution-focused techniques are often compatible for use within a person-centered approach. Further, solution-focused activities may facilitate the journey of becoming self-aware within the person-centered tradition. This article reviews the two theories, clarifying the similarities and differences. To illustrate the potential integration of the approaches, several types of solution-focused strategies are offered through a clinical example. (PsycINFO Database Record (c) 2011 APA, all rights reserved).",
"title": ""
},
{
"docid": "e21878a1409cf7cf031f85c6dd8d65fa",
"text": "Human CYP1A2 is one of the major CYPs in human liver and metabolizes a number of clinical drugs (e.g., clozapine, tacrine, tizanidine, and theophylline; n > 110), a number of procarcinogens (e.g., benzo[a]pyrene and aromatic amines), and several important endogenous compounds (e.g., steroids). CYP1A2 is subject to reversible and/or irreversible inhibition by a number of drugs, natural substances, and other compounds. The CYP1A gene cluster has been mapped on to chromosome 15q24.1, with close link between CYP1A1 and 1A2 sharing a common 5'-flanking region. The human CYP1A2 gene spans almost 7.8 kb comprising seven exons and six introns and codes a 515-residue protein with a molecular mass of 58,294 Da. The recently resolved CYP1A2 structure has a relatively compact, planar active site cavity that is highly adapted for the size and shape of its substrates. The architecture of the active site of 1A2 is characterized by multiple residues on helices F and I that constitutes two parallel substrate binding platforms on either side of the cavity. A large interindividual variability in the expression and activity of CYP1A2 has been observed, which is largely caused by genetic, epigenetic and environmental factors (e.g., smoking). CYP1A2 is primarily regulated by the aromatic hydrocarbon receptor (AhR) and CYP1A2 is induced through AhR-mediated transactivation following ligand binding and nuclear translocation. Induction or inhibition of CYP1A2 may provide partial explanation for some clinical drug interactions. To date, more than 15 variant alleles and a series of subvariants of the CYP1A2 gene have been identified and some of them have been associated with altered drug clearance and response and disease susceptibility. Further studies are warranted to explore the clinical and toxicological significance of altered CYP1A2 expression and activity caused by genetic, epigenetic, and environmental factors.",
"title": ""
},
{
"docid": "c24bd4156e65d57eda0add458304988c",
"text": "Graphene is enabling a plethora of applications in a wide range of fields due to its unique electrical, mechanical, and optical properties. Among them, graphene-based plasmonic miniaturized antennas (or shortly named, graphennas) are garnering growing interest in the field of communications. In light of their reduced size, in the micrometric range, and an expected radiation frequency of a few terahertz, graphennas offer means for the implementation of ultra-short-range wireless communications. Motivated by their high radiation frequency and potentially wideband nature, this paper presents a methodology for the time-domain characterization and evaluation of graphennas. The proposed framework is highly vertical, as it aims to build a bridge between technological aspects, antenna design, and communications. Using this approach, qualitative and quantitative analyses of a particular case of graphenna are carried out as a function of two critical design parameters, namely, chemical potential and carrier mobility. The results are then compared to the performance of equivalent metallic antennas. Finally, the suitability of graphennas for ultra-short-range communications is briefly discussed.",
"title": ""
},
{
"docid": "fa0f02cde08a3cee4b691788815cb757",
"text": "Control strategies for these contaminants will require a better understanding of how they move around the globe.",
"title": ""
},
{
"docid": "e39a7208e32c23164601ec608362de53",
"text": "We address the problem of describing people based on fine-grained clothing attributes. This is an important problem for many practical applications, such as identifying target suspects or finding missing people based on detailed clothing descriptions in surveillance videos or consumer photos. We approach this problem by first mining clothing images with fine-grained attribute labels from online shopping stores. A large-scale dataset is built with about one million images and fine-detailed attribute sub-categories, such as various shades of color (e.g., watermelon red, rosy red, purplish red), clothing types (e.g., down jacket, denim jacket), and patterns (e.g., thin horizontal stripes, houndstooth). As these images are taken in ideal pose/lighting/background conditions, it is unreliable to directly use them as training data for attribute prediction in the domain of unconstrained images captured, for example, by mobile phones or surveillance cameras. In order to bridge this gap, we propose a novel double-path deep domain adaptation network to model the data from the two domains jointly. Several alignment cost layers placed inbetween the two columns ensure the consistency of the two domain features and the feasibility to predict unseen attribute categories in one of the domains. Finally, to achieve a working system with automatic human body alignment, we trained an enhanced RCNN-based detector to localize human bodies in images. Our extensive experimental evaluation demonstrates the effectiveness of the proposed approach for describing people based on fine-grained clothing attributes.",
"title": ""
},
{
"docid": "605125a6801bd9aa190f177ee4f0cb1f",
"text": "One of the challenges in bio-computing is to enable the efficient use and inter-operation of a wide variety of rapidly-evolving computational methods to simulate, analyze, and understand the complex properties and interactions of molecular systems. In our laboratory we investigates several areas, including protein-ligand docking, protein-protein docking, and complex molecular assemblies. Over the years we have developed a number of computational tools such as molecular surfaces, phenomenological potentials, various docking and visualization programs which we use in conjunction with programs developed by others. The number of programs available to compute molecular properties and/or simulate molecular interactions (e.g., molecular dynamics, conformational analysis, quantum mechanics, distance geometry, docking methods, ab-initio methods) is large and growing rapidly. Moreover, these programs come in many flavors and variations, using different force fields, search techniques, algorithmic details (e.g., continuous space vs. discrete, Cartesian vs. torsional). Each variation presents its own characteristic set of advantages and limitations. These programs also tend to evolve rapidly and are usually not written as components, making it hard to get them to work together.",
"title": ""
},
{
"docid": "90c6cf2fd66683843a8dd549676727d5",
"text": "Despite great progress in neuroscience, there are still fundamental unanswered questions about the brain, including the origin of subjective experience and consciousness. Some answers might rely on new physical mechanisms. Given that biophotons have been discovered in the brain, it is interesting to explore if neurons use photonic communication in addition to the well-studied electro-chemical signals. Such photonic communication in the brain would require waveguides. Here we review recent work (S. Kumar, K. Boone, J. Tuszynski, P. Barclay, and C. Simon, Scientific Reports 6, 36508 (2016)) suggesting that myelinated axons could serve as photonic waveguides. The light transmission in the myelinated axon was modeled, taking into account its realistic imperfections, and experiments were proposed both in vivo and in vitro to test this hypothesis. Potential implications for quantum biology are discussed.",
"title": ""
},
{
"docid": "eddcf41fe566b65540d147171ce50002",
"text": "This paper addresses the problem of virtual pedestrian autonomous navigation for crowd simulation. It describes a method for solving interactions between pedestrians and avoiding inter-collisions. Our approach is agent-based and predictive: each agent perceives surrounding agents and extrapolates their trajectory in order to react to potential collisions. We aim at obtaining realistic results, thus the proposed model is calibrated from experimental motion capture data. Our method is shown to be valid and solves major drawbacks compared to previous approaches such as oscillations due to a lack of anticipation. We first describe the mathematical representation used in our model, we then detail its implementation, and finally, its calibration and validation from real data.",
"title": ""
},
{
"docid": "bdcd0cad7a2abcb482b1a0755a2e7af4",
"text": "We present a novel attribute learning framework named Hypergraph-based Attribute Predictor (HAP). In HAP, a hypergraph is leveraged to depict the attribute relations in the data. Then the attribute prediction problem is casted as a regularized hypergraph cut problem, in which a collection of attribute projections is jointly learnt from the feature space to a hypergraph embedding space aligned with the attributes. The learned projections directly act as attribute classifiers (linear and kernelized). This formulation leads to a very efficient approach. By considering our model as a multi-graph cut task, our framework can flexibly incorporate other available information, in particular class label. We apply our approach to attribute prediction, Zero-shot and N-shot learning tasks. The results on AWA, USAA and CUB databases demonstrate the value of our methods in comparison with the state-of-the-art approaches.",
"title": ""
},
{
"docid": "89e97c0c62b054664ecd2542329e4540",
"text": "ion from the underlying big data technologies is needed to enable ease of use for data scientists, and for business users. Many of the techniques required for real-time, prescriptive analytics, such as predictive modelling, optimization, and simulation, are data and compute intensive. Combined with big data these require distributed storage and parallel, or distributed computing. At the same time many of the machine learning and data mining algorithms are not straightforward to parallelize. A recent survey (Paradigm 4 2014) found that “although 49 % of the respondent data scientists could not fit their data into relational databases anymore, only 48 % have used Hadoop or Spark—and of those 76 % said they could not work effectively due to platform issues”. This is an indicator that big data computing is too complex to use without sophisticated computer science know-how. One direction of advancement is for abstractions and high-level procedures to be developed that hide the complexities of distributed computing and machine learning from data scientists. The other direction of course will be more skilled data scientists, who are literate in distributed computing, or distributed computing experts becoming more literate in data science and statistics. Advances are needed for the following technologies: • Abstraction is a common tool in computer science. Each technology at first is cumbersome. Abstraction manages complexity so that the user (e.g., 13 Big Data in the Energy and Transport Sectors 241",
"title": ""
},
{
"docid": "257f00fc5a4b2a0addbd7e9cc2bf6fec",
"text": "Security experts have demonstrated numerous risks imposed by Internet of Things (IoT) devices on organizations. Due to the widespread adoption of such devices, their diversity, standardization obstacles, and inherent mobility, organizations require an intelligent mechanism capable of automatically detecting suspicious IoT devices connected to their networks. In particular, devices not included in a white list of trustworthy IoT device types (allowed to be used within the organizational premises) should be detected. In this research, Random Forest, a supervised machine learning algorithm, was applied to features extracted from network traffic data with the aim of accurately identifying IoT device types from the white list. To train and evaluate multi-class classifiers, we collected and manually labeled network traffic data from 17 distinct IoT devices, representing nine types of IoT devices. Based on the classification of 20 consecutive sessions and the use of majority rule, IoT device types that are not on the white list were correctly detected as unknown in 96% of test cases (on average), and white listed device types were correctly classified by their actual types in 99% of cases. Some IoT device types were identified quicker than others (e.g., sockets and thermostats were successfully detected within five TCP sessions of connecting to the network). Perfect detection of unauthorized IoT device types was achieved upon analyzing 110 consecutive sessions; perfect classification of white listed types required 346 consecutive sessions, 110 of which resulted in 99.49% accuracy. Further experiments demonstrated the successful applicability of classifiers trained in one location and tested on another. In addition, a discussion is provided regarding the resilience of our machine learning-based IoT white listing method to adversarial attacks.",
"title": ""
},
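The record above describes training a Random Forest on per-session traffic features and applying a majority vote over 20 consecutive sessions to flag device types outside a white list as unknown. Below is a minimal, hypothetical Python sketch of that decision rule using scikit-learn; the synthetic feature values, the nine toy device types, and the confidence threshold used to reject unknown types are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-session feature vectors (e.g., packet-size and inter-arrival
# statistics) labeled with white-listed device-type ids.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(2000, 12))
y_train = rng.integers(0, 9, size=2000)          # 9 white-listed types
white_list = set(range(9))

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

def classify_device(session_features, clf, white_list, n_sessions=20, min_conf=0.6):
    """Majority vote over n consecutive sessions; report 'unknown' if the
    winning class is not confident enough or is not on the white list."""
    window = session_features[:n_sessions]
    votes = clf.predict(window)
    types, counts = np.unique(votes, return_counts=True)
    best = types[np.argmax(counts)]
    confidence = counts.max() / len(window)
    if best not in white_list or confidence < min_conf:
        return "unknown"
    return int(best)

# Example: sessions from a device whose traffic differs from the training data.
new_device_sessions = rng.normal(loc=3.0, size=(20, 12))
print(classify_device(new_device_sessions, clf, white_list))
```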
{
"docid": "7a9a7b888b9e3c2b82e6c089d05e2803",
"text": "Background:\nBullous pemphigoid (BP) is a chronic, autoimmune blistering skin disease that affects patients' daily life and psychosocial well-being.\n\n\nObjective:\nThe aim of the study was to evaluate the quality of life, anxiety, depression and loneliness in BP patients.\n\n\nMethods:\nFifty-seven BP patients and fifty-seven healthy controls were recruited for the study. The quality of life of each patient was assessed using the Dermatology Life Quality Index (DLQI) scale. Moreover, they were evaluated for anxiety and depression according to the Hospital Anxiety Depression Scale (HADS-scale), while loneliness was measured through the Loneliness Scale-Version 3 (UCLA) scale.\n\n\nResults:\nThe mean DLQI score was 9.45±3.34. Statistically significant differences on the HADS total scale and in HADS-depression subscale (p=0.015 and p=0.002, respectively) were documented. No statistically significant difference was found between the two groups on the HADS-anxiety subscale. Furthermore, significantly higher scores were recorded on the UCLA Scale compared with healthy volunteers (p=0.003).\n\n\nConclusion:\nBP had a significant impact on quality of life and the psychological status of patients, probably due to the appearance of unattractive lesions on the skin, functional problems and disease chronicity.",
"title": ""
},
{
"docid": "0ec7538bef6a3ad982b8935f6124127d",
"text": "New technology has been seen as a way for many businesses in the tourism industry to stay competitive and enhance their marketing campaign in various ways. AR has evolved as the buzzword of modern information technology and is gaining increasing attention in the media as well as through a variety of use cases. This trend is highly fostered across mobile applications as well as the hype of wearable computing triggered by Google’s Glass project to be launched in 2014. However, although research on AR has been conducted in various fields including the Urban Tourism industry, the majority of studies focus on technical aspects of AR, while others are tailored to specific applications. Therefore, this paper aims to examine the current implementation of AR in the Urban Tourism context and identifies areas of research and development that is required to guide the early stages of AR implementation in a purposeful way to enhance the tourist experience. The paper provides an overview of AR and examines the impacts AR has made on the economy. Hence, AR applications in Urban Tourism are identified and benefits of AR are discussed. Please cite this article as: Jung, T. and Han, D. (2014). Augmented Reality (AR) in Urban Heritage Tourism. e-Review of Tourism Research. (ISSN: 1941-5842) Augmented Reality (AR) in Urban Heritage Tourism Timothy Jung and Dai-In Han Department of Food and Tourism Management Manchester\t\r Metropolitan\t\r University,\t\r United\t\r Kingdom t.jung@mmu.ac.uk,\t\r d.han@mmu.ac.uk",
"title": ""
}
] |
scidocsrr
|
6f3e9e963475ed7ba90d1ede096a8d17
|
The Long-Term Benefits of Positive Self-Presentation via Profile Pictures, Number of Friends and the Initiation of Relationships on Facebook for Adolescents’ Self-Esteem and the Initiation of Offline Relationships
|
[
{
"docid": "8f978ac84eea44a593e9f18a4314342c",
"text": "There is clear evidence that interpersonal social support impacts stress levels and, in turn, degree of physical illness and psychological well-being. This study examines whether mediated social networks serve the same palliative function. A survey of 401 undergraduate Facebook users revealed that, as predicted, number of Facebook friends associated with stronger perceptions of social support, which in turn associated with reduced stress, and in turn less physical illness and greater well-being. This effect was minimized when interpersonal network size was taken into consideration. However, for those who have experienced many objective life stressors, the number of Facebook friends emerged as the stronger predictor of perceived social support. The \"more-friends-the-better\" heuristic is proposed as the most likely explanation for these findings.",
"title": ""
},
{
"docid": "69982ee7465c4e2ab8a2bfc72a8bbb89",
"text": "This study examines if Facebook, one of the most popular social network sites among college students in the U.S., is related to attitudes and behaviors that enhance individuals’ social capital. Using data from a random web survey of college students across Texas (n = 2, 603), we find positive relationships between intensity of Facebook use and students’ life satisfaction, social trust, civic engagement, and political participation. While these findings should ease the concerns of those who fear that Facebook has mostly negative effects on young adults, the positive and significant associations between Facebook variables and social capital were small, suggesting that online social networks are not the most effective solution for youth disengagement from civic duty and democracy.",
"title": ""
},
{
"docid": "a671673f330bd2b1ec14aaca9f75981a",
"text": "The aim of this study was to contrast the validity of two opposing explanatory hypotheses about the effect of online communication on adolescents' well-being. The displacement hypothesis predicts that online communication reduces adolescents' well-being because it displaces time spent with existing friends, thereby reducing the quality of these friendships. In contrast, the stimulation hypothesis states that online communication stimulates well-being via its positive effect on time spent with existing friends and the quality of these friendships. We conducted an online survey among 1,210 Dutch teenagers between 10 and 17 years of age. Using mediation analyses, we found support for the stimulation hypothesis but not for the displacement hypothesis. We also found a moderating effect of type of online communication on adolescents' well-being: Instant messaging, which was mostly used to communicate with existing friends, positively predicted well-being via the mediating variables (a) time spent with existing friends and (b) the quality of these friendships. Chat in a public chatroom, which was relatively often used to talk with strangers, had no effect on adolescents' wellbeing via the mediating variables.",
"title": ""
},
{
"docid": "b3a1aba2e9a3cfc8897488bb058f3358",
"text": "The social networking site, Facebook, has gained an enormous amount of popularity. In this article, we review the literature on the factors contributing to Facebook use. We propose a model suggesting that Facebook use is motivated by two primary needs: (1) The need to belong and (2) the need for self-presentation. Demographic and cultural factors contribute to the need to belong, whereas neuroticism, narcissism, shyness, self-esteem and self-worth contribute to the need for self presentation. Areas for future research are discussed.",
"title": ""
}
] |
[
{
"docid": "6e7d5e2548e12d11afd3389b6d677a0f",
"text": "Internet marketing is a field that is continuing to grow, and the online auction concept may be defining a totally new and unique distribution alternative. Very few studies have examined auction sellers and their internet marketing strategies. This research examines the internet auction phenomenon as it relates to the marketing mix of online auction sellers. The data in this study indicate that, whilst there is great diversity among businesses that utilise online auctions, distinct cost leadership and differentiation marketing strategies are both evident. These two approaches are further distinguished in terms of the internet usage strategies employed by each group.",
"title": ""
},
{
"docid": "8a8edb63c041a01cbb887cd526b97eb0",
"text": "We propose BrainNetCNN, a convolutional neural network (CNN) framework to predict clinical neurodevelopmental outcomes from brain networks. In contrast to the spatially local convolutions done in traditional image-based CNNs, our BrainNetCNN is composed of novel edge-to-edge, edge-to-node and node-to-graph convolutional filters that leverage the topological locality of structural brain networks. We apply the BrainNetCNN framework to predict cognitive and motor developmental outcome scores from structural brain networks of infants born preterm. Diffusion tensor images (DTI) of preterm infants, acquired between 27 and 46 weeks gestational age, were used to construct a dataset of structural brain connectivity networks. We first demonstrate the predictive capabilities of BrainNetCNN on synthetic phantom networks with simulated injury patterns and added noise. BrainNetCNN outperforms a fully connected neural-network with the same number of model parameters on both phantoms with focal and diffuse injury patterns. We then apply our method to the task of joint prediction of Bayley-III cognitive and motor scores, assessed at 18 months of age, adjusted for prematurity. We show that our BrainNetCNN framework outperforms a variety of other methods on the same data. Furthermore, BrainNetCNN is able to identify an infant's postmenstrual age to within about 2 weeks. Finally, we explore the high-level features learned by BrainNetCNN by visualizing the importance of each connection in the brain with respect to predicting the outcome scores. These findings are then discussed in the context of the anatomy and function of the developing preterm infant brain.",
"title": ""
},
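The BrainNetCNN abstract above introduces edge-to-edge filters that exploit the topological locality of connectivity matrices. The sketch below illustrates one common reading of that operator in PyTorch: the response at edge (i, j) is a learned combination of all edges in row i and column j. The layer sizes and the 90-node synthetic connectome are illustrative assumptions; this is not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class EdgeToEdge(nn.Module):
    """Edge-to-edge layer sketch: the output at edge (i, j) combines all edges
    that share node i (row i) or node j (column j) of the connectivity matrix."""
    def __init__(self, in_ch, out_ch, n_nodes):
        super().__init__()
        self.row = nn.Conv2d(in_ch, out_ch, kernel_size=(1, n_nodes))
        self.col = nn.Conv2d(in_ch, out_ch, kernel_size=(n_nodes, 1))
        self.n = n_nodes

    def forward(self, x):                      # x: (batch, ch, n, n) connectomes
        r = self.row(x)                        # (batch, out_ch, n, 1)
        c = self.col(x)                        # (batch, out_ch, 1, n)
        return r.expand(-1, -1, -1, self.n) + c.expand(-1, -1, self.n, -1)

adj = torch.randn(4, 1, 90, 90)                # 4 synthetic 90-node connectomes
print(EdgeToEdge(1, 8, 90)(adj).shape)         # torch.Size([4, 8, 90, 90])
```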
{
"docid": "71333997a4f9f38de0b53697d7b7cff1",
"text": "Environmental sustainability of a supply chain depends on the purchasing strategy of the supply chain members. Most of the earlier models have focused on cost, quality, lead time, etc. issues but not given enough importance to carbon emission for supplier evaluation. Recently, there is a growing pressure on supply chain members for reducing the carbon emission of their supply chain. This study presents an integrated approach for selecting the appropriate supplier in the supply chain, addressing the carbon emission issue, using fuzzy-AHP and fuzzy multi-objective linear programming. Fuzzy AHP (FAHP) is applied first for analyzing the weights of the multiple factors. The considered factors are cost, quality rejection percentage, late delivery percentage, green house gas emission and demand. These weights of the multiple factors are used in fuzzy multi-objective linear programming for supplier selection and quota allocation. An illustration with a data set from a realistic situation is presented to demonstrate the effectiveness of the proposed model. The proposed approach can handle realistic situation when there is information vagueness related to inputs. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
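The supplier-selection record above derives criteria weights with fuzzy AHP before feeding them into a fuzzy multi-objective linear program. As a rough illustration of the weighting step only, the sketch below runs the classical (crisp) AHP eigenvector method over a hypothetical pairwise-comparison matrix for the five criteria named in the abstract; the fuzzy extension, the comparison values, and the resulting weights are not from the paper.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix over five criteria
# (cost, quality rejection %, late delivery %, GHG emission, demand).
A = np.array([
    [1,   3,   3,   5,   2],
    [1/3, 1,   1,   3,   1/2],
    [1/3, 1,   1,   3,   1/2],
    [1/5, 1/3, 1/3, 1,   1/4],
    [1/2, 2,   2,   4,   1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()           # criteria weights, sum to 1

# Consistency ratio (CR < 0.1 is the usual acceptance rule in AHP).
n = A.shape[0]
ci = (np.max(np.real(eigvals)) - n) / (n - 1)
cr = ci / 1.12                                  # 1.12 = Saaty random index for n = 5
print(np.round(weights, 3), round(cr, 3))
```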
{
"docid": "045a4622691d1ae85593abccb823b193",
"text": "The capability of Corynebacterium glutamicum for glucose-based synthesis of itaconate was explored, which can serve as building block for production of polymers, chemicals, and fuels. C. glutamicum was highly tolerant to itaconate and did not metabolize it. Expression of the Aspergillus terreus CAD1 gene encoding cis-aconitate decarboxylase (CAD) in strain ATCC13032 led to the production of 1.4mM itaconate in the stationary growth phase. Fusion of CAD with the Escherichia coli maltose-binding protein increased its activity and the itaconate titer more than two-fold. Nitrogen-limited growth conditions boosted CAD activity and itaconate titer about 10-fold to values of 1440 mU mg(-1) and 30 mM. Reduction of isocitrate dehydrogenase activity via exchange of the ATG start codon to GTG or TTG resulted in maximal itaconate titers of 60 mM (7.8 g l(-1)), a molar yield of 0.4 mol mol(-1), and a volumetric productivity of 2.1 mmol l(-1) h(-1).",
"title": ""
},
{
"docid": "2efd26fc1e584aa5f70bdf9d24e5c2cd",
"text": "Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast and questions notions generally held to be “laws of nature” by practitioners of numerical computing: 1. High-level dynamic programs have to be slow. 2. One must prototype in one language and then rewrite in another language for speed or deployment. 3. There are parts of a system appropriate for the programmer, and other parts that are best left untouched as they have been built by the experts. We introduce the Julia programming language and its design—a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, which is what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can achieve machine performance without sacrificing human convenience.",
"title": ""
},
{
"docid": "7bd56fffe892775084dc23d3d9d43484",
"text": "Stars form in dense clouds of interstellar gas and dust. The residual dust surrounding a young star scatters and diffuses its light, making the star's \"cocoon\" of dust observable from Earth. The resulting structures, called reflection nebulae, are commonly very colorful in appearance due to wavelength-dependent effects in the scattering and extinction of light. The intricate interplay of scattering and extinction cause the color hues, brightness distributions, and the apparent shapes of such nebulae to vary greatly with viewpoint. We describe an interactive visualization tool for realistically rendering the appearance of arbitrary 3D dust distributions surrounding one or more illuminating stars. Our rendering algorithm is based on the physical models used in astrophysics research. The tool can be used to create virtual fly-throughs of reflection nebulae for interactive desktop visualizations, or to produce scientifically accurate animations for educational purposes, e.g., in planetarium shows. The algorithm is also applicable to investigate on-the-fly the visual effects of physical parameter variations, exploiting visualization technology to help gain a deeper and more intuitive understanding of the complex interaction of light and dust in real astrophysical settings.",
"title": ""
},
{
"docid": "9f746a67a960b01c9e33f6cd0fcda450",
"text": "Motivated by large-scale multimedia applications we propose to learn mappings from high-dimensional data to binary codes that preserve semantic similarity. Binary codes are well suited to large-scale applications as they are storage efficient and permit exact sub-linear kNN search. The framework is applicable to broad families of mappings, and uses a flexible form of triplet ranking loss. We overcome discontinuous optimization of the discrete mappings by minimizing a piecewise-smooth upper bound on empirical loss, inspired by latent structural SVMs. We develop a new loss-augmented inference algorithm that is quadratic in the code length. We show strong retrieval performance on CIFAR-10 and MNIST, with promising classification results using no more than kNN on the binary codes.",
"title": ""
},
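The abstract above learns binary codes with a triplet ranking loss, optimizing a piecewise-smooth upper bound via loss-augmented inference. The sketch below conveys only the basic idea with a simple continuous relaxation in PyTorch (tanh as a smooth stand-in for sign, squared distance as a stand-in for Hamming distance); the structural-SVM-style bound and the inference algorithm from the paper are not reproduced, and all sizes are toy values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Hasher(nn.Module):
    """Maps features to k-bit codes; tanh is a smooth surrogate for sign()."""
    def __init__(self, dim_in, k_bits):
        super().__init__()
        self.proj = nn.Linear(dim_in, k_bits)

    def forward(self, x):
        return torch.tanh(self.proj(x))          # values in (-1, 1) during training

    def binarize(self, x):
        return (self.proj(x) > 0).int()          # exact binary codes at test time

def triplet_ranking_loss(anchor, positive, negative, margin=2.0):
    """Penalize triplets where the positive is not closer to the anchor than
    the negative by at least the margin (relaxed Hamming distance)."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(margin + d_pos - d_neg).mean()

net = Hasher(dim_in=128, k_bits=32)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
a, p, n = (torch.randn(64, 128) for _ in range(3))   # toy triplet batch
loss = triplet_ranking_loss(net(a), net(p), net(n))
loss.backward()
opt.step()
print(float(loss))
```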
{
"docid": "cec9f586803ffc8dc5868f6950967a1f",
"text": "This report aims to summarize the field of technological forecasting (TF), its techniques and applications by considering the following questions: • What are the purposes of TF? • Which techniques are used for TF? • What are the strengths and weaknesses of these techniques / how do we evaluate their quality? • Do we need different TF techniques for different purposes/technologies? We also present a brief analysis of how TF is used in practice. We analyze how corporate decisions, such as investing millions of dollars to a new technology like solar energy, are being made and we explore if funding allocation decisions are based on “objective, repeatable, and quantifiable” decision parameters. Throughout the analysis, we compare the bibliometric and semantic-enabled approach of the MIT/MIST Collaborative research project “Technological Forecasting using Data Mining and Semantics” (TFDMS) with the existing studies / practices of TF and where TFDMS fits in and how it will contribute to the general TF field.",
"title": ""
},
{
"docid": "2a79464b8674b689239f4579043bd525",
"text": "In this paper, we extend an attention-based neural machine translation (NMT) model by allowing it to access an entire training set of parallel sentence pairs even after training. The proposed approach consists of two stages. In the first stage– retrieval stage–, an off-the-shelf, black-box search engine is used to retrieve a small subset of sentence pairs from a training set given a source sentence. These pairs are further filtered based on a fuzzy matching score based on edit distance. In the second stage–translation stage–, a novel translation model, called search engine guided NMT (SEG-NMT), seamlessly uses both the source sentence and a set of retrieved sentence pairs to perform the translation. Empirical evaluation on three language pairs (En-Fr, En-De, and En-Es) shows that the proposed approach significantly outperforms the baseline approach and the improvement is more significant when more relevant sentence pairs were retrieved.",
"title": ""
},
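The retrieval stage described above filters search-engine results by a fuzzy matching score based on edit distance. A minimal sketch of such a filter follows; the word-level Levenshtein distance, the 0.5 threshold, and the toy sentences are assumptions for illustration rather than the exact scoring used in SEG-NMT.

```python
def levenshtein(a, b):
    """Word-level edit distance via the standard dynamic program."""
    a, b = a.split(), b.split()
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        cur = [i]
        for j, wb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (wa != wb)))   # substitution
        prev = cur
    return prev[-1]

def fuzzy_score(src, cand):
    """1.0 for identical sentences, approaching 0.0 for very different ones."""
    return 1.0 - levenshtein(src, cand) / max(len(src.split()), len(cand.split()))

# Hypothetical output of the first (search-engine) stage for one source sentence.
source = "the committee adopted the resolution on climate policy"
retrieved = [
    "the committee adopted the resolution on trade policy",
    "members discussed the weather yesterday",
]
kept = [(s, fuzzy_score(source, s)) for s in retrieved if fuzzy_score(source, s) > 0.5]
print(kept)
```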
{
"docid": "c2fc4e65c484486f5612f4006b6df102",
"text": "Although flat item category structure where categories are independent in a same level has been well studied to enhance recommendation performance, in many real applications, item category is often organized in hierarchies to reflect the inherent correlations among categories. In this paper, we propose a novel matrix factorization model by exploiting category hierarchy from the perspectives of users and items for effective recommendation. Specifically, a user (an item) can be influenced (characterized) by her preferred categories (the categories it belongs to) in the hierarchy. We incorporate how different categories in the hierarchy co-influence a user and an item. Empirical results show the superiority of our approach against other counterparts.",
"title": ""
},
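The abstract above factorizes ratings while letting category factors from a hierarchy co-influence users and items. The SGD sketch below illustrates one simple way to realize that idea, representing an item as its own factor plus the mean of its ancestor-category factors; the toy hierarchy, dimensions, and learning-rate values are hypothetical, and the paper's exact co-influence formulation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_cats, k = 50, 40, 7, 8

# Hypothetical hierarchy: each item maps to a leaf category and its parent.
item_cats = {i: [i % n_cats, (i % n_cats) // 2] for i in range(n_items)}

P = 0.1 * rng.normal(size=(n_users, k))        # user factors
Q = 0.1 * rng.normal(size=(n_items, k))        # item-specific factors
C = 0.1 * rng.normal(size=(n_cats, k))         # category factors shared via the hierarchy

def item_vec(i):
    """Item representation = own factor plus the mean of its category factors."""
    return Q[i] + C[item_cats[i]].mean(axis=0)

ratings = [(rng.integers(n_users), rng.integers(n_items), rng.integers(1, 6))
           for _ in range(2000)]

lr, reg = 0.02, 0.05
for _ in range(20):                            # SGD epochs over the toy ratings
    for u, i, r in ratings:
        pu = P[u].copy()
        v = item_vec(i)
        err = r - pu @ v
        P[u] += lr * (err * v - reg * pu)
        Q[i] += lr * (err * pu - reg * Q[i])
        for c in item_cats[i]:                 # categories absorb a share of the gradient
            C[c] += lr * (err * pu / len(item_cats[i]) - reg * C[c])

print("example prediction:", P[0] @ item_vec(0))
```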
{
"docid": "103ebae051da74f14561e3fa976273b6",
"text": "Data-driven generative modeling has made remarkable progress by leveraging the power of deep neural networks. A reoccurring challenge is how to sample a rich variety of data from the entire target distribution, rather than only from the distribution of the training data. In other words, we would like the generative model to go beyond the observed training samples and learn to also generate “unseen” data. In our work, we present a generative neural network for shapes that is based on a part-based prior, where the key idea is for the network to synthesize shapes by varying both the shape parts and their compositions. Treating a shape not as an unstructured whole, but as a (re-)composable set of deformable parts, adds a combinatorial dimension to the generative process to enrich the diversity of the output, encouraging the generator to venture more into the “unseen”. We show that our part-based model generates richer variety of feasible shapes compared with a baseline generative model. To this end, we introduce two quantitative metrics to evaluate the ingenuity of the generative model and assess how well generated data covers both the training data and unseen data from the same target distribution.",
"title": ""
},
{
"docid": "16338883787b5a1ff4df2bb5e9d4f21a",
"text": "The next generations of large-scale data-centers and supercomputers demand optical interconnects to migrate to 400G and beyond. Microring modulators in silicon-photonics VLSI chips are promising devices to meet this demand due to their energy efficiency and compatibility with dense wavelength division multiplexed chip-to-chip optical I/O. Higher order pulse amplitude modulation (PAM) schemes can be exploited to mitigate their fundamental energy–bandwidth tradeoff at the system level for high data rates. In this paper, we propose an optical digital-to-analog converter based on a segmented microring resonator, capable of operating at 20 GS/s with improved linearity over conventional optical multi-level generators that can be used in a variety of applications such as optical arbitrary waveform generators and PAM transmitters. Using this technique, we demonstrate a PAM-4 transmitter that directly converts the digital data into optical levels in a commercially available 45-nm SOI CMOS process. We achieved 40-Gb/s PAM-4 transmission at 42-fJ/b modulator and driver energies, and 685-fJ/b total transmitter energy efficiency with an area bandwidth density of 0.67 Tb/s/mm2. The transmitter incorporates a thermal tuning feedback loop to address the thermal and process variations of microrings’ resonance wavelength. This scheme is suitable for system-on-chip applications with a large number of I/O links, such as switches and general-purpose and specialized processors in large-scale computing and storage systems.",
"title": ""
},
{
"docid": "3a3470d13c9c63af1a62ee7bc57a96ef",
"text": "Cloud computing is a distributed computing model that still faces problems. New ideas emerge to take advantage of its features and among the research challenges found in the cloud, we can highlight Identity and Access Management. The main problems of the application of access control in the cloud are the necessary flexibility and scalability to support a large number of users and resources in a dynamic and heterogeneous environment, with collaboration and information sharing needs. This paper proposes the use of risk-based dynamic access control for cloud computing. The proposal is presented as an access control model based on an extension of the XACML standard with three new components: the Risk Engine, the Risk Quantification Web Services and the Risk Policies. The risk policies present a method to describe risk metrics and their quantification, using local or remote functions. The risk policies allow users and cloud service providers to define how to handle risk-based access control for their resources, using different quantification and aggregation methods. The model reaches the access decision based on a combination of XACML decisions and risk analysis. A prototype of the model is implemented, showing it has enough expressivity to describe the models of related work. In the experimental results, the prototype takes between 2 and 6 milliseconds to reach access decisions using a risk policy. A discussion on the security aspects of the model is also presented.",
"title": ""
},
{
"docid": "136ed8dc00926ceec6d67b9ab35e8444",
"text": "This paper addresses the property requirements of repair materials for high durability performance for concrete structure repair. It is proposed that the high tensile strain capacity of High Performance Fiber Reinforced Cementitious Composites (HPFRCC) makes such materials particularly suitable for repair applications, provided that the fresh properties are also adaptable to those required in placement techniques in typical repair applications. A specific version of HPFRCC, known as Engineered Cementitious Composites (ECC), is described. It is demonstrated that the fresh and hardened properties of ECC meet many of the requirements for durable repair performance. Recent experience in the use of this material in a bridge deck patch repair is highlighted. The origin of this article is a summary of a keynote lecture with the same title given at the Conference on Fiber Composites, High-Performance Concretes and Smart Materials, Chennai, India, Jan., 2004. It is only slightly updated here.",
"title": ""
},
{
"docid": "d2b44c8d6a22eecb3626776a2e5c551c",
"text": "Genes and their protein products are essential molecular units of a living organism. The knowledge of their functions is key for the understanding of physiological and pathological biological processes, as well as in the development of new drugs and therapies. The association of a gene or protein with its functions, described by controlled terms of biomolecular terminologies or ontologies, is named gene functional annotation. Very many and valuable gene annotations expressed through terminologies and ontologies are available. Nevertheless, they might include some erroneous information, since only a subset of annotations are reviewed by curators. Furthermore, they are incomplete by definition, given the rapidly evolving pace of biomolecular knowledge. In this scenario, computational methods that are able to quicken the annotation curation process and reliably suggest new annotations are very important. Here, we first propose a computational pipeline that uses different semantic and machine learning methods to predict novel ontology-based gene functional annotations; then, we introduce a new semantic prioritization rule to categorize the predicted annotations by their likelihood of being correct. Our tests and validations proved the effectiveness of our pipeline and prioritization of predicted annotations, by selecting as most likely manifold predicted annotations that were later confirmed.",
"title": ""
},
{
"docid": "6519ae37d66b3e5524318adc5070223e",
"text": "Powering cellular networks with renewable energy sources via energy harvesting (EH) have recently been proposed as a promising solution for green networking. However, with intermittent and random energy arrivals, it is challenging to provide satisfactory quality of service (QoS) in EH networks. To enjoy the greenness brought by EH while overcoming the instability of the renewable energy sources, hybrid energy supply (HES) networks that are powered by both EH and the electric grid have emerged as a new paradigm for green communications. In this paper, we will propose new design methodologies for HES green cellular networks with the help of Lyapunov optimization techniques. The network service cost, which addresses both the grid energy consumption and achievable QoS, is adopted as the performance metric, and it is optimized via base station assignment and power control (BAPC). Our main contribution is a low-complexity online algorithm to minimize the long-term average network service cost, namely, the Lyapunov optimization-based BAPC (LBAPC) algorithm. One main advantage of this algorithm is that the decisions depend only on the instantaneous side information without requiring distribution information of channels and EH processes. To determine the network operation, we only need to solve a deterministic per-time slot problem, for which an efficient inner-outer optimization algorithm is proposed. Moreover, the proposed algorithm is shown to be asymptotically optimal via rigorous analysis. Finally, sample simulation results are presented to verify the theoretical analysis as well as validate the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "3be195643e5cb658935b20997f7ebdea",
"text": "We describe the structure and functionality of the Internet Cache Protocol (ICP) and its implementation in the Squid Web Caching software. ICP is a lightweight message format used for communication among Web caches. Caches exchange ICP queries and replies to gather information to use in selecting the most appropriate location from which to retrieve an object. We present background on the history of ICP, and discuss issues in ICP deployment, e ciency, security, and interaction with other aspects of Web tra c behavior. We catalog successes, failures, and lessons learned from using ICP to deploy a global Web cache hierarchy.",
"title": ""
},
{
"docid": "9ad276fd2c5166c12c997fa2b7ec8292",
"text": "Recent years have witnessed the rapid proliferation and widespread adoption of a new class of information technologies, commonly known as social media. Researchers often rely on social network analysis (SNA) in attempting to understand these technologies, often without considering how the novel capabilities of social media platforms might affect the underlying theories of SNA, which were developed primarily through studies of offline social networks. This article outlines several key differences between traditional offline social networks and online social media networks by juxtaposing an established typology of social network research with a well-regarded definition of social media platforms that articulates four key features. The results show that at four major points of intersection, social media has considerable theoretical implications for SNA. In exploring these points of intersection, this study outlines a series of theoretically distinctive research questions for SNA in social media contexts. These points of intersection offer considerable opportunities for researchers to investigate the theoretical implications introduced by social media and lay the groundwork for a robust social media agenda potentially spanning multiple disciplines. ***FORTHCOMING AT MIS QUARTERLY, THEORY AND REVIEW***",
"title": ""
},
{
"docid": "e2de8284e14cb3abbd6e3fbcfb5bc091",
"text": "In this paper, novel 2 one-dimensional (1D) Haar-like filtering techniques are proposed as a new and low calculation cost feature extraction method suitable for 3D acceleration signals based human activity recognition. Proposed filtering method is a simple difference filter with variable filter parameters. Our method holds a strong adaptability to various classification problems which no previously studied features (mean, standard deviation, etc.) possessed. In our experiment on human activity recognition, the proposed method achieved both the highest recognition accuracy of 93.91% while reducing calculation cost to 21.22% compared to previous method.",
"title": ""
},
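The record above proposes 1D Haar-like difference filters with variable parameters as cheap features for 3-axis acceleration signals. The sketch below shows what such a filter bank could look like; the window widths, gaps, and segment length are illustrative choices, not the parameters used in the paper.

```python
import numpy as np

def haar_1d_feature(signal, start, width, gap):
    """1D Haar-like difference: mean of one window minus mean of a second
    window of the same width placed `gap` samples later."""
    a = signal[start:start + width]
    b = signal[start + width + gap:start + 2 * width + gap]
    return a.mean() - b.mean()

def extract_features(xyz, widths=(5, 10, 20), gaps=(0, 5)):
    """Apply the filter bank to each acceleration axis of one segment."""
    feats = []
    for axis in range(xyz.shape[1]):
        for w in widths:
            for g in gaps:
                if 2 * w + g <= xyz.shape[0]:
                    feats.append(haar_1d_feature(xyz[:, axis], 0, w, g))
    return np.array(feats)

rng = np.random.default_rng(0)
segment = rng.normal(size=(64, 3))             # 64 samples of 3-axis acceleration
print(extract_features(segment).shape)         # 18 features for this toy filter bank
```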
{
"docid": "c0a3bb7720bd79d496bcf6281f444411",
"text": "Do you dream to create good visualizations for your dataset simply like a Google search? If yes, our visionary systemDeepEye is committed to fulfill this task. Given a dataset and a keyword query, DeepEye understands the query intent, generates and ranks good visualizations. The user can pick the one he likes and do a further faceted search to easily navigate the visualizations. We detail the architecture of DeepEye, key components, as well as research challenges and opportunities.",
"title": ""
}
] |
scidocsrr
|
a3d4a6a12f2916ce2507956d3101f040
|
The interactive performance of SLIM: a stateless, thin-client architecture
|
[
{
"docid": "014f1369be6a57fb9f6e2f642b3a4926",
"text": "VNC is platform-independent – a VNC viewer on one operating system may connect to a VNC server on the same or any other operating system. There are clients and servers for many GUIbased operating systems and for Java. Multiple clients may connect to a VNC server at the same time. Popular uses for this technology include remote technical support and accessing files on one's work computer from one's home computer, or vice versa.",
"title": ""
}
] |
[
{
"docid": "350137bf3c493b23aa6d355df946440f",
"text": "Given the increasing popularity of wearable devices, this paper explores the potential to use wearables for steering and driver tracking. Such capability would enable novel classes of mobile safety applications without relying on information or sensors in the vehicle. In particular, we study how wrist-mounted inertial sensors, such as those in smart watches and fitness trackers, can track steering wheel usage and angle. In particular, tracking steering wheel usage and turning angle provide fundamental techniques to improve driving detection, enhance vehicle motion tracking by mobile devices and help identify unsafe driving. The approach relies on motion features that allow distinguishing steering from other confounding hand movements. Once steering wheel usage is detected, it further uses wrist rotation measurements to infer steering wheel turning angles. Our on-road experiments show that the technique is 99% accurate in detecting steering wheel usage and can estimate turning angles with an average error within 3.4 degrees.",
"title": ""
},
{
"docid": "be3d420dee60602b50a5ae5923c86a88",
"text": "We introduce the concept of dynamically growing a neural network during training. In particular, an untrainable deep network starts as a trainable shallow network and newly added layers are slowly, organically added during training, thereby increasing the network's depth. This is accomplished by a new layer, which we call DropIn. The DropIn layer starts by passing the output from a previous layer (effectively skipping over the newly added layers), then increasingly including units from the new layers for both feedforward and backpropagation. We show that deep networks, which are untrainable with conventional methods, will converge with DropIn layers interspersed in the architecture. In addition, we demonstrate that DropIn provides regularization during training in an analogous way as dropout. Experiments are described with the MNIST dataset and various expanded LeNet architectures, CIFAR-10 dataset with its architecture expanded from 3 to 11 layers, and on the ImageNet dataset with the AlexNet architecture expanded to 13 layers and the VGG 16-layer architecture.",
"title": ""
},
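The DropIn abstract above grows a network during training by increasingly including units from newly added layers while initially passing the previous layer's output through. A small PyTorch sketch of that behavior follows; the per-unit Bernoulli gating and the linear anneal schedule are plausible readings of the description, not the authors' exact formulation, and the wrapped block must preserve the input shape for the skip path to apply.

```python
import torch
import torch.nn as nn

class DropIn(nn.Module):
    """Wraps a newly added block: early in training most units come from the
    skip path (previous layer's output); p is annealed toward 1 so the new
    block is gradually 'grown in' for both forward and backward passes."""
    def __init__(self, block):
        super().__init__()
        self.block = block
        self.p = 0.0                               # fraction of units taken from the new block

    def forward(self, x):
        new = self.block(x)
        if self.training:
            mask = (torch.rand_like(new) < self.p).float()
            return mask * new + (1.0 - mask) * x   # per-unit mix of new and skip paths
        return self.p * new + (1.0 - self.p) * x   # expected mixture at eval time

block = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 32))
layer = DropIn(block)
for step in range(5):                              # toy anneal schedule
    layer.p = min(1.0, step / 4)
    out = layer(torch.randn(8, 32))
print(out.shape)
```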
{
"docid": "0aa0c63a4617bf829753df08c5544791",
"text": "The paper discusses the application program interface (API). Most software projects reuse components exposed through APIs. In fact, current-day software development technologies are becoming inseparable from the large APIs they provide. An API is the interface to implemented functionality that developers can access to perform various tasks. APIs support code reuse, provide high-level abstractions that facilitate programming tasks, and help unify the programming experience. A study of obstacles that professional Microsoft developers faced when learning to use APIs uncovered challenges and resulting implications for API users and designers. The article focuses on the obstacles to learning an API. Although learnability is only one dimension of usability, there's a clear relationship between the two, in that difficult-to-use APIs are likely to be difficult to learn as well. Many API usability studies focus on situations where developers are learning to use an API. The author concludes that as APIs keep growing larger, developers will need to learn a proportionally smaller fraction of the whole. In such situations, the way to foster more efficient API learning experiences is to include more sophisticated means for developers to identify the information and the resources they need-even for well-designed and documented APIs.",
"title": ""
},
{
"docid": "4fabfd530004921901d09134ebfd0eae",
"text": "“Additive Manufacturing Technologies: 3D Printing, Rapid Prototyping, and Direct Digital Manufacturing” is authored by Ian Gibson, David Rosen and Brent Stucker, who collectively possess 60 years’ experience in the fi eld of additive manufacturing (AM). This is the second edition of the book which aims to include current developments and innovations in a rapidly changing fi eld. Its primary aim is to serve as a teaching aid for developing and established curricula, therefore becoming an all-encompassing introductory text for this purpose. It is also noted that researchers may fi nd the text useful as a guide to the ‘state-of-the-art’ and to identify research opportunities. The book is structured to provide justifi cation and information for the use and development of AM by using standardised terminology to conform to standards (American Society for Testing and Materials (ASTM) F42) introduced since the fi rst edition. The basic principles and historical developments for AM are introduced in summary in the fi rst three chapters of the book and this serves as an excellent introduction for the uninitiated. Chapters 4–11 focus on the core technologies of AM individually and, in most cases, in comprehensive detail which gives those interested in the technical application and development of the technologies a solid footing. The remaining chapters provide guidelines and examples for various stages of the process including machine and/or materials selection, design considerations and software limitations, applications and post-processing considerations.",
"title": ""
},
{
"docid": "0bb8e4555509fbd898c01b6fb9ac9279",
"text": "The OASIS standard Devices Profile for Web Services (DPWS) enables the use of Web services on smart and resource-constrained devices, which are the cornerstones of the Internet of Things (IoT). DPWS sees a perspective of being able to build service-oriented and event-driven IoT applications on top of these devices with secure Web service capabilities and a seamless integration into existing World Wide Web infrastructure. We introduce DPWSim, a simulation toolkit to support the development of such applications. DPWSim allows developers to prototype, develop, and test IoT applications using the DPWS technology without the presence of physical devices. It also can be used for the collaboration between manufacturers, developers, and designers during the new product development process.",
"title": ""
},
{
"docid": "72f17106ad48b144ccab55b564fece7d",
"text": "We present an efficient and robust model matching method which uses a joint shape and texture appearance model to generate a set of region template detectors. The model is fitted to an unseen image in an iterative manner by generating templates using the joint model and the current parameter estimates, correlating the templates with the target image to generate response images and optimising the shape parameters so as to maximise the sum of responses. The appearance model is similar to that used in the AAM [1]. However in our approach the appearance model is used to generate likely feature templates, instead of trying to approximate the image pixels directly. We show that when applied to human faces, our Constrained Local Model (CLM) algorithm is more robust and more accurate than the original AAM search method, which relies on the image reconstruction error to update the model parameters. We demonstrate improved localisation accuracy on two publicly available face data sets and improved tracking on a challenging set of in-car face sequences.",
"title": ""
},
{
"docid": "4beb0193ce98da0cfd625da7a033d257",
"text": "BACKGROUND\nThere are well-established relations between personality and the heart, as evidenced by associations between negative emotions on the one hand, and coronary heart disease or chronic heart failure on the other. However, there are substantial gaps in our knowledge about relations between the heart and personality in healthy individuals. Here, we investigated whether amplitude patterns of the electrocardiogram (ECG) correlate with neurotisicm, extraversion, agreeableness, warmth, positive emotion, and tender-mindedness as measured with the Neuroticism-Extraversion-Openness (NEO) personality inventory. Specifically, we investigated (a) whether a cardiac amplitude measure that was previously reported to be related to flattened affectivity (referred to as Eκ values) would explain variance of NEO scores, and (b) whether correlations can be found between NEO scores and amplitudes of the ECG.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nNEO scores and rest ECGs were obtained from 425 healthy individuals. Neuroticism and positive emotion significantly differed between individuals with high and low Eκ values. In addition, stepwise cross-validated regressions indicated correlations between ECG amplitudes and (a) agreeableness, as well as (b) positive emotion.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThese results are the first to demonstrate that ECG amplitude patterns provide information about the personality of an individual as measured with NEO personality scales and facets. These findings open new perspectives for a more efficient personality assessment using cardiac measures, as well as for more efficient risk-stratification and pre-clinical diagnosis of individuals at risk for cardiac, affective and psychosomatic disorders.",
"title": ""
},
{
"docid": "6e1c0cd2b1cb993ab9e78f7aac846264",
"text": "the content of «technical» realization of three special methods during criminalistic cognition: criminalistic identification, criminalistic diagnostics and criminalistic classification. Criminalistic technics (as a system of knowledge) is a branch of the special part of criminalistic theory describing and explaining regularities of emergence of materially fixed traces during investigation of criminal offences. It’s for finding and examining concrete technical means, knowledge and skills are already worked out and recommended.",
"title": ""
},
{
"docid": "0d6d2413cbaaef5354cf2bcfc06115df",
"text": "Bibliometric and “tech mining” studies depend on a crucial foundation—the search strategy used to retrieve relevant research publication records. Database searches for emerging technologies can be problematic in many respects, for example the rapid evolution of terminology, the use of common phraseology, or the extent of “legacy technology” terminology. Searching on such legacy terms may or may not pick up R&D pertaining to the emerging technology of interest. A challenge is to assess the relevance of legacy terminology in building an effective search model. Common-usage phraseology additionally confounds certain domains in which broader managerial, public interest, or other considerations are prominent. In contrast, searching for highly technical topics is relatively straightforward. In setting forth to analyze “Big Data,” we confront all three challenges—emerging terminology, common usage phrasing, and intersecting legacy technologies. In response, we have devised a systematic methodology to help identify research relating to Big Data. This methodology uses complementary search approaches, starting with a Boolean search model and subsequently employs contingency term sets to further refine the selection. The four search approaches considered are: (1) core lexical query, (2) expanded lexical query, (3) specialized journal search, and (4) cited reference analysis. Of special note here is the use of a “Hit-Ratio” that helps distinguish Big Data elements from less relevant legacy technology terms. We believe that such a systematic search development positions us to do meaningful analyses of Big Data research patterns, connections, and trajectories. Moreover, we suggest that such a systematic search approach can help formulate more replicable searches with high recall and satisfactory precision for other emerging technology studies.",
"title": ""
},
{
"docid": "b68a728f4e737f293dca0901970b41fe",
"text": "With maturity of advanced technologies and urgent requirement for maintaining a healthy environment with reasonable price, China is moving toward a trend of generating electricity from renewable wind resources. How to select a suitable wind farm becomes an important focus for stakeholders. This paper first briefly introduces wind farm and then develops its critical success criteria. A new multi-criteria decision-making (MCDM) model, based on the analytic hierarchy process (AHP) associated with benefits, opportunities, costs and risks (BOCR), is proposed to help select a suitable wind farm project. Multiple factors that affect the success of wind farm operations are analyzed by taking into account experts’ opinions, and a performance ranking of the wind farms is generated. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4c3a7002536a825b73607c45a6b36cb4",
"text": "In this article we take an empirical cross-country perspective to investigate the robustness and causality of the link between income inequality and crime rates. First, we study the correlation between the Gini index and, respectively, homicide and robbery rates along different dimensions of the data (within and between countries). Second, we examine the inequality-crime link when other potential crime determinants are controlled for. Third, we control for the likely joint endogeneity of income inequality in order to isolate its exogenous impact on homicide and robbery rates. Fourth, we control for the measurement error in crime rates by modelling it as both unobserved country-specific effects and random noise. Lastly, we examine the robustness of the inequality-crime link to alternative measures of inequality. The sample for estimation consists of panels of non-overlapping 5-year averages for 39 countries over 1965-95 in the case of homicides, and 37 countries over 1970-1994 in the case of robberies. We use a variety of statistical techniques, from simple correlations to regression analysis and from static OLS to dynamic GMM estimation. We find that crime rates and inequality are positively correlated (within each country and, particularly, between countries), and it appears that this correlation reflects causation from inequality to crime rates, even controlling for other crime determinants. * We are grateful for comments and suggestions from Francois Bourguignon, Dante Contreras, Francisco Ferreira, Edward Glaeser, Sam Peltzman, Debraj Ray, Luis Servén, and an anonymous referee. N. Loayza worked at the research group of the Central Bank of Chile during the preparation of the paper. This study was sponsored by the Latin American Regional Studies Program, The World Bank. The opinions and conclusions expressed here are those of the authors and do not necessarily represent the views of the institutions to which they are affiliated.",
"title": ""
},
{
"docid": "5d76b2578fa2aa05a607ab0a542ab81f",
"text": "60 A practical approach to the optimal design of precast, prestressed concrete highway bridge girder systems is presented. The approach aims at standardizing the optimal design of bridge systems, as opposed to standardizing girder sections. Structural system optimization is shown to be more relevant than conventional girder optimization for an arbitrarily chosen structural system. Bridge system optimization is defined as the optimization of both longitudinal and transverse bridge configurations (number of spans, number of girders, girder type, reinforcements and tendon layout). As a result, the preliminary design process is much simplified by using some developed design charts from which selection of the optimum bridge system, number and type of girders, and amounts of prestressed and non-prestressed reinforcements are easily obtained for a given bridge length, width and loading type.",
"title": ""
},
{
"docid": "05eeadabcb4b7599e8bbcee96f0147eb",
"text": "Convolutional Neural Network(CNN) becomes one of the most preferred deep learning method because of achieving superior success at solution of important problems of machine learning like pattern recognition, object recognition and classification. With CNN, high performance has been obtained in traffic sign recognition which is important for autonomous vehicles. In this work, two-stage hierarchical CNN structure is proposed. Signs are seperated into 9 main groups at the first stage by using structure similarity index. And then classes of each main group are subclassed with CNNs at the second stage. Performance of the network is measured on 43-classes GTSRB dataset and compared with other methods.",
"title": ""
},
{
"docid": "0df1a15c02c29d9462356641fbe78b43",
"text": "Localization is an essential and important research issue in wireless sensor networks (WSNs). Most localization schemes focus on static sensor networks. However, mobile sensors are required in some applications such that the sensed area can be enlarged. As such, a localization scheme designed for mobile sensor networks is necessary. In this paper, we propose a localization scheme to improve the localization accuracy of previous work. In this proposed scheme, the normal nodes without location information can estimate their own locations by gathering the positions of location-aware nodes (anchor nodes) and the one-hop normal nodes whose locations are estimated from the anchor nodes. In addition, we propose a scheme that predicts the moving direction of sensor nodes to increase localization accuracy. Simulation results show that the localization error in our proposed scheme is lower than the previous schemes in various mobility models and moving speeds.",
"title": ""
},
{
"docid": "65f2651ec987ece0de560d9ac65e06a8",
"text": "This paper describes neural network models that we prepared for the author profiling task of PAN@CLEF 2017. In previous PAN series, statistical models using a machine learning method with a variety of features have shown superior performances in author profiling tasks. We decided to tackle the author profiling task using neural networks. Neural networks have recently shown promising results in NLP tasks. Our models integrate word information and character information with multiple neural network layers. The proposed models have marked joint accuracies of 64–86% in the gender identification and the language variety identification of four languages.",
"title": ""
},
{
"docid": "6c0b700a5c195cdf58175b5253fd2aaa",
"text": "In this study, we propose a speaker-dependent WaveNet vocoder, a method of synthesizing speech waveforms with WaveNet, by utilizing acoustic features from existing vocoder as auxiliary features of WaveNet. It is expected that WaveNet can learn a sample-by-sample correspondence between speech waveform and acoustic features. The advantage of the proposed method is that it does not require (1) explicit modeling of excitation signals and (2) various assumptions, which are based on prior knowledge specific to speech. We conducted both subjective and objective evaluation experiments on CMUARCTIC database. From the results of the objective evaluation, it was demonstrated that the proposed method could generate high-quality speech with phase information recovered, which was lost by a mel-cepstrum vocoder. From the results of the subjective evaluation, it was demonstrated that the sound quality of the proposed method was significantly improved from mel-cepstrum vocoder, and the proposed method could capture source excitation information more accurately.",
"title": ""
},
{
"docid": "d7a2708fc70f6480d9026aeefce46610",
"text": "In order to study the differential protein expression in complex biological samples, strategies for rapid, highly reproducible and accurate quantification are necessary. Isotope labeling and fluorescent labeling techniques have been widely used in quantitative proteomics research. However, researchers are increasingly turning to label-free shotgun proteomics techniques for faster, cleaner, and simpler results. Mass spectrometry-based label-free quantitative proteomics falls into two general categories. In the first are the measurements of changes in chromatographic ion intensity such as peptide peak areas or peak heights. The second is based on the spectral counting of identified proteins. In this paper, we will discuss the technologies of these label-free quantitative methods, statistics, available computational software, and their applications in complex proteomics studies.",
"title": ""
},
{
"docid": "d3d5f135cc2a09bf0dfc1ef88c6089b5",
"text": "In this paper, we present the Expert Hub System, which was designed to help governmental structures find the best experts in different areas of expertise for better reviewing of the incoming grant proposals. In order to define the areas of expertise with topic modeling and clustering, and then to relate experts to corresponding areas of expertise and rank them according to their proficiency in certain areas of expertise, the Expert Hub approach uses the data from the Directorate of Science and Technology Programmes. Furthermore, the paper discusses the use of Big Data and Machine Learning in the Russian government",
"title": ""
},
{
"docid": "c4dfe9eb3aa4d082e96815d8c610968d",
"text": "In this paper, we consider the problem of predicting demographics of geographic units given geotagged Tweets that are composed within these units. Traditional survey methods that offer demographics estimates are usually limited in terms of geographic resolution, geographic boundaries, and time intervals. Thus, it would be highly useful to develop computational methods that can complement traditional survey methods by offering demographics estimates at finer geographic resolutions, with flexible geographic boundaries (i.e. not confined to administrative boundaries), and at different time intervals. While prior work has focused on predicting demographics and health statistics at relatively coarse geographic resolutions such as the county-level or state-level, we introduce an approach to predict demographics at finer geographic resolutions such as the blockgroup-level. For the task of predicting gender and race/ethnicity counts at the blockgrouplevel, an approach adapted from prior work to our problem achieves an average correlation of 0.389 (gender) and 0.569 (race) on a held-out test dataset. Our approach outperforms this prior approach with an average correlation of 0.671 (gender) and 0.692 (race).",
"title": ""
}
] |
scidocsrr
|
ff5fb7253b0f45f9669f6f94188bdf32
|
Adaptive, Model-driven Autoscaling for Cloud Applications
|
[
{
"docid": "40cea15a4fbe7f939a490ea6b6c9a76a",
"text": "An application provider leases resources (i.e., virtual machine instances) of variable configurations from a IaaS provider over some lease duration (typically one hour). The application provider (i.e., consumer) would like to minimize their cost while meeting all service level obligations (SLOs). The mechanism of adding and removing resources at runtime is referred to as autoscaling. The process of autoscaling is automated through the use of a management component referred to as an autoscaler. This paper introduces a novel autoscaling approach in which both cloud and application dynamics are modeled in the context of a stochastic, model predictive control problem. The approach exploits trade-off between satisfying performance related objectives for the consumer's application while minimizing their cost. Simulation results are presented demonstrating the efficacy of this new approach.",
"title": ""
},
{
"docid": "38d3dc6b5eb1dbf85b1a371b645a17da",
"text": "Energy costs for data centers continue to rise, already exceeding $15 billion yearly. Sadly much of this power is wasted. Servers are only busy 10--30% of the time on average, but they are often left on, while idle, utilizing 60% or more of peak power when in the idle state.\n We introduce a dynamic capacity management policy, AutoScale, that greatly reduces the number of servers needed in data centers driven by unpredictable, time-varying load, while meeting response time SLAs. AutoScale scales the data center capacity, adding or removing servers as needed. AutoScale has two key features: (i) it autonomically maintains just the right amount of spare capacity to handle bursts in the request rate; and (ii) it is robust not just to changes in the request rate of real-world traces, but also request size and server efficiency.\n We evaluate our dynamic capacity management approach via implementation on a 38-server multi-tier data center, serving a web site of the type seen in Facebook or Amazon, with a key-value store workload. We demonstrate that AutoScale vastly improves upon existing dynamic capacity management policies with respect to meeting SLAs and robustness.",
"title": ""
}
] |
[
{
"docid": "b2f1fca7a05423c06cea45600582520a",
"text": "In Software Abstractions Daniel Jackson introduces an approach tosoftware design that draws on traditional formal methods but exploits automated tools to find flawsas early as possible. This approach--which Jackson calls \"lightweight formal methods\" or\"agile modeling\"--takes from formal specification the idea of a precise and expressivenotation based on a tiny core of simple and robust concepts but replaces conventional analysis basedon theorem proving with a fully automated analysis that gives designers immediate feedback. Jacksonhas developed Alloy, a language that captures the essence of software abstractions simply andsuccinctly, using a minimal toolkit of mathematical notions. This revised edition updates the text,examples, and appendixes to be fully compatible with the latest version of Alloy (Alloy 4).The designer can use automated analysis not only to correct errors but also tomake models that are more precise and elegant. This approach, Jackson says, can rescue designersfrom \"the tarpit of implementation technologies\" and return them to thinking deeply aboutunderlying concepts. Software Abstractions introduces the key elements: a logic,which provides the building blocks of the language; a language, which adds a small amount of syntaxto the logic for structuring descriptions; and an analysis, a form of constraint solving that offersboth simulation (generating sample states and executions) and checking (finding counterexamples toclaimed properties).",
"title": ""
},
{
"docid": "035f780309fc777ece17cbfe4aabc01b",
"text": "The phenolic composition and antibacterial and antioxidant activities of the green alga Ulva rigida collected monthly for 12 months were investigated. Significant differences in antibacterial activity were observed during the year with the highest inhibitory effect in samples collected during spring and summer. The highest free radical scavenging activity and phenolic content were detected in U. rigida extracts collected in late winter (February) and early spring (March). The investigation of the biological properties of U. rigida fractions collected in spring (April) revealed strong antimicrobial and antioxidant activities. Ethyl acetate and n-hexane fractions exhibited substantial acetylcholinesterase inhibitory capacity with EC50 of 6.08 and 7.6 μg mL−1, respectively. The total lipid, protein, ash, and individual fatty acid contents of U. rigida were investigated. The four most abundant fatty acids were palmitic, oleic, linolenic, and eicosenoic acids.",
"title": ""
},
{
"docid": "b7c7984f10f5e55de0c497798b1d64ac",
"text": "The relationships between personality traits and performance are often assumed to be linear. This assumption has been challenged conceptually and empirically, but results to date have been inconclusive. In the current study, we took a theory-driven approach in systematically addressing this issue. Results based on two different samples generally supported our expectations of the curvilinear relationships between personality traits, including Conscientiousness and Emotional Stability, and job performance dimensions, including task performance, organizational citizenship behavior, and counterproductive work behaviors. We also hypothesized and found that job complexity moderated the curvilinear personality–performance relationships such that the inflection points after which the relationships disappear were lower for low-complexity jobs than they were for high-complexity jobs. This finding suggests that high levels of the two personality traits examined are more beneficial for performance in high- than low-complexity jobs. We conclude by discussing the implications of these findings for the use of personality in personnel selection.",
"title": ""
},
{
"docid": "9ac6a33be64cbdd46a4d2a8bd101f9b5",
"text": "Cloud computing and Internet of Things (IoT) are computing technologies that provide services to consumers and businesses, allowing organizations to become more agile and flexible. Therefore, ensuring quality of service (QoS) through service-level agreements (SLAs) for such cloud-based services is crucial for both the service providers and service consumers. As SLAs are critical for cloud deployments and wider adoption of cloud services, the management of SLAs in cloud and IoT has thus become an important and essential aspect. This paper investigates the existing research on the management of SLAs in IoT applications that are based on cloud services. For this purpose, a systematic mapping study (a well-defined method) is conducted to identify the published research results that are relevant to SLAs. This paper identifies 328 primary studies and categorizes them into seven main technical classifications: SLA management, SLA definition, SLA modeling, SLA negotiation, SLA monitoring, SLA violation and trustworthiness, and SLA evolution. This paper also summarizes the research types, research contributions, and demographic information in these studies. The evaluation of the results shows that most of the approaches for managing SLAs are applied in academic or controlled experiments with limited industrial settings rather than in real industrial environments. Many studies focus on proposal models and methods to manage SLAs, and there is a lack of focus on the evolution perspective and a lack of adequate tool support to facilitate practitioners in their SLA management activities. Moreover, the scarce number of studies focusing on concrete metrics for qualitative or quantitative assessment of QoS in SLAs urges the need for in-depth research on metrics definition and measurements for SLAs.",
"title": ""
},
{
"docid": "e99c12645fd14528a150f915b3849c2b",
"text": "Teaching in the cyberspace classroom requires moving beyond old models of. pedagogy into new practices that are more facilitative. It involves much more than simply taking old models of pedagogy and transferring them to a different medium. Unlike the face-to-face classroom, in online distance education, attention needs to be paid to the development of a sense of community within the group of participants in order for the learning process to be successful. The transition to the cyberspace classroom can be successfully achieved if attention is paid to several key areas. These include: ensuring access to and familiarity with the technology in use; establishing guidelines and procedures which are relatively loose and free-flowing, and generated with significant input from participants; striving to achieve maximum participation and \"buy-in\" from the participants; promoting collaborative learning; and creating a double or triple loop in the learning process to enable participants to reflect on their learning process. All of these practices significantly contribute to the development of an online learning community, a powerful tool for enhancing the learning experience. Each of these is reviewed in detail in the paper. (AEF) Reproductions supplied by EDRS are the best that can be made from the original document. Making the Transition: Helping Teachers to Teach Online Rena M. Palloff, Ph.D. Crossroads Consulting Group and The Fielding Institute Alameda, CA",
"title": ""
},
{
"docid": "4627d8e86bec798979962847523cc7e0",
"text": "Consuming news over online media has witnessed rapid growth in recent years, especially with the increasing popularity of social media. However, the ease and speed with which users can access and share information online facilitated the dissemination of false or unverified information. One way of assessing the credibility of online news stories is by examining the attached images. These images could be fake, manipulated or not belonging to the context of the accompanying news story. Previous attempts to news verification provided the user with a set of related images for manual inspection. In this work, we present a semi-automatic approach to assist news-consumers in instantaneously assessing the credibility of information in hypertext news articles by means of meta-data and feature analysis of images in the articles. In the first phase, we use a hybrid approach including image and text clustering techniques for checking the authenticity of an image. In the second phase, we use a hierarchical feature analysis technique for checking the alteration in an image, where different sets of features, such as edges and SURF, are used. In contrast to recently reported manual news verification, our presented work shows a quantitative measurement on a custom dataset. Results revealed an accuracy of 72.7% for checking the authenticity of attached images with a dataset of 55 articles. Finding alterations in images resulted in an accuracy of 88% for a dataset of 50 images.",
"title": ""
},
{
"docid": "4dd2fc66b1a2f758192b02971476b4cc",
"text": "Although efforts have been directed toward the advancement of women in science, technology, engineering, and mathematics (STEM) positions, little research has directly examined women's perspectives and bottom-up strategies for advancing in male-stereotyped disciplines. The present study utilized Photovoice, a Participatory Action Research method, to identify themes that underlie women's experiences in traditionally male-dominated fields. Photovoice enables participants to convey unique aspects of their experiences via photographs and their in-depth knowledge of a community through personal narrative. Forty-six STEM women graduate students and postdoctoral fellows completed a Photovoice activity in small groups. They presented photographs that described their experiences pursuing leadership positions in STEM fields. Three types of narratives were discovered and classified: career strategies, barriers to achievement, and buffering strategies or methods for managing barriers. Participants described three common types of career strategies and motivational factors, including professional development, collaboration, and social impact. Moreover, the lack of rewards for these workplace activities was seen as limiting professional effectiveness. In terms of barriers to achievement, women indicated they were not recognized as authority figures and often worked to build legitimacy by fostering positive relationships. Women were vigilant to other people's perspectives, which was costly in terms of time and energy. To manage role expectations, including those related to gender, participants engaged in numerous role transitions throughout their day to accommodate workplace demands. To buffer barriers to achievement, participants found resiliency in feelings of accomplishment and recognition. Social support, particularly from mentors, helped participants cope with negative experiences and to envision their future within the field. Work-life balance also helped participants find meaning in their work and have a sense of control over their lives. Overall, common workplace challenges included a lack of social capital and limited degrees of freedom. Implications for organizational policy and future research are discussed.",
"title": ""
},
{
"docid": "5525b8ddce9a8a6430da93f48e93dea5",
"text": "One major goal of vision is to infer physical models of objects, surfaces, and their layout from sensors. In this paper, we aim to interpret indoor scenes from one RGBD image. Our representation encodes the layout of walls, which must conform to a Manhattan structure but is otherwise flexible, and the layout and extent of objects, modeled with CAD-like 3D shapes. We represent both the visible and occluded portions of the scene, producing a complete 3D parse. Such a scene interpretation is useful for robotics and visual reasoning, but difficult to produce due to the wellknown challenge of segmentation, the high degree of occlusion, and the diversity of objects in indoor scene. We take a data-driven approach, generating sets of potential object regions, matching to regions in training images, and transferring and aligning associated 3D models while encouraging fit to observations and overall consistency. We demonstrate encouraging results on the NYU v2 dataset and highlight a variety of interesting directions for future work.",
"title": ""
},
{
"docid": "bd3b9d9e8a1dc39f384b073765175de6",
"text": "We generalize the stochastic block model to the important case in which edges are annotated with weights drawn from an exponential family distribution. This generalization introduces several technical difficulties for model estimation, which we solve using a Bayesian approach. We introduce a variational algorithm that efficiently approximates the model’s posterior distribution for dense graphs. In specific numerical experiments on edge-weighted networks, this weighted stochastic block model outperforms the common approach of first applying a single threshold to all weights and then applying the classic stochastic block model, which can obscure latent block structure in networks. This model will enable the recovery of latent structure in a broader range of network data than was previously possible.",
"title": ""
},
{
"docid": "e3db1429e8821649f35270609459cb0d",
"text": "Novelty detection is the task of recognising events the differ from a model of normality. This paper proposes an acoustic novelty detector based on neural networks trained with an adversarial training strategy. The proposed approach is composed of a feature extraction stage that calculates Log-Mel spectral features from the input signal. Then, an autoencoder network, trained on a corpus of “normal” acoustic signals, is employed to detect whether a segment contains an abnormal event or not. A novelty is detected if the Euclidean distance between the input and the output of the autoencoder exceeds a certain threshold. The innovative contribution of the proposed approach resides in the training procedure of the autoencoder network: instead of using the conventional training procedure that minimises only the Minimum Mean Squared Error loss function, here we adopt an adversarial strategy, where a discriminator network is trained to distinguish between the output of the autoencoder and data sampled from the training corpus. The autoencoder, then, is trained also by using the binary cross-entropy loss calculated at the output of the discriminator network. The performance of the algorithm has been assessed on a corpus derived from the PASCAL CHiME dataset. The results showed that the proposed approach provides a relative performance improvement equal to 0.26% compared to the standard autoencoder. The significance of the improvement has been evaluated with a one-tailed z-test and resulted significant with p < 0.001. The presented approach thus showed promising results on this task and it could be extended as a general training strategy for autoencoders if confirmed by additional experiments.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "8ca6e0b5c413cc228af0d64ce8cf9d3b",
"text": "On January 8, a Database Column reader asked for our views on new distributed database research efforts, and we'll begin here with our views on MapReduce. This is a good time to discuss it, since the recent trade press has been filled with news of the revolution of so-called \"cloud computing.\" This paradigm entails harnessing large numbers of (low-end) processors working in parallel to solve a computing problem. In effect, this suggests constructing a data center by lining up a large number of \"jelly beans\" rather than utilizing a much smaller number of high-end servers.",
"title": ""
},
{
"docid": "12c947a09e6dbaeca955b18900912b96",
"text": "A two stages car detection method using deformable part models with composite feature sets (DPM/CF) is proposed to recognize cars of various types and from multiple viewing angles. In the first stage, a HOG template is matched to detect the bounding box of the entire car of a certain type and viewed from a certain angle (called a t/a pair), which yields a region of interest (ROI). In the second stage, various part detectors using either HOG or the convolution neural network (CNN) features are applied to the ROI for validation. An optimization procedure based on latent logistic regression is adopted to select the optimal part detector's location, window size, and feature to use. Extensive experimental results indicate the proposed DPM/CF system can strike a balance between detection accuracy and training complexity.",
"title": ""
},
{
"docid": "69acb21a36cd8fc31978058897b35942",
"text": "Designing a driving policy for autonomous vehicles is a difficult task. Recent studies suggested an end-toend (E2E) training of a policy to predict car actuators directly from raw sensory inputs. It is appealing due to the ease of labeled data collection and since handcrafted features are avoided. Explicit drawbacks such as interpretability, safety enforcement and learning efficiency limit the practical application of the approach. In this paper, we amend the basic E2E architecture to address these shortcomings, while retaining the power of end-to-end learning. A key element in our proposed architecture is formulation of the learning problem as learning of trajectory. We also apply a Gaussian mixture model loss to contend with multi-modal data, and adopt a finance risk measure, conditional value at risk, to emphasize rare events. We analyze the effect of each concept and present driving performance in a highway scenario in the TORCS simulator. Video is available in this link.",
"title": ""
},
{
"docid": "91365154a173be8be29ef14a3a76b08e",
"text": "Fraud is a criminal practice for illegitimate gain of wealth or tampering information. Fraudulent activities are of critical concern because of their severe impact on organizations, communities as well as individuals. Over the last few years, various techniques from different areas such as data mining, machine learning, and statistics have been proposed to deal with fraudulent activities. Unfortunately, the conventional approaches display several limitations, which were addressed largely by advanced solutions proposed in the advent of Big Data. In this paper, we present fraud analysis approaches in the context of Big Data. Then, we study the approaches rigorously and identify their limits by exploiting Big Data analytics.",
"title": ""
},
{
"docid": "6e5792c73b34eacc7bef2c8777da5147",
"text": "Neural network machine translation systems have recently demonstrated encouraging results. We examine the performance of a recently proposed recurrent neural network model for machine translation on the task of Japanese-to-English translation. We observe that with relatively little training the model performs very well on a small hand-designed parallel corpus, and adapts to grammatical complexity with ease, given a small vocabulary. The success of this model on a small corpus warrants more investigation of its performance on a larger corpus.",
"title": ""
},
{
"docid": "c322b725e87bc9d9aad40e50b3696f0a",
"text": "In this paper we give a somewhat personal and perhaps biased overview of the field of Computer Vision. First, we define computer vision and give a very brief history of it. Then, we outline some of the reasons why computer vision is a very difficult research field. Finally, we discuss past, present, and future applications of computer vision. Especially, we give some examples of future applications which we think are very promising. 1 What is Computer Vision? Computer Vision has a dual goal. From the biological science point of view, computer vision aims to come up with computational models of the human visual system. From the engineering point of view, computer vision aims to build autonomous systems which could perform some of the tasks which the human visual system can perform (and even surpass it in many cases). Many vision tasks are related to the extraction of 3D and temporal information from time-varying 2D data such as obtained by one or more television cameras, and more generally the understanding of such dynamic scenes. Of course, the two goals are intimately related. The properties and characteristics of the human visual system often give inspiration to engineers who are designing computer vision systems. Conversely, computer vision algorithms can offer insights into how the human visual system works. In this paper we shall adopt the engineering point of view. 2 History of Computer Vision It is commonly accepted that the father of Computer Vision is Larry Roberts, who in his Ph.D. thesis (cir. 1960) at MIT discussed the possibilities of extracting 3D geometrical information from 2D perspective views of blocks (polyhedra) [1]. Many researchers, at MIT and elsewhere, in Artificial Intelligence, followed this work and studied computer vision in the context of the blocks world. Later, researchers realized that it was necessary to tackle images from the real world. Thus, much research was needed in the so called ``low-level” vision tasks such as edge detection and segmentation. A major milestone was the framework proposed by David Marr (cir. 1978) at MIT, who took a bottom-up approach to scene understanding [2]. Low-level image processing algorithms are applied to 2D images to obtain the ``primal sketch” (directed edge segments, etc.), from which a 2.5 D sketch of the scene is obtained using binocular stereo. Finally, high-level (structural analysis, a priori knowledge) techniques are used to get 3D model representations of the objects in the scene. This is probably the single most influential work in computer vision ever. Many researchers cried: ``From the paradigm created for us by Marr, no one can drive us out.” Nonetheless, more recently a number of computer vision researchers realized some of the limitation of Marr’s paradigm, and advocated a more top-down and heterogeneous approach. Basically, the program of Marr is extremely difficult to carry out, but more important, for many if not most computer vision applications, it is not necessary to get complete 3D object models. For example, in autonomous vehicle navigation using computer vision, it may be necessary to find out only whether an object is moving away from or toward your vehicle, but not the exact 3D motion of the object. This new paradigm is sometimes called ``Purposive Vision” implying that the algorithms should be goal driven and in many cases could be qualitative [3]. One of the main advocates of this new paradigm is Yiannis Aloimonos, University of Maryland. 
Looking over the history of computer vision, it is important to note that because of the broad spectrum of potential applications, the trend has been the merge of computer vision with other closely related fields. These include: Image processing (the raw images have to be processed before further analysis). Photogrammetry (cameras used for imaging have to be calibrated. Determining object poses in 3D is important in both computer vision and photogrammetry). Computer graphics (3D modeling is central to both computer vision and computer graphics. Many exciting applications need both computer vision and computer graphics see Section 4). 3 Why is Computer Vision Difficult? Computer Vision as a field of research is notoriously difficult. Almost no research problem has been satisfactorily solved. One main reason for this difficulty is that the human visual system is simply too good for many tasks (e.g., face recognition), so that computer vision systems suffer by comparison. A human can recognize faces under all kinds of variations in illumination, viewpoint, expression, etc. In most cases we have no difficulty in recognizing a friend in a photograph taken many years ago. Also, there appears to be no limit on how many faces we can store in our brains for future recognition. There appears no hope in building an autonomous system with such stellar performance. Two major related difficulties in computer vision can be identified: 1. How do we distill and represent the vast amount of human knowledge in a computer in such a way that retrieval is easy? 2. How do we carry out (in both hardware and software) the vast amount of computation that is often required in such a way that the task (such as face recognition) can be done in real time? 4 Application of Computer Vision: Past, Present, and Future Past and present applications of computer vision include: Autonomous navigation, robotic assembly, and industrial inspections. At best, the results have been mixed. (I am excluding industrial inspection applications which involve only 2D image processing and pattern. recognition.) The main difficulty is that computer vision algorithms are almost all brittle; an algorithm may work in some cases but not in others. My opinion is that in order for a computer vision application to be potentially successful, it has to satisfy two criteria: 1)Possibility of human interaction. 2) Forgiving (i.e., some mistakes are tolerable). It also needs to be emphasized that in many applications vision should be combined with other modalities (such as audio) to achieve the goals. Measured against these two criteria, some of the exciting computer vision applications which can be potentially very successful include: Image/video databases-Image content-based indexing and retrieval. Vision-based human computer interface e.g., using gesture (combined with speech) in interacting with virtual environments. Virtual agent/actor generating scenes of a synthetic person based on parameters extracted from video sequences of a real person. It is heartening to see that a number of researchers in computer vision have already started to delve into these and related applications. 5 Characterizing Human Facial Expressions: Smile To conclude this paper, we would like to give a very brief summary of a research project we are undertaking at our Institute which is relevant to two of the applications mentioned in the last Section, namely, vision-based human computer interface, and virtual agent/actors, as well as many other applications. 
Details of this project can be found in Ref. 4. Different people usually express their emotional feelings in different ways. An interesting question is the number of canonical facial expressions for a given emotion. This would lead to applications in human computer interface, virtual agent/actor, as well as model-based video compression scenarios, such as video-phone. Take smile as an example. Suppose, by facial motion analysis, there are 16 categories found among all smiles posed by different people. Smiles within each category can be approximately represented by a single smile which could be called a canonical smile. The facial movements associated with each canonical smile can be designed in advance. A new smile is recognized and replaced by the canonical smile at the transmitting side; only the index of that canonical smile needs to be transmitted. At the receiving side, this canonical smile will be reconstructed to express that person's happiness. We are using an approach to the characterization of facial expressions based on the principal component analysis of the facial motion parameters. Smile is used as an example; however, the methodology can be generalized to other facial expressions. A database consisting of a number of different people's smiles is first collected. Two frames are chosen from each smile sequence, a neutral face image and an image where the smile reaches its apex. The motion vectors of a set of feature points are derived from these two images and a feature space is created. Each smile is designated by a point in this feature space. The principal component analysis technique is used for dimension reduction and some preliminary results of smile characterization are obtained. Some dynamic characteristics of smile are also studied. For smiles, the most significant part on the face is the mouth. Therefore, four points around the mouth are chosen as the feature points for smile characterization: The two corners of the mouth and the mid-points of the upper and lower lip boundaries. About 60 people volunteered to show their smiles. These four points are identified in the two end frames of each smiling sequence, i.e., the neutral face image and the one in which the smile reaches its apex. The two face images are first registered based on some fixed features, e.g., the eye corners and the nostrils. In this way, the global motion of the head can be compensated for since only the local facial motions during smiles are of interest. Thus, every smile is represented by four vectors which point from the feature points on the neutral face image to the corresponding feature points on the smiling face image. These motion vectors are further normalized according to the two mouth corner points. Then, each component of these vectors serves as one dimension of the ``smile feature space.” In our experiments to date, these are 2D vectors. Thus, the dimensionality of the smile feature space is 8. Principal component",
"title": ""
},
{
"docid": "5640d9307fa3d1b611358d3f14d5fb4c",
"text": "An N-LDMOS ESD protection device with drain back and PESD optimization design is proposed. With PESD layer enclosing the N+ drain region, a parasitic SCR is created to achieve high ESD level. When PESD is close to gate, the turn-on efficiency can be further improved (Vt1: 11.2 V reduced to 7.2 V) by the punch-through path from N+/PESD to PW. The proposed ESD N-LDMOS can sustain over 8KV HBM with low trigger behavior without extra area cost.",
"title": ""
},
{
"docid": "7fdf51a07383b9004882c058743b5726",
"text": "We propose using application specific virtual machines (ASVMs) to reprogram deployed wireless sensor networks. ASVMs provide a way for a user to define an application-specific boundary between virtual code and the VM engine. This allows programs to be very concise (tens to hundreds of bytes), making program installation fast and inexpensive. Additionally, concise programs interpret few instructions, imposing very little interpretation overhead. We evaluate ASVMs against current proposals for network programming runtimes and show that ASVMs are more energy efficient by as much as 20%. We also evaluate ASVMs against hand built TinyOS applications and show that while interpretation imposes a significant execution overhead, the low duty cycles of realistic applications make the actual cost effectively unmeasurable.",
"title": ""
}
] |
scidocsrr
|
bd445e2d446f3b66df4aa5b7e1244e44
|
MathDQN: Solving Arithmetic Word Problems via Deep Reinforcement Learning
|
[
{
"docid": "0007c9ab00e628848a08565daaf4063e",
"text": "We present an active detection model for localizing objects in scenes. The model is class-specific and allows an agent to focus attention on candidate regions for identifying the correct location of a target object. This agent learns to deform a bounding box using simple transformation actions, with the goal of determining the most specific location of target objects following top-down reasoning. The proposed localization agent is trained using deep reinforcement learning, and evaluated on the Pascal VOC 2007 dataset. We show that agents guided by the proposed model are able to localize a single instance of an object after analyzing only between 11 and 25 regions in an image, and obtain the best detection results among systems that do not use object proposals for object localization.",
"title": ""
},
{
"docid": "8fd830d62cceb6780d0baf7eda399fdf",
"text": "Little work from the Natural Language Processing community has targeted the role of quantities in Natural Language Understanding. This paper takes some key steps towards facilitating reasoning about quantities expressed in natural language. We investigate two different tasks of numerical reasoning. First, we consider Quantity Entailment, a new task formulated to understand the role of quantities in general textual inference tasks. Second, we consider the problem of automatically understanding and solving elementary school math word problems. In order to address these quantitative reasoning problems we first develop a computational approach which we show to successfully recognize and normalize textual expressions of quantities. We then use these capabilities to further develop algorithms to assist reasoning in the context of the aforementioned tasks.",
"title": ""
},
{
"docid": "711d8291683bd23e2060b56ce7120f23",
"text": "Solving simple arithmetic word problems is one of the challenges in Natural Language Understanding. This paper presents a novel method to learn to use formulas to solve simple arithmetic word problems. Our system, analyzes each of the sentences to identify the variables and their attributes; and automatically maps this information into a higher level representation. It then uses that representation to recognize the presence of a formula along with its associated variables. An equation is then generated from the formal description of the formula. In the training phase, it learns to score the <formula, variables> pair from the systematically generated higher level representation. It is able to solve 86.07% of the problems in a corpus of standard primary school test questions and beats the state-of-the-art by",
"title": ""
},
{
"docid": "6eeeb343309fc24326ed42b62d5524b1",
"text": "We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any pre-defined sentiment lexica or polarity shifting rules. We also evaluate the model’s ability to predict sentiment distributions on a new dataset based on confessions from the experience project. The dataset consists of personal user stories annotated with multiple labels which, when aggregated, form a multinomial distribution that captures emotional reactions. Our algorithm can more accurately predict distributions over such labels compared to several competitive baselines.",
"title": ""
}
] |
[
{
"docid": "eb4ae32b55af8ed25122640bffafde39",
"text": "Unlike chemical synthesis, biological synthesis of nanoparticles is gaining tremendous interest, and plant extracts are preferred over other biological sources due to their ample availability and wide array of reducing metabolites. In this project, we investigated the reducing potential of aqueous extract of Artemisia absinthium L. for synthesizing silver nanoparticles (AgNPs). Optimal synthesis of AgNPs with desirable physical and biological properties was investigated using ultra violet-visible spectroscopy (UV-vis), dynamic light scattering (DLS), transmission electron microscopy (TEM) and energy-dispersive X-ray analysis (EDX). To determine their appropriate concentrations for AgNP synthesis, two-fold dilutions of silver nitrate (20 to 0.62 mM) and aqueous plant extract (100 to 0.79 mg ml(-1)) were reacted. The results showed that silver nitrate (2mM) and plant extract (10 mg ml(-1)) mixed in different ratios significantly affected size, stability and yield of AgNPs. Extract to AgNO3 ratio of 6:4v/v resulted in the highest conversion efficiency of AgNO3 to AgNPs, with the particles in average size range of less than 100 nm. Furthermore, the direct imaging of synthesized AgNPs by TEM revealed polydispersed particles in the size range of 5 to 20 nm. Similarly, nanoparticles with the characteristic peak of silver were observed with EDX. This study presents a comprehensive investigation of the differential behavior of plant extract and AgNO3 to synthesize biologically stable AgNPs.",
"title": ""
},
{
"docid": "7d197033396c7a55593da79a5a70fa96",
"text": "1. Introduction Fundamental questions about weighting (Fig 1) seem to be ~ most common during the analysis of survey data and I encounter them almost every week. Yet we \"lack a single, reasonably comprehensive, introductory explanation of the process of weighting\" [Sharot 1986], readily available to and usable by survey practitioners, who are looking for simple guidance, and this paper aims to meet some of that need. Some partial treatments have appeared in the survey literature [e.g., Kish 1965], but the topic seldom appears even in the indexes. However, we can expect growing interest, as witnessed by six publications since 1987 listed in the references.",
"title": ""
},
{
"docid": "8c52c67dde20ce0a50ea22aaa4f917a5",
"text": "This paper presents the vision of the Artificial Vision and Intelligent Systems Laboratory (VisLab) on future automated vehicles, ranging from sensor selection up to their extensive testing. VisLab's design choices are explained using the BRAiVE autonomous vehicle prototype as an example. BRAiVE, which is specifically designed to develop, test, and demonstrate advanced safety applications with different automation levels, features a high integration level and a low-cost sensor suite, which are mainly based on vision, as opposed to many other autonomous vehicle implementations based on expensive and invasive sensors. The importance of performing extensive tests to validate the design choices is considered to be a hard requirement, and different tests have been organized, including an intercontinental trip from Italy to China. This paper also presents the test, the main challenges, and the vehicles that have been specifically developed for this test, which was performed by four autonomous vehicles based on BRAiVE's architecture. This paper also includes final remarks on VisLab's perspective on future vehicles' sensor suite.",
"title": ""
},
{
"docid": "328ba61afa9b311a33d557999738864d",
"text": "In this paper, a multiscale convolutional network (MSCN) and graph-partitioning-based method is proposed for accurate segmentation of cervical cytoplasm and nuclei. Specifically, deep learning via the MSCN is explored to extract scale invariant features, and then, segment regions centered at each pixel. The coarse segmentation is refined by an automated graph partitioning method based on the pretrained feature. The texture, shape, and contextual information of the target objects are learned to localize the appearance of distinctive boundary, which is also explored to generate markers to split the touching nuclei. For further refinement of the segmentation, a coarse-to-fine nucleus segmentation framework is developed. The computational complexity of the segmentation is reduced by using superpixel instead of raw pixels. Extensive experimental results demonstrate that the proposed cervical nucleus cell segmentation delivers promising results and outperforms existing methods.",
"title": ""
},
{
"docid": "3f0b6a3238cf60d7e5d23363b2affe95",
"text": "This paper presents a new strategy to control the generated power that comes from the energy sources existing in autonomous and isolated Microgrids. In this particular study, the power system consists of a power electronic converter supplied by a battery bank, which is used to form the AC grid (grid former converter), an energy source based on a wind turbine with its respective power electronic converter (grid supplier converter), and the power consumers (loads). The main objective of this proposed strategy is to control the state of charge of the battery bank limiting the voltage on its terminals by controlling the power generated by the energy sources. This is done without using dump loads or any physical communication among the power electronic converters or the individual energy source controllers. The electrical frequency of the microgrid is used to inform to the power sources and their respective converters the amount of power they need to generate in order to maintain the battery-bank state of charge below or equal its maximum allowable limit. It is proposed a modified droop control to implement this task.",
"title": ""
},
{
"docid": "21ce9ed056f5c54d0626e3a4e8224bcc",
"text": "This paper presents an application of evolutionary fuzzy classifier design to a road accident data analysis. A fuzzy classifier evolved by the genetic programming was used to learn the labeling of data in a real world road accident data set. The symbolic classifier was inspected in order to select important features and the relations among them. Selected features provide a feedback for traffic management authorities that can exploit the knowledge to improve road safety and mitigate the severity of traffic accidents.",
"title": ""
},
{
"docid": "2a225a33dc4d8cd08d0ae4a18d8b267c",
"text": "Support Vector Machines is a powerful methodology for solving problems in nonlinear classification, function estimation and density estimation which has also led recently to many new developments in kernel based learning in general. In these methods one solves convex optimization problems, typically quadratic programs. We focus on Least Squares Support Vector Machines which are reformulations to standard SVMs that lead to solving linear KKT systems. Least squares support vector machines are closely related to regularization networks and Gaussian processes but additionally emphasize and exploit primaldual interpretations from optimization theory. In view of interior point algorithms such LS-SVM KKT systems can be considered as a core problem. Where needed the obtained solutions can be robustified and/or sparsified. As an alternative to a top-down choice of the cost function, methods from robust statistics are employed in a bottom-up fashion for further improving the estimates. We explain the natural links between LS-SVM classifiers and kernel Fisher discriminant analysis. The framework is further extended towards unsupervised learning by considering PCA analysis and its kernel version as a one-class modelling problem. This leads to new primal-dual support vector machine formulations for kernel PCA and kernel canonical correlation analysis. Furthermore, LS-SVM formulations are mentioned towards recurrent networks and control, thereby extending the methods from static to dynamic problems. In general, support vector machines may pose heavy computational challenges for large data sets. For this purpose, we propose a method of Fixed Size LS-SVM where the estimation is done in the primal space in relation to a Nyström sampling with active selection of support vectors and we discuss extensions to committee networks. The methods will be illustrated by several benchmark and real-life applications.",
"title": ""
},
{
"docid": "2da84ca7d7db508a6f9a443f2dbae7c1",
"text": "This paper proposes a computationally efficient approach to detecting objects natively in 3D point clouds using convolutional neural networks (CNNs). In particular, this is achieved by leveraging a feature-centric voting scheme to implement novel convolutional layers which explicitly exploit the sparsity encountered in the input. To this end, we examine the trade-off between accuracy and speed for different architectures and additionally propose to use an L1 penalty on the filter activations to further encourage sparsity in the intermediate representations. To the best of our knowledge, this is the first work to propose sparse convolutional layers and L1 regularisation for efficient large-scale processing of 3D data. We demonstrate the efficacy of our approach on the KITTI object detection benchmark and show that VoteSDeep models with as few as three layers outperform the previous state of the art in both laser and laser-vision based approaches by margins of up to 40% while remaining highly competitive in terms of processing time.",
"title": ""
},
{
"docid": "059463f31fcb83c346f96ed8345ff9a6",
"text": "Cancer incidence is projected to increase in the future and an effectual preventive strategy is required to face this challenge. Alteration of dietary habits is potentially an effective approach for reducing cancer risk. Assessment of biological effects of a specific food or bioactive component that is linked to cancer and prediction of individual susceptibility as a function of nutrient-nutrient interactions and genetics is an essential element to evaluate the beneficiaries of dietary interventions. In general, the use of biomarkers to evaluate individuals susceptibilities to cancer must be easily accessible and reliable. However, the response of individuals to bioactive food components depends not only on the effective concentration of the bioactive food components, but also on the target tissues. This fact makes the response of individuals to food components vary from one individual to another. Nutrigenomics focuses on the understanding of interactions between genes and diet in an individual and how the response to bioactive food components is influenced by an individual's genes. Nutrients have shown to affect gene expression and to induce changes in DNA and protein molecules. Nutrigenomic approaches provide an opportunity to study how gene expression is regulated by nutrients and how nutrition affects gene variations and epigenetic events. Finding the components involved in interactions between genes and diet in an individual can potentially help identify target molecules important in preventing and/or reducing the symptoms of cancer.",
"title": ""
},
{
"docid": "9dceccb7b171927a5cba5a16fd9d76c6",
"text": "This paper involved developing two (Type I and Type II) equal-split Wilkinson power dividers (WPDs). The Type I divider can use two short uniform-impedance transmission lines, one resistor, one capacitor, and two quarter-wavelength (λ/4) transformers in its circuit. Compared with the conventional equal-split WPD, the proposed Type I divider can relax the two λ/4 transformers and the output ports layout restrictions of the conventional WPD. To eliminate the number of impedance transformers, the proposed Type II divider requires only one impedance transformer attaining the optimal matching design and a compact size. A compact four-way equal-split WPD based on the proposed Type I and Type II dividers was also developed, facilitating a simple layout, and reducing the circuit size. Regarding the divider, to obtain favorable selectivity and isolation performance levels, two Butterworth filter transformers were integrated in the proposed Type I divider to perform filter response and power split functions. Finally, a single Butterworth filter transformer was integrated in the proposed Type II divider to demonstrate a compact filtering WPD.",
"title": ""
},
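For reference, the conventional equal-split Wilkinson divider that the Type I and Type II variants above modify uses two quarter-wave branches of impedance sqrt(2)*Z0 and an isolation resistor of 2*Z0 between the output ports. A small sketch of those textbook design values (the operating frequency and effective permittivity below are placeholder assumptions, not the paper's design point):

```python
import math

def wilkinson_equal_split(z0=50.0, f0=1.0e9, eps_eff=1.0, c=2.998e8):
    # Textbook equal-split Wilkinson divider: two quarter-wave branches of
    # impedance sqrt(2)*Z0 and an isolation resistor of 2*Z0 across the outputs.
    z_branch = math.sqrt(2.0) * z0                        # ~70.7 ohm in a 50-ohm system
    r_isolation = 2.0 * z0                                # 100 ohm
    quarter_wave = c / (f0 * math.sqrt(eps_eff)) / 4.0    # physical length of lambda/4
    return z_branch, r_isolation, quarter_wave

zb, r, l = wilkinson_equal_split(z0=50.0, f0=2.4e9, eps_eff=2.2)
print(f"branch impedance {zb:.1f} ohm, isolation resistor {r:.0f} ohm, lambda/4 = {l*1000:.1f} mm")
```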
{
"docid": "5000e96519cf477e6ab2ea35fd181046",
"text": "When computing descriptors of image data, the type of information that can be extracted may be strongly dependent on the scales at which the image operators are applied. This article presents a systematic methodology for addressing this problem. A mechanism is presented for automatic selection of scale levels when detecting one-dimensional image features, such as edges and ridges. A novel concept of a scale-space edge is introduced, defined as a connected set of points in scale-space at which: (i) the gradient magnitude assumes a local maximum in the gradient direction, and (ii) a normalized measure of the strength of the edge response is locally maximal over scales. An important consequence of this definition is that it allows the scale levels to vary along the edge. Two specific measures of edge strength are analyzed in detail, the gradient magnitude and a differential expression derived from the third-order derivative in the gradient direction. For a certain way of normalizing these differential descriptors, by expressing them in terms of so-called γ-normalized derivatives, an immediate consequence of this definition is that the edge detector will adapt its scale levels to the local image structure. Specifically, sharp edges will be detected at fine scales so as to reduce the shape distortions due to scale-space smoothing, whereas sufficiently coarse scales will be selected at diffuse edges, such that an edge model is a valid abstraction of the intensity profile across the edge. Since the scale-space edge is defined from the intersection of two zero-crossing surfaces in scale-space, the edges will by definition form closed curves. This simplifies selection of salient edges, and a novel significance measure is proposed, by integrating the edge strength along the edge. Moreover, the scale information associated with each edge provides useful clues to the physical nature of the edge. With just slight modifications, similar ideas can be used for formulating ridge detectors with automatic selection, having the characteristic property that the selected scales on a scale-space ridge instead reflect the width of the ridge. It is shown how the methodology can be implemented in terms of straightforward visual front-end operations, and the validity of the approach is supported by theoretical analysis as well as experiments on real-world and synthetic data.",
"title": ""
},
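The scale-selection idea above can be approximated numerically: smooth the image with Gaussians of increasing variance t, form the gamma-normalized gradient magnitude t^(gamma/2) * |grad L|, and pick per pixel the scale at which this response peaks. A rough sketch, assuming gamma = 1/2 and a synthetic step edge (both are assumptions for illustration, not the paper's experiments):

```python
import numpy as np
from scipy import ndimage

def scale_normalized_gradient(image, scales, gamma=0.5):
    # For each scale t (Gaussian variance), compute the gamma-normalized
    # gradient magnitude t**(gamma/2) * |grad L| of the smoothed image L.
    responses = []
    for t in scales:
        sigma = np.sqrt(t)
        Ly = ndimage.gaussian_filter(image, sigma, order=(1, 0))  # derivative along rows (y)
        Lx = ndimage.gaussian_filter(image, sigma, order=(0, 1))  # derivative along columns (x)
        responses.append((t ** (gamma / 2.0)) * np.hypot(Lx, Ly))
    return np.stack(responses)   # shape: (num_scales, H, W)

# Usage: per-pixel scale selection = argmax of the normalized response over scales.
image = np.zeros((64, 64)); image[:, 32:] = 1.0            # a synthetic step edge
scales = [1.0, 2.0, 4.0, 8.0, 16.0]
resp = scale_normalized_gradient(image, scales)
selected = np.argmax(resp, axis=0)                         # index of the selected scale per pixel
print(selected[32, 28:36])
```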
{
"docid": "7ff084619d05d21975ff41748a260418",
"text": "In the development of speech recognition algorithms, it is important to know whether any apparent difference in performance of algorithms is statistically significant, yet this issue is almost always overlooked. We present two simple tests for deciding whether the difference in error-rates between two algorithms tested on the same data set is statistically significant. The first (McNemar’s test) requires the errors made by an algorithm to be independent events and is most appropriate for isolated word algorithms. The second (a matched-pairs test) can be used even when errors are not independent events and is more appropriate for connected speech.",
"title": ""
},
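McNemar's test as described above only looks at the tokens on which the two recognizers disagree. A small sketch of the exact (binomial) form of the test, with made-up error indicators standing in for real recognition results:

```python
from math import comb

def mcnemar_exact(errors_a, errors_b):
    # errors_a[i] / errors_b[i] are booleans: did system A / system B err on token i?
    # Only discordant tokens (one system right, the other wrong) enter the statistic.
    n01 = sum(1 for a, b in zip(errors_a, errors_b) if a and not b)
    n10 = sum(1 for a, b in zip(errors_a, errors_b) if b and not a)
    n = n01 + n10
    if n == 0:
        return 1.0
    k = min(n01, n10)
    # Two-sided exact binomial test under H0: discordant errors equally likely for A and B.
    tail = sum(comb(n, i) for i in range(k + 1)) / 2.0 ** n
    return min(1.0, 2.0 * tail)

# Usage with synthetic per-token error indicators from two recognizers.
a = [True] * 30 + [False] * 70
b = [True] * 15 + [False] * 85
print(mcnemar_exact(a, b))   # a small p-value suggests a real performance difference
```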
{
"docid": "a0903fc562ccd9dfe708afbef43009cd",
"text": "A stacked field-effect transistor (FET) linear cellular antenna switch adopting a transistor layout with odd-symmetrical drain-source metal wiring and an extremely low-power biasing strategy has been implemented in silicon-on-insulator CMOS technology. A multi-fingered switch-FET device with odd-symmetrical drain-source metal wiring is adopted herein to improve the insertion loss (IL) and isolation of the antenna switch by minimizing the product of the on-resistance and off-capacitance. To remove the spurious emission and digital switching noise problems from the antenna switch driver circuits, an extremely low-power biasing scheme driven by only positive bias voltage has been devised. The proposed antenna switch that employs the new biasing scheme shows almost the same power-handling capability and harmonic distortion as a conventional version based on a negative biasing scheme, while greatly reducing long start-up time and wasteful active current consumption in a stand-by mode of the conventional antenna switch driver circuits. The implemented single-pole four-throw antenna switch is perfectly capable of handling a high power signal up to +35 dBm with suitably low IL of less than 1 dB, and shows second- and third-order harmonic distortion of less than -45 dBm when a 1-GHz RF signal with a power of +35 dBm and a 2-GHz RF signal with a power of +33 dBm are applied. The proposed antenna switch consumes almost no static power.",
"title": ""
},
{
"docid": "216f23db607aabee32907bda19012b8e",
"text": "Stereo matching is one of the key technologies in stereo vision system due to its ultra high data bandwidth requirement, heavy memory accessing and algorithm complexity. To speed up stereo matching, various algorithms are implemented by different software and hardware processing methods. This paper presents a survey of stereo matching software and hardware implementation research status based on local and global algorithm analysis. Based on different processing platforms, including CPU, DSP, GPU, FPGA and ASIC, analysis are made on software or hardware realization performance, which is represented by frame rate, efficiency represented by MDES, and processing quality represented by error rate. Among them, GPU, FPGA and ASIC implementations are suitable for real-time embedded stereo matching applications, because they are low power consumption, low cost, and have high performance. Finally, further stereo matching optimization technologies are pointed out, including both algorithm and parallelism optimization for data bandwidth reduction and memory storage strategy.",
"title": ""
},
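As a concrete instance of the local matching algorithms surveyed above, a brute-force SAD block matcher scans candidate disparities and keeps, per pixel, the shift with the lowest window-aggregated cost. A simplified sketch (window size, disparity range and border handling are arbitrary choices):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def block_matching_disparity(left, right, max_disp=32, window=5):
    # Brute-force local stereo matching: for each left-image pixel, pick the
    # horizontal shift (disparity) that minimises the window-aggregated
    # sum of absolute differences (SAD) against the right image.
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    h, w = left.shape
    best_cost = np.full((h, w), np.inf, dtype=np.float32)
    disparity = np.zeros((h, w), dtype=np.int32)
    for d in range(max_disp):
        shifted = np.empty_like(right)
        shifted[:, d:] = right[:, :w - d]      # right image shifted by d pixels
        shifted[:, :d] = right[:, :1]          # crude border handling
        sad = uniform_filter(np.abs(left - shifted), size=window)  # box-filter aggregation
        better = sad < best_cost
        best_cost[better] = sad[better]
        disparity[better] = d
    return disparity

# Usage: disparity = block_matching_disparity(left_gray, right_gray) on a rectified pair.
```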
{
"docid": "7d42d3d197a4d62e1b4c0f3c08be14a9",
"text": "Links between issue reports and their corresponding commits in version control systems are often missing. However, these links are important for measuring the quality of a software system, predicting defects, and many other tasks. Several approaches have been designed to solve this problem by automatically linking bug reports to source code commits via comparison of textual information in commit messages and bug reports. Yet, the effectiveness of these techniques is oftentimes suboptimal when commit messages are empty or contain minimum information; this particular problem makes the process of recovering traceability links between commits and bug reports particularly challenging. In this work, we aim at improving the effectiveness of existing bug linking techniques by utilizing rich contextual information. We rely on a recently proposed approach, namely ChangeScribe, which generates commit messages containing rich contextual information by using code summarization techniques. Our approach then extracts features from these automatically generated commit messages and bug reports, and inputs them into a classification technique that creates a discriminative model used to predict if a link exists between a commit message and a bug report. We compared our approach, coined as RCLinker (Rich Context Linker), to MLink, which is an existing state-of-the-art bug linking approach. Our experiment results on bug reports from six software projects show that RCLinker outperforms MLink in terms of F-measure by 138.66%.",
"title": ""
},
{
"docid": "99d3354d91a330e7b3bd3cc6204251ca",
"text": "PHACE syndrome is a neurocutaneous disorder characterized by large cervicofacial infantile hemangiomas and associated anomalies: posterior fossa brain malformation, hemangioma, arterial cerebrovascular anomalies, coarctation of the aorta and cardiac defects, and eye/endocrine abnormalities of the brain. When ventral developmental defects (sternal clefting or supraumbilical raphe) are present the condition is termed PHACE. In this report, we describe three PHACE cases that presented unique features (affecting one of the organ systems described for this syndrome) that have not been described previously. In the first case, a definitive PHACE association, the patient presented with an ipsilateral mesenteric lymphatic malformation, at the age of 14 years. In the second case, an anomaly of the posterior segment of the eye, not mentioned before in PHACE literature, a retinoblastoma, has been described. Specific chemotherapy avoided enucleation. And, in the third case, the child presented with an unusual midline frontal bone cleft, corresponding to Tessier 14 cleft. Two patients' hemangiomas responded well to propranolol therapy. The first one was followed and treated in the pre-propranolol era and had a moderate response to corticoids and interferon.",
"title": ""
},
{
"docid": "f779bf251b3d066e594867680e080ef4",
"text": "Machine Translation is area of research since six decades. It is gaining popularity since last decade due to better computational facilities available at personal computer systems. This paper presents different Machine Translation system where Sanskrit is involved as source, target or key support language. Researchers employ various techniques like Rule based, Corpus based, Direct for machine translation. The main aim to focus on Sanskrit in Machine Translation in this paper is to uncover the language suitability, its morphology and employ appropriate MT techniques.",
"title": ""
},
{
"docid": "5e681caab6212e3f82d482f2ac332a14",
"text": "Task-aware flow schedulers collect task information across the data center to optimize task-level performance. However, the majority of the tasks, which generate short flows and are called tiny tasks, have been largely overlooked by current schedulers. The large number of tiny tasks brings significant overhead to the centralized schedulers, while the existing decentralized schedulers are too complex to fit in commodity switches. In this paper we present OPTAS, a lightweight, commodity-switch-compatible scheduling solution that efficiently monitors and schedules flows for tiny tasks with low overhead. OPTAS monitors system calls and buffer footprints to recognize the tiny tasks, and assigns them with higher priorities than larger ones. The tiny tasks are then transferred in a FIFO manner by adjusting two attributes, namely, the window size and round trip time, of TCP. We have implemented OPTAS as a Linux kernel module, and experiments on our 37-server testbed show that OPTAS is at least 2.2× faster than fair sharing, and 1.2× faster than only assigning tiny tasks with the highest priority.",
"title": ""
},
{
"docid": "08df6cd44a26be6c4cc96082631a0e6e",
"text": "In the natural habitat of our ancestors, physical activity was not a preventive intervention but a matter of survival. In this hostile environment with scarce food and ubiquitous dangers, human genes were selected to optimize aerobic metabolic pathways and conserve energy for potential future famines.1 Cardiac and vascular functions were continuously challenged by intermittent bouts of high-intensity physical activity and adapted to meet the metabolic demands of the working skeletal muscle under these conditions. When speaking about molecular cardiovascular effects of exercise, we should keep in mind that most of the changes from baseline are probably a return to normal values. The statistical average of physical activity in Western societies is so much below the levels normal for our genetic background that sedentary lifestyle in combination with excess food intake has surpassed smoking as the No. 1 preventable cause of death in the United States.2 Physical activity has been shown to have beneficial effects on glucose metabolism, skeletal muscle function, ventilator muscle strength, bone stability, locomotor coordination, psychological well-being, and other organ functions. However, in the context of this review, we will focus entirely on important molecular effects on the cardiovascular system. The aim of this review is to provide a bird’s-eye view on what is known and unknown about the physiological and biochemical mechanisms involved in mediating exercise-induced cardiovascular effects. The resulting map is surprisingly detailed in some areas (ie, endothelial function), whereas other areas, such as direct cardiac training effects in heart failure, are still incompletely understood. For practical purposes, we have decided to use primarily an anatomic approach to present key data on exercise effects on cardiac and vascular function. For the cardiac effects, the left ventricle and the cardiac valves will be described separately; for the vascular effects, we will follow the arterial vascular tree, addressing changes in the aorta, the large conduit arteries, the resistance vessels, and the microcirculation before turning our attention toward the venous and the pulmonary circulation (Figure 1). Cardiac Effects of Exercise Left Ventricular Myocardium and Ventricular Arrhythmias The maintenance of left ventricular (LV) mass and function depends on regular exercise. Prolonged periods of physical inactivity, as studied in bed rest trials, lead to significant reductions in LV mass and impaired cardiac compliance, resulting in reduced upright stroke volume and orthostatic intolerance.3 In contrast, a group of bed rest subjects randomized to regular supine lower-body negative pressure treadmill exercise showed an increase in LV mass and a preserved LV stoke volume.4 In previously sedentary healthy subjects, a 12-week moderate exercise program induced a mild cardiac hypertrophic response as measured by cardiac magnetic resonance imaging.5 These findings highlight the plasticity of LV mass and function in relation to the current level of physical activity.",
"title": ""
},
{
"docid": "ec5ade0dd3aee92102934de27beb6b4f",
"text": "This paper covers the whole process of developing an Augmented Reality Stereoscopig Render Engine for the Oculus Rift. To capture the real world in form of a camera stream, two cameras with fish-eye lenses had to be installed on the Oculus Rift DK1 hardware. The idea was inspired by Steptoe [1]. After the introduction, a theoretical part covers all the most neccessary elements to achieve an AR System for the Oculus Rift, following the implementation part where the code from the AR Stereo Engine is explained in more detail. A short conclusion section shows some results, reflects some experiences and in the final chapter some future works will be discussed. The project can be accessed via the git repository https: // github. com/ MaXvanHeLL/ ARift. git .",
"title": ""
}
] |
scidocsrr
|
3875760461b1998be08f4c6af4a58c1f
|
Impact of digital control in power electronics
|
[
{
"docid": "1648a759d2487177af4b5d62407fd6cd",
"text": "This paper discusses the presence of steady-state limit cycles in digitally controlled pulse-width modulation (PWM) converters, and suggests conditions on the control law and the quantization resolution for their elimination. It then introduces single-phase and multi-phase controlled digital dither as a means of increasing the effective resolution of digital PWM (DPWM) modules, allowing for the use of low resolution DPWM units in high regulation accuracy applications. Bounds on the number of bits of dither that can be used in a particular converter are derived.",
"title": ""
}
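The digital-dither idea above can be illustrated with a first-order (sigma-delta-like) pattern: the sub-LSB part of a high-resolution duty command is spread over 2^m consecutive switching cycles so that the cycle-averaged duty recovers the extra bits. A sketch of single-phase dither generation (the bit widths and the duty value are placeholders; the paper also treats multi-phase dither and bounds on the number of usable dither bits):

```python
def dither_sequence(duty_command, hw_bits=8, dither_bits=3):
    # duty_command carries (hw_bits + dither_bits) bits of resolution, but the
    # hardware DPWM can only realise hw_bits levels.  Over 2**dither_bits cycles
    # we alternate between two adjacent hardware levels so that the average duty
    # equals the high-resolution command.
    assert duty_command < (1 << (hw_bits + dither_bits))
    base = duty_command >> dither_bits               # truncated hardware-level duty
    frac = duty_command & ((1 << dither_bits) - 1)   # sub-LSB remainder
    pattern, acc = [], 0
    for _ in range(1 << dither_bits):
        acc += frac
        if acc >= (1 << dither_bits):
            acc -= 1 << dither_bits
            pattern.append(base + 1)
        else:
            pattern.append(base)
    return pattern

seq = dither_sequence(duty_command=0b10100110_101, hw_bits=8, dither_bits=3)
print(seq, sum(seq) / len(seq))   # the average recovers the extra 3 bits of resolution
```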
] |
[
{
"docid": "fc9061348b46fc1bf7039fa5efcbcea1",
"text": "We propose that a leadership identity is coconstructed in organizations when individuals claim and grant leader and follower identities in their social interactions. Through this claiming-granting process, individuals internalize an identity as leader or follower, and those identities become relationally recognized through reciprocal role adoption and collectively endorsed within the organizational context. We specify the dynamic nature of this process, antecedents to claiming and granting, and an agenda for research on leadership identity and development.",
"title": ""
},
{
"docid": "e3c77ede3d63708b138b6aa240fea57b",
"text": "We numerically investigated 3-dimensional (3D) sub-wavelength structured metallic nanohole films with various thicknesses using wavelength interrogation technique. The reflectivity and full-width at half maximum (FWHM) of the localized surface plasmon resonance (LSPR) spectra was calculated using finite-difference time domain (FDTD) method. Results showed that a 100nm-thick silver nanohole gave higher reflectivity of 92% at the resonance wavelength of 644nm. Silver, copper and aluminum structured thin films showed only a small difference in the reflectivity spectra for various metallic film thicknesses whereas gold thin films showed a reflectivity decrease as the film thickness was increased. However, all four types of metallic nanohole films exhibited increment in FWHM (broader curve) and the resonance wavelength was red-shifted as the film thicknesses were decreased.",
"title": ""
},
{
"docid": "c4282486dad6f0fef06964bd3fa45272",
"text": "In recent years, deep neural models have been widely adopted for text matching tasks, such as question answering and information retrieval, showing improved performance as compared with previous methods. In this paper, we introduce the MatchZoo toolkit that aims to facilitate the designing, comparing and sharing of deep text matching models. Specically, the toolkit provides a unied data preparation module for dierent text matching problems, a exible layer-based model construction process, and a variety of training objectives and evaluation metrics. In addition, the toolkit has implemented two schools of representative deep text matching models, namely representation-focused models and interactionfocused models. Finally, users can easily modify existing models, create and share their own models for text matching in MatchZoo.",
"title": ""
},
{
"docid": "a1fe9d395292fb3e4283f320022cacc7",
"text": "Hepatitis A is a common disease in developing countries and Albania has a high prevalence of this disease associated to young age. In spite of the occurrence of a unique serotype there are different genotypes classified from I to VII. Genotype characterisation of HAV isolates circulating in Albania has been undertaken, as well as the study of the occurrence of antigenic variants in the proteins VP3 and VP1. To evaluate the genetic variability of the Albanian hepatitis A virus (HAV) isolates, samples were collected from 12 different cities, and the VP1/2A junction amplified and sequenced. These sequences were aligned and a phylogenetic analysis performed. Additionally, the amino half sequence of the protein VP3 and the complete sequence of the VP1 was determined. Anti-HAV IgM were present in 66.2% of all the sera. Fifty HAV isolates were amplified and the analysis revealed that all the isolates were sub-genotype IA with only limited mutations. When the deduced amino acid sequences were obtained, the alignment showed only two amino acids substitutions at positions 22 and 34 of the 2A protein. A higher genomic stability of the VP1/2A region, in contrast with what occurs in other parts of the world could be observed, indicating high endemicity of HAV in Albania. In addition, two potential antigenic variants were detected. The first at position 46 of VP3 in seven isolates and the second at position 23 of VP1 in six isolates.",
"title": ""
},
{
"docid": "19d6ad18011815602854685211847c52",
"text": "This paper presents a method for learning an And-Or model to represent context and occlusion for car detection and viewpoint estimation. The learned And-Or model represents car-to-car context and occlusion configurations at three levels: (i) spatially-aligned cars, (ii) single car under different occlusion configurations, and (iii) a small number of parts. The And-Or model embeds a grammar for representing large structural and appearance variations in a reconfigurable hierarchy. The learning process consists of two stages in a weakly supervised way (i.e., only bounding boxes of single cars are annotated). First, the structure of the And-Or model is learned with three components: (a) mining multi-car contextual patterns based on layouts of annotated single car bounding boxes, (b) mining occlusion configurations between single cars, and (c) learning different combinations of part visibility based on CAD simulations. The And-Or model is organized in a directed and acyclic graph which can be inferred by Dynamic Programming. Second, the model parameters (for appearance, deformation and bias) are jointly trained using Weak-Label Structural SVM. In experiments, we test our model on four car detection datasets-the KITTI dataset [1] , the PASCAL VOC2007 car dataset [2] , and two self-collected car datasets, namely the Street-Parking car dataset and the Parking-Lot car dataset, and three datasets for car viewpoint estimation-the PASCAL VOC2006 car dataset [2] , the 3D car dataset [3] , and the PASCAL3D+ car dataset [4] . Compared with state-of-the-art variants of deformable part-based models and other methods, our model achieves significant improvement consistently on the four detection datasets, and comparable performance on car viewpoint estimation.",
"title": ""
},
{
"docid": "2496fa63868717ce2ed56c1777c4b0ed",
"text": "Person re-identification (reID) is an important task that requires to retrieve a person’s images from an image dataset, given one image of the person of interest. For learning robust person features, the pose variation of person images is one of the key challenges. Existing works targeting the problem either perform human alignment, or learn human-region-based representations. Extra pose information and computational cost is generally required for inference. To solve this issue, a Feature Distilling Generative Adversarial Network (FD-GAN) is proposed for learning identity-related and pose-unrelated representations. It is a novel framework based on a Siamese structure with multiple novel discriminators on human poses and identities. In addition to the discriminators, a novel same-pose loss is also integrated, which requires appearance of a same person’s generated images to be similar. After learning pose-unrelated person features with pose guidance, no auxiliary pose information and additional computational cost is required during testing. Our proposed FD-GAN achieves state-of-the-art performance on three person reID datasets, which demonstrates that the effectiveness and robust feature distilling capability of the proposed FD-GAN. ‡‡",
"title": ""
},
{
"docid": "338e037f4ec9f6215f48843b9d03f103",
"text": "Sparse deep neural networks(DNNs) are efficient in both memory and compute when compared to dense DNNs. But due to irregularity in computation of sparse DNNs, their efficiencies are much lower than that of dense DNNs on general purpose hardwares. This leads to poor/no performance benefits for sparse DNNs. Performance issue for sparse DNNs can be alleviated by bringing structure to the sparsity and leveraging it for improving runtime efficiency. But such structural constraints often lead to sparse models with suboptimal accuracies. In this work, we jointly address both accuracy and performance of sparse DNNs using our proposed class of neural networks called HBsNN (Hierarchical Block sparse Neural Networks).",
"title": ""
},
{
"docid": "fe08f3e1dc4fe2d71059b483c8532e88",
"text": "Digital asset management (DAM) has increasing benefits in booming global Internet economy, but it is still a great challenge for providing an effective way to manage, store, ingest, organize and retrieve digital asset. To do it, we present a new digital asset management platform, called DAM-Chain, with Transaction-based Access Control (TBAC) which integrates the distribution ABAC model and the blockchain technology. In this platform, the ABAC provides flexible and diverse authorization mechanisms for digital asset escrowed into blockchain while the blockchain's transactions serve as verifiable and traceable medium of access request procedure. We also present four types of transactions to describe the TBAC access control procedure, and provide the algorithms of these transactions corresponding to subject registration, object escrowing and publication, access request and grant. By maximizing the strengths of both ABAC and blockchain, this platform can support flexible and diverse permission management, as well as verifiable and transparent access authorization process in an open decentralized environment.",
"title": ""
},
{
"docid": "c7de7b159579b5c8668f2a072577322c",
"text": "This paper presents a method for effectively using unlabeled sequential data in the learning of hidden Markov models (HMMs). With the conventional approach, class labels for unlabeled data are assigned deterministically by HMMs learned from labeled data. Such labeling often becomes unreliable when the number of labeled data is small. We propose an extended Baum-Welch (EBW) algorithm in which the labeling is undertaken probabilistically and iteratively so that the labeled and unlabeled data likelihoods are improved. Unlike the conventional approach, the EBW algorithm guarantees convergence to a local maximum of the likelihood. Experimental results on gesture data and speech data show that when labeled training data are scarce, by using unlabeled data, the EBW algorithm improves the classification performance of HMMs more robustly than the conventional naive labeling (NL) approach. keywords Unlabeled data, sequential data, hidden Markov models, extended Baum-Welch algorithm.",
"title": ""
},
{
"docid": "41d32df9d58f9c38f75010c87c0c3327",
"text": "Evidence from many countries in recent years suggests that collateral values and recovery rates on corporate defaults can be volatile and, moreover, that they tend to go down just when the number of defaults goes up in economic downturns. This link between recovery rates and default rates has traditionally been neglected by credit risk models, as most of them focused on default risk and adopted static loss assumptions, treating the recovery rate either as a constant parameter or as a stochastic variable independent from the probability of default. This traditional focus on default analysis has been partly reversed by the recent significant increase in the number of studies dedicated to the subject of recovery rate estimation and the relationship between default and recovery rates. This paper presents a detailed review of the way credit risk models, developed during the last thirty years, treat the recovery rate and, more specifically, its relationship with the probability of default of an obligor. We also review the efforts by rating agencies to formally incorporate recovery ratings into their assessment of corporate loan and bond credit risk and the recent efforts by the Basel Committee on Banking Supervision to consider “downturn LGD” in their suggested requirements under Basel II. Recent empirical evidence concerning these issues and the latest data on high-yield bond and leverage loan defaults is also presented and discussed.",
"title": ""
},
{
"docid": "26439bd538c8f0b5d6fba3140e609aab",
"text": "A planar antenna with a broadband feeding structure is presented and analyzed for ultrawideband applications. The proposed antenna consists of a suspended radiator fed by an n-shape microstrip feed. Study shows that this antenna achieves an impedance bandwidth from 3.1-5.1 GHz (48%) for a reflection of coefficient of iotaS11iota < -10 dB, and an average gain of 7.7 dBi. Stable boresight radiation patterns are achieved across the entire operating frequency band, by suppressing the high order mode resonances. This design exhibits good mechanical tolerance and manufacturability.",
"title": ""
},
{
"docid": "6d26012bd529735410477c9f389bbf73",
"text": "Most current planners assume complete domain models and focus on generating correct plans. Unfortunately, domain modeling is a laborious and error-prone task, thus real world agents have to plan with incomplete domain models. While domain experts cannot guarantee completeness, often they are able to circumscribe the incompleteness of the model by providing annotations as to which parts of the domain model may be incomplete. In this paper, we study planning problems with incomplete domain models where the annotations specify possible preconditions and effects of actions. We show that the problem of assessing the quality of a plan, or its plan robustness, is #P -complete, establishing its equivalence with the weighted model counting problems. We present two approaches to synthesizing robust plans. While the method based on the compilation to conformant probabilistic planning is much intuitive, its performance appears to be limited to only small problem instances. Our second approach based on stochastic heuristic search works well for much larger problems. It aims to use the robustness measure directly for estimating heuristic distance, which is then used to guide the search. Our planning system, PISA, outperforms a state-of-the-art planner handling incomplete domain models in most of the tested domains, both in terms of plan quality and planning time. Finally, we also present an extension of PISA called CPISA that is able to exploit the available of past successful plan traces to both improve the robustness of the synthesized plans and reduce the domain modeling burden.",
"title": ""
},
{
"docid": "20b28dd4a0717add4e032976a7946109",
"text": "In planning an s-curve speed profile for a computer numerical control (CNC) machine, centripetal acceleration and its derivative have to be considered. In a CNC machine, these quantities dictate how much voltage and current should be applied to servo motor windings. In this paper, the necessity of considering centripetal jerk in speed profile generation especially in the look-ahead mode is explained. It is demonstrated that the magnitude of centripetal jerk is proportional to the curvature derivative of the path known as \"sharpness\". It is also explained that a proper limited jerk motion is only possible when a G2-continuous machining path is planned. Then using a simplified mathematical representation of clothoids, a novel method for approximating a given path with a sequence of clothoid segments is proposed. Using this method, a semi-parallel G2-continuous path with adjustable deviation from the original shape for a sample machining contour is generated. Maximum permissible feed rate for the generated path is also calculated.",
"title": ""
},
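The relation stated above, that centripetal jerk tracks the curvature derivative (the sharpness), is easy to check for a clothoid, where curvature grows linearly with arc length. A sketch under the simplifying assumptions of constant feed rate and neglecting the rotation of the normal direction (the numbers are placeholders):

```python
def clothoid_kinematics(v, sigma, s):
    # Constant-speed traversal of a clothoid whose curvature grows linearly with
    # arc length: kappa(s) = sigma * s, where sigma is the sharpness.
    kappa = sigma * s
    a_c = v ** 2 * kappa          # centripetal acceleration magnitude
    j_c = v ** 3 * sigma          # d(a_c)/dt at constant speed: proportional to sharpness
    return kappa, a_c, j_c

def max_feedrate(sigma, jerk_limit):
    # Rearranging j_c = v**3 * sigma gives the largest constant feed rate that
    # keeps the centripetal jerk within a machine limit.
    return (jerk_limit / sigma) ** (1.0 / 3.0)

print(clothoid_kinematics(v=0.2, sigma=50.0, s=0.01))   # SI units assumed
print(max_feedrate(sigma=50.0, jerk_limit=5.0))
```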
{
"docid": "c58fc1a572d5120e14eb6e501a50b8aa",
"text": "475 Abstract— In this paper a dc-dc buck-boost converter is modeled and controlled using sliding mode technique. First the buck-boost converter is modeled and dynamic equations describing the converter are derived and sliding mode controller is designed. The robustness of the converter system is tested against step load changes and input voltage variations. Matlab/Simulink is used for the simulations. The simulation results are presented..",
"title": ""
},
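A rough feel for sliding mode control of a buck-boost stage can be had from a hysteretic current-mode law: the inductor current is forced onto a surface shaped by the output-voltage error. The sketch below is a generic illustration, not the controller designed in the paper; the component values, the sliding surface and the gains are all assumptions:

```python
import numpy as np

def simulate_smc_buckboost(Vin=12.0, Vref=24.0, L=220e-6, C=470e-6, R=20.0,
                           K=0.5, hyst=0.05, dt=1e-7, t_end=0.02):
    # States: inductor current iL and (magnitude of) output voltage vo of an
    # inverting buck-boost converter, switched by a hysteretic sliding-mode law.
    iL, vo, u = 0.0, 0.0, 1
    iL_nom = Vref * (Vref + Vin) / (R * Vin)       # nominal steady-state inductor current
    log = []
    for k in range(int(t_end / dt)):
        # Sliding surface: inductor current tracks a reference shaped by the
        # output-voltage error (a simple cascaded choice, not the paper's law).
        s = iL - (iL_nom + K * (Vref - vo))
        if s > hyst:
            u = 0
        elif s < -hyst:
            u = 1
        # Converter dynamics (continuous conduction assumed; crude clamp for DCM).
        diL = (u * Vin - (1 - u) * vo) / L
        dvo = ((1 - u) * iL - vo / R) / C
        iL = max(0.0, iL + diL * dt)
        vo = vo + dvo * dt
        if k % 1000 == 0:
            log.append((k * dt, vo, iL))
    return np.array(log)

trace = simulate_smc_buckboost()
print(trace[-1])   # the output voltage should settle near Vref
```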
{
"docid": "5f57fdeba1afdfb7dcbd8832f806bc48",
"text": "OBJECTIVES\nAdolescents spend increasingly more time on electronic devices, and sleep deficiency rising in adolescents constitutes a major public health concern. The aim of the present study was to investigate daytime screen use and use of electronic devices before bedtime in relation to sleep.\n\n\nDESIGN\nA large cross-sectional population-based survey study from 2012, the youth@hordaland study, in Hordaland County in Norway.\n\n\nSETTING\nCross-sectional general community-based study.\n\n\nPARTICIPANTS\n9846 adolescents from three age cohorts aged 16-19. The main independent variables were type and frequency of electronic devices at bedtime and hours of screen-time during leisure time.\n\n\nOUTCOMES\nSleep variables calculated based on self-report including bedtime, rise time, time in bed, sleep duration, sleep onset latency and wake after sleep onset.\n\n\nRESULTS\nAdolescents spent a large amount of time during the day and at bedtime using electronic devices. Daytime and bedtime use of electronic devices were both related to sleep measures, with an increased risk of short sleep duration, long sleep onset latency and increased sleep deficiency. A dose-response relationship emerged between sleep duration and use of electronic devices, exemplified by the association between PC use and risk of less than 5 h of sleep (OR=2.70, 95% CI 2.14 to 3.39), and comparable lower odds for 7-8 h of sleep (OR=1.64, 95% CI 1.38 to 1.96).\n\n\nCONCLUSIONS\nUse of electronic devices is frequent in adolescence, during the day as well as at bedtime. The results demonstrate a negative relation between use of technology and sleep, suggesting that recommendations on healthy media use could include restrictions on electronic devices.",
"title": ""
},
{
"docid": "7299cec968f909f2bfce5182190d9fb2",
"text": "Identifying and correcting syntax errors is a challenge all novice programmers confront. As educators, the more we understand about the nature of these errors and how students respond to them, the more effective our teaching can be. It is well known that just a few types of errors are far more frequently encountered by students learning to program than most. In this paper, we examine how long students spend resolving the most common syntax errors, and discover that certain types of errors are not solved any more quickly by the higher ability students. Moreover, we note that these errors consume a large amount of student time, suggesting that targeted teaching interventions may yield a significant payoff in terms of increasing student productivity.",
"title": ""
},
{
"docid": "0508773a4c1a753918f21b8b97848a62",
"text": "In this paper, the time dependent dielectric breakdown behavior is investigated for production type crystalline ZrO2-based thin films under dc and ac stress. Constant voltage stress measurements over six decades in time show that the voltage acceleration of time-to-breakdown follows the conventional exponential law. The effects of ac stress on time-to-breakdown are studied in detail by changing the experimental parameters including stress voltage, base voltage, and frequency. In general, ac stressing gives rise to a gain in lifetime, which may result from less overall charge trapping. This trap dynamic was investigated by dielectric absorption measurements. Overall, the typical DRAM refresh of the capacitor leads to the most critical reliability concern.",
"title": ""
},
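The conventional exponential law mentioned above relates time-to-breakdown to stress voltage as t_BD = t0 * exp(-gamma * V), so lifetimes measured under accelerated stress can be extrapolated to use conditions. A tiny sketch with placeholder numbers (gamma and the measured stress lifetime are illustrative, not values from the paper):

```python
import math

def lifetime_extrapolation(t_stress, V_stress, V_use, gamma):
    # Exponential voltage acceleration: t_BD(V) = t0 * exp(-gamma * V), so
    # t_BD(V_use) = t_BD(V_stress) * exp(gamma * (V_stress - V_use)).
    return t_stress * math.exp(gamma * (V_stress - V_use))

# Hypothetical example: 1000 s lifetime measured at 2.0 V, extrapolated to 1.1 V.
print(lifetime_extrapolation(t_stress=1e3, V_stress=2.0, V_use=1.1, gamma=12.0))
```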
{
"docid": "0a6a3e82b701bfbdbb73a9e8573fc94a",
"text": "Providing effective feedback on resource consumption in the home is a key challenge of environmental conservation efforts. One promising approach for providing feedback about residential energy consumption is the use of ambient and artistic visualizations. Pervasive computing technologies enable the integration of such feedback into the home in the form of distributed point-of-consumption feedback devices to support decision-making in everyday activities. However, introducing these devices into the home requires sensitivity to the domestic context. In this paper we describe three abstract visualizations and suggest four design requirements that this type of device must meet to be effective: pragmatic, aesthetic, ambient, and ecological. We report on the findings from a mixed methods user study that explores the viability of using ambient and artistic feedback in the home based on these requirements. Our findings suggest that this approach is a viable way to provide resource use feedback and that both the aesthetics of the representation and the context of use are important elements that must be considered in this design space.",
"title": ""
},
{
"docid": "a4cfe72cae5bdaed110299d652e60a6f",
"text": "Hoffa's (infrapatellar) fat pad (HFP) is one of the knee fat pads interposed between the joint capsule and the synovium. Located posterior to patellar tendon and anterior to the capsule, the HFP is richly innervated and, therefore, one of the sources of anterior knee pain. Repetitive local microtraumas, impingement, and surgery causing local bleeding and inflammation are the most frequent causes of HFP pain and can lead to a variety of arthrofibrotic lesions. In addition, the HFP may be secondarily involved to menisci and ligaments disorders, injuries of the patellar tendon and synovial disorders. Patients with oedema or abnormalities of the HFP on magnetic resonance imaging (MRI) are often symptomatic; however, these changes can also be seen in asymptomatic patients. Radiologists should be cautious in emphasising abnormalities of HFP since they do not always cause pain and/or difficulty in walking and, therefore, do not require therapy. Teaching Points • Hoffa's fat pad (HFP) is richly innervated and, therefore, a source of anterior knee pain. • HFP disorders are related to traumas, involvement from adjacent disorders and masses. • Patients with abnormalities of the HFP on MRI are often but not always symptomatic. • Radiologists should be cautious in emphasising abnormalities of HFP.",
"title": ""
},
{
"docid": "ad8b5a47ede41c39a3ac5fa462dc8815",
"text": "Because traditional electric power distribution systems have been designed assuming the primary substation is the sole source of power and short-circuit capacity, DR interconnection results in operating situations that do not occur in a conventional system. This paper discusses several system issues which may be encountered as DR penetrates into distribution systems. The voltage issues covered are the DR impact on system voltage, interaction of DR and capacitor operations, and interaction of DR and voltage regulator and LTC operations. Protection issues include fuse coordination, feeding faults after utility protection opens, impact of DR on interrupting rating of devices, faults on adjacent feeders, fault detection, ground source impacts, single phase interruption on three phase line, recloser coordination and conductor burndown. Loss of power grid is also discussed, including vulnerability and overvoltages due to islanding and coordination with reclosing. Also covered separately are system restoration and network issues.",
"title": ""
}
] |
scidocsrr
|
3413d476d50b59d2eea2a236c19f9c37
|
User-centric ultra-dense networks for 5G: challenges, methodologies, and directions
|
[
{
"docid": "e066761ecb7d8b7468756fb4be6b8fcb",
"text": "The surest way to increase the system capacity of a wireless link is by getting the transmitter and receiver closer to each other, which creates the dual benefits of higher-quality links and more spatial reuse. In a network with nomadic users, this inevitably involves deploying more infrastructure, typically in the form of microcells, hot spots, distributed antennas, or relays. A less expensive alternative is the recent concept of femtocells - also called home base stations - which are data access points installed by home users to get better indoor voice and data coverage. In this article we overview the technical and business arguments for femtocells and describe the state of the art on each front. We also describe the technical challenges facing femtocell networks and give some preliminary ideas for how to overcome them.",
"title": ""
},
{
"docid": "cdef5f6a50c1f427e8f37be3c6ebbccf",
"text": "In this article, we summarize the 5G mobile communication requirements and challenges. First, essential requirements for 5G are pointed out, including higher traffic volume, indoor or hotspot traffic, and spectrum, energy, and cost efficiency. Along with these changes of requirements, we present a potential step change for the evolution toward 5G, which shows that macro-local coexisting and coordinating paths will replace one macro-dominated path as in 4G and before. We hereafter discuss emerging technologies for 5G within international mobile telecommunications. Challenges and directions in hardware, including integrated circuits and passive components, are also discussed. Finally, a whole picture for the evolution to 5G is predicted and presented.",
"title": ""
}
] |
[
{
"docid": "2c15bef67e6bdbfaf66e1164f8dddf52",
"text": "Social behavior is ordinarily treated as being under conscious (if not always thoughtful) control. However, considerable evidence now supports the view that social behavior often operates in an implicit or unconscious fashion. The identifying feature of implicit cognition is that past experience influences judgment in a fashion not introspectively known by the actor. The present conclusion--that attitudes, self-esteem, and stereotypes have important implicit modes of operation--extends both the construct validity and predictive usefulness of these major theoretical constructs of social psychology. Methodologically, this review calls for increased use of indirect measures--which are imperative in studies of implicit cognition. The theorized ordinariness of implicit stereotyping is consistent with recent findings of discrimination by people who explicitly disavow prejudice. The finding that implicit cognitive effects are often reduced by focusing judges' attention on their judgment task provides a basis for evaluating applications (such as affirmative action) aimed at reducing such unintended discrimination.",
"title": ""
},
{
"docid": "eab311504e78caa71bcd56043cfc6570",
"text": "In sentence modeling and classification, convolutional neural network approaches have recently achieved state-of-the-art results, but all such efforts process word vectors sequentially and neglect long-distance dependencies. To exploit both deep learning and linguistic structures, we propose a tree-based convolutional neural network model which exploit various long-distance relationships between words. Our model improves the sequential baselines on all three sentiment and question classification tasks, and achieves the highest published accuracy on TREC.",
"title": ""
},
{
"docid": "4e2bfd87acf1287f36694634a6111b3f",
"text": "This paper presents a model for managing departure aircraft at the spot or gate on the airport surface. The model is applied over two time frames: long term (one hour in future) for collaborative decision making, and short term (immediate) for decisions regarding the release of aircraft. The purpose of the model is to provide the controller a schedule of spot or gate release times optimized for runway utilization. This model was tested in nominal and heavy surface traffic scenarios in a simulated environment, and results indicate average throughput improvement of 10% in high traffic scenarios even with up to two minutes of uncertainty in spot arrival times.",
"title": ""
},
{
"docid": "29c52509c5235db62e2a586dbaf07ff6",
"text": "This paper studies the area of fraud detection in the light of existing intrusion detection research. Fraud detection and intrusion detection have traditionally been two almost completely separate research areas. Fraud detection has long been used by such businesses as telecom companies, banks and insurance companies. Intrusion detection has recently become a popular means to protect computer systems and computer based services. Many of the services offered by businesses using fraud detection are now computer based, thus opening new ways of committing fraud not covered by traditional fraud detection systems. Merging fraud detection with intrusion detection may be a solution for protecting new computer based services. An IP based telecom service is used as an example to illustrate these new problems and the use of a suggested fraud model.",
"title": ""
},
{
"docid": "085f6b8b53bd2e7afb5558e5b0b0356a",
"text": "Computer graphics applications controlled through natural gestures are gaining increasing popularity these days due to recent developments in low-cost tracking systems and gesture recognition technologies. Although interaction techniques through natural gestures have already demonstrated their benefits in manipulation, navigation and avatar-control tasks, effective selection with pointing gestures remains an open problem. In this paper we survey the state-of-the-art in 3D object selection techniques. We review important findings in human control models, analyze major factors influencing selection performance, and classify existing techniques according to a number of criteria. Unlike other components of the application’s user interface, pointing techniques need a close coupling with the rendering pipeline, introducing new elements to be drawn, and potentially modifying the object layout and the way the scene is rendered. Conversely, selection performance is affected by rendering issues such as visual feedback, depth perception, and occlusion management. We thus review existing literature paying special attention to those aspects in the boundary between computer graphics and human computer interaction.",
"title": ""
},
{
"docid": "1a59bf4467e73a6cae050e5670dbf4fa",
"text": "BACKGROUND\nNivolumab combined with ipilimumab resulted in longer progression-free survival and a higher objective response rate than ipilimumab alone in a phase 3 trial involving patients with advanced melanoma. We now report 3-year overall survival outcomes in this trial.\n\n\nMETHODS\nWe randomly assigned, in a 1:1:1 ratio, patients with previously untreated advanced melanoma to receive nivolumab at a dose of 1 mg per kilogram of body weight plus ipilimumab at a dose of 3 mg per kilogram every 3 weeks for four doses, followed by nivolumab at a dose of 3 mg per kilogram every 2 weeks; nivolumab at a dose of 3 mg per kilogram every 2 weeks plus placebo; or ipilimumab at a dose of 3 mg per kilogram every 3 weeks for four doses plus placebo, until progression, the occurrence of unacceptable toxic effects, or withdrawal of consent. Randomization was stratified according to programmed death ligand 1 (PD-L1) status, BRAF mutation status, and metastasis stage. The two primary end points were progression-free survival and overall survival in the nivolumab-plus-ipilimumab group and in the nivolumab group versus the ipilimumab group.\n\n\nRESULTS\nAt a minimum follow-up of 36 months, the median overall survival had not been reached in the nivolumab-plus-ipilimumab group and was 37.6 months in the nivolumab group, as compared with 19.9 months in the ipilimumab group (hazard ratio for death with nivolumab plus ipilimumab vs. ipilimumab, 0.55 [P<0.001]; hazard ratio for death with nivolumab vs. ipilimumab, 0.65 [P<0.001]). The overall survival rate at 3 years was 58% in the nivolumab-plus-ipilimumab group and 52% in the nivolumab group, as compared with 34% in the ipilimumab group. The safety profile was unchanged from the initial report. Treatment-related adverse events of grade 3 or 4 occurred in 59% of the patients in the nivolumab-plus-ipilimumab group, in 21% of those in the nivolumab group, and in 28% of those in the ipilimumab group.\n\n\nCONCLUSIONS\nAmong patients with advanced melanoma, significantly longer overall survival occurred with combination therapy with nivolumab plus ipilimumab or with nivolumab alone than with ipilimumab alone. (Funded by Bristol-Myers Squibb and others; CheckMate 067 ClinicalTrials.gov number, NCT01844505 .).",
"title": ""
},
{
"docid": "9735cecc4d8419475c72c4bd52ab556e",
"text": "Information diffusion and virus propagation are fundamental processes talking place in networks. While it is often possible to directly observe when nodes become infected, observing individual transmissions (i.e., who infects whom or who influences whom) is typically very difficult. Furthermore, in many applications, the underlying network over which the diffusions and propagations spread is actually unobserved. We tackle these challenges by developing a method for tracing paths of diffusion and influence through networks and inferring the networks over which contagions propagate. Given the times when nodes adopt pieces of information or become infected, we identify the optimal network that best explains the observed infection times. Since the optimization problem is NP-hard to solve exactly, we develop an efficient approximation algorithm that scales to large datasets and in practice gives provably near-optimal performance. We demonstrate the effectiveness of our approach by tracing information cascades in a set of 170 million blogs and news articles over a one year period to infer how information flows through the online media space. We find that the diffusion network of news tends to have a core-periphery structure with a small set of core media sites that diffuse information to the rest of the Web. These sites tend to have stable circles of influence with more general news media sites acting as connectors between them.",
"title": ""
},
{
"docid": "87fefee3cb35d188ad942ee7c8fad95f",
"text": "Financial frictions are a central element of most of the models that the literature on emerging markets crises has proposed for explaining the ‘Sudden Stop’ phenomenon. To date, few studies have aimed to examine the quantitative implications of these models and to integrate them with an equilibrium business cycle framework for emerging economies. This paper surveys these studies viewing them as ability-to-pay and willingness-to-pay variations of a framework that adds occasionally binding borrowing constraints to the small open economy real-business-cycle model. A common feature of the different models is that agents factor in the risk of future Sudden Stops in their optimal plans, so that equilibrium allocations and prices are distorted even when credit constraints do not bind. Sudden Stops are a property of the unique, flexible-price competitive equilibrium of these models that occurs in a particular region of the state space in which negative shocks make borrowing constraints bind. The resulting nonlinear effects imply that solving the models requires non-linear numerical methods, which are described in the survey. The results show that the models can yield relatively infrequent Sudden Stops with large current account reversals and deep recessions nested within smoother business cycles. Still, research in this area is at an early stage and this survey aims to stimulate further work. Cristina Arellano Enrique G. Mendoza Department of Economics Department of Economics Social Sciences Building University of Maryland Duke University College Park, MD 20742 Durham, NC 27708-0097 and NBER mendozae@econ.duke.edu",
"title": ""
},
{
"docid": "85d1d340f41d2da04d1dea7d70801df1",
"text": "In this Part II of this paper we first refine the analysis of error-free vector transformations presented in Part I. Based on that we present an algorithm for calculating the rounded-to-nearest result of s := ∑ pi for a given vector of floatingpoint numbers pi, as well as algorithms for directed rounding. A special algorithm for computing the sign of s is given, also working for huge dimensions. Assume a floating-point working precision with relative rounding error unit eps. We define and investigate a K-fold faithful rounding of a real number r. Basically the result is stored in a vector Resν of K non-overlapping floating-point numbers such that ∑ Resν approximates r with relative accuracy epsK , and replacing ResK by its floating-point neighbors in ∑ Resν forms a lower and upper bound for r. For a given vector of floating-point numbers with exact sum s, we present an algorithm for calculating a K-fold faithful rounding of s using solely the working precision. Furthermore, an algorithm for calculating a faithfully rounded result of the sum of a vector of huge dimension is presented. Our algorithms are fast in terms of measured computing time because they allow good instruction-level parallelism, they neither require special operations such as access to mantissa or exponent, they contain no branch in the inner loop, nor do they require some extra precision: The only operations used are standard floating-point addition, subtraction and multiplication in one working precision, for example double precision. Certain constants used in the algorithms are proved to be optimal.",
"title": ""
},
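The error-free vector transformations underlying the K-fold faithful rounding above are built from the TwoSum primitive, which returns the rounded sum together with its exact rounding error. A compact sketch of TwoSum and a K-fold cascaded summation in its spirit (this follows the well-known Ogita-Rump-Oishi scheme rather than reproducing the paper's exact algorithms):

```python
def two_sum(a, b):
    # Error-free transformation: returns (s, e) with s = fl(a+b) and a + b = s + e exactly.
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

def sum_k(p, K=2):
    # Cascaded (K-fold) compensated summation: repeatedly apply the error-free
    # vector transformation, then add up the residuals.
    p = list(p)
    for _ in range(K - 1):
        for i in range(1, len(p)):
            p[i], p[i - 1] = two_sum(p[i], p[i - 1])
    return sum(p[:-1]) + p[-1]

# An ill-conditioned example with exact sum 2000.
data = [1e16, 1.0, -1e16, 1.0] * 1000
print(sum(data))         # plain left-to-right summation: badly wrong (the +/-1 terms are absorbed)
print(sum_k(data, K=2))  # compensated summation recovers 2000.0
```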
{
"docid": "8f2c7770fdcd9bfe6a7e9c6e10569fc7",
"text": "The purpose of this paper is to explore the importance of Information Technology (IT) Governance models for public organizations and presenting an IT Governance model that can be adopted by both practitioners and researchers. A review of the literature in IT Governance has been initiated to shape the intended theoretical background of this study. The systematic literature review formalizes a richer context for the IT Governance concept. An empirical survey, using a questionnaire based on COBIT 4.1 maturity model used to investigate IT Governance practice in multiple case studies from Kingdom of Bahrain. This method enabled the researcher to gain insights to evaluate IT Governance practices. The results of this research will enable public sector organizations to adopt an IT Governance model in a simple and dynamic manner. The model provides a basic structure of a concept; for instance, this allows organizations to gain a better perspective on IT Governance processes and provides a clear focus for decision-making attention. IT Governance model also forms as a basis for further research in IT Governance adoption models and bridges the gap between conceptual frameworks, real life and functioning governance.",
"title": ""
},
{
"docid": "40f452c48367c51cfe6bd95a6b8f9548",
"text": "This paper presents a new single-phase, Hybrid Switched Reluctance (HSR) motor for low-cost, low-power, pump or fan drive systems. Its single-phase configuration allows use of a simple converter to reduce the system cost. Cheap ferrite magnets are used and arranged in a special flux concentration manner to increase effectively the torque density and efficiency of this machine. The efficiency of this machine is comparable to the efficiency of a traditional permanent magnet machine in the similar power range. The cogging torque, due to the existence of the permanent magnetic field, is beneficially used to reduce the torque ripple and enable self-starting of the machine. The starting torque of this machine is significantly improved by a slight extension of the stator pole-arc. A prototype machine and a complete drive system has been manufactured and tested. Results are given in this paper.",
"title": ""
},
{
"docid": "5a4959ef609e2ed64018aed292b7f27f",
"text": "With thousands of alerts identified by IDSs every day, the process of distinguishing which alerts are important (i.e., true positives) and which are is irrelevant (i.e., false positives) is become more complicated. The security administrator must analyze each single alert either a true of false alert. This paper proposes an alert prioritization model, which is based on risk assessment. The model uses indicators, such as priority, reliability, asset value, as decision factors to calculate alert's risk. The objective is to determine the impact of certain alerts generated by IDS on the security status of an information system, also improve the detection of intrusions using snort by classifying the most critical alerts by their levels of risk, thus, only the alerts that presents a real threat will be displayed to the security administrator, so, we reduce the number of false positives, also we minimize the analysis time of the alerts. The model was evaluated using KDD Cup 99 Dataset as test environment and a pattern matching algorithm.",
"title": ""
},
{
"docid": "e7a6bb8f63e35f3fb0c60bdc26817e03",
"text": "A simple mechanism is presented, based on ant-like agents, for routing and load balancing in telecommunications networks, following the initial works of Appleby and Stewart (1994) and Schoonderwoerd et al. (1997). In the present work, agents are very similar to those proposed by Schoonderwoerd et al. (1997), but are supplemented with a simplified dynamic programming capability, initially experimented by Guérin (1997) with more complex agents, which is shown to significantly improve the network's relaxation and its response to perturbations. Topic area: Intelligent agents and network management",
"title": ""
},
{
"docid": "a637d37cb1c4a937b64494903b33193d",
"text": "The multienzyme complexes, pyruvate dehydrogenase and alpha-ketoglutarate dehydrogenase, involved in the central metabolism of Escherichia coli consist of multiple copies of three different enzymes, E1, E2 and E3, that cooperate to channel substrate intermediates between their active sites. The E2 components form the core of the complex, while a mixture of E1 and E3 components binds to the core. We present a random steady-state model to describe catalysis by such multienzyme complexes. At a fast time scale, the model describes the enzyme catalytic mechanisms of substrate channeling at a steady state, by polynomially approximating the analytic solution of a biochemical master equation. At a slower time scale, the structural organization of the different enzymes in the complex and their random binding/unbinding to the core is modeled using methods from equilibrium statistical mechanics. Biologically, the model describes the optimization of catalytic activity by substrate sharing over the entire enzyme complex. The resulting enzymatic models illustrate the random steady state (RSS) for modeling multienzyme complexes in metabolic pathways.",
"title": ""
},
{
"docid": "2487c225879ab88c0d56ab9c91793346",
"text": "The purpose of this article is to propose a Sustainable Balanced Scorecard model for Chilean wineries (SBSC). This system, which is based on the Balanced Scorecard (BSC), one of the most widespread management systems nowadays in the world, Rigby, and Bilodeau (2011), will allow the wine companies to manage the business in two dimensions: sustainability, which will measure how sustainable is the business and the temporal dimension, linking the measurement of strategic performance with the day to day. To achieve the target previously raised, a research on sustainability will be developed, along with strategic performance measurement systems and a diagnosis of the Chilean wine industry, based on in-depth interviews to 42 companies in the central zone of Chile. On the basis of the assessment of the wine industry carried out, it is concluded that the bases for a future design and implementation of the SBSC system are in place since it was found that 83% of the vineyards have a strategic plan formally in place, which corresponds to the input of the proposed system.",
"title": ""
},
{
"docid": "9eb4a4519e9a1e3a7547520a23adcaf2",
"text": "We introduce an approach for deep reinforcement learning (RL) that improves upon the efficiency, generalization capacity, and interpretability of conventional approaches through structured perception and relational reasoning. It uses self-attention to iteratively reason about the relations between entities in a scene and to guide a model-free policy. Our results show that in a novel navigation and planning task called Box-World, our agent finds interpretable solutions that improve upon baselines in terms of sample complexity, ability to generalize to more complex scenes than experienced during training, and overall performance. In the StarCraft II Learning Environment, our agent achieves state-of-the-art performance on six mini-games – surpassing human grandmaster performance on four. By considering architectural inductive biases, our work opens new directions for overcoming important, but stubborn, challenges in deep RL.",
"title": ""
},
{
"docid": "fdc580124be4f1398976d4161791bf8a",
"text": "Child abuse is a problem that affects over six million children in the United States each year. Child neglect accounts for 78 % of those cases. Despite this, the issue of child neglect is still not well understood, partially because child neglect does not have a consistent, universally accepted definition. Some researchers consider child neglect and child abuse to be one in the same, while other researchers consider them to be conceptually different. Factors that make child neglect difficult to define include: (1) Cultural differences; motives must be taken into account because parents may believe they are acting in the child’s best interests based on cultural beliefs (2) the fact that the effect of child abuse is not always immediately visible; the effects of emotional neglect specifically may not be apparent until later in the child’s development, and (3) the large spectrum of actions that fall under the category of child abuse. Some of the risk factors for increased child neglect and maltreatment have been identified. These risk factors include socioeconomic status, education level, family composition, and the presence of dysfunction family characteristics. Studies have found that children from poorer families and children of less educated parents are more likely to sustain fatal unintentional injuries than children of wealthier, better educated parents. Studies have also found that children living with adults unrelated to them are at increased risk for unintentional injuries and maltreatment. Dysfunctional family characteristics may even be more indicative of child neglect. Parental alcohol or drug abuse, parental personal history of neglect, and parental stress greatly increase the odds of neglect. Parental depression doubles the odds of child neglect. However, more research needs to be done to better understand these risk factors and to identify others. Having a clearer understanding of the risk factors could lead to prevention and treatment, as it would allow for health care personnel to screen for high-risk children and intervene before it is too late. Screening could also be done in the schools and organized after school activities. Parenting classes have been shown to be an effective intervention strategy by decreasing parental stress and potential for abuse, but there has been limited research done on this approach. Parenting classes can be part of the corrective actions for parents found to be neglectful or abusive, but parenting classes may also be useful as a preventative measure, being taught in schools or readily available in higher-risk communities. More research has to be done to better define child abuse and neglect so that it can be effectively addressed and treated.",
"title": ""
},
{
"docid": "4b432638ecceac3d1948fb2b2e9be49b",
"text": "Software process refers to the set of tools, methods, and practices used to produce a software artifact. The objective of a software process management model is to produce software artifacts according to plans while simultaneously improving the organization's capability to produce better artifacts. The SEI's Capability Maturity Model (CMM) is a software process management model; it assists organizations to provide the infrastructure for achieving a disciplined and mature software process. There is a growing concern that the CMM is not applicable to small firms because it requires a huge investment. In fact, detailed studies of the CMM show that its applications may cost well over $100,000. This article attempts to address the above concern by studying the feasibility of a scaled-down version of the CMM for use in small software firms. The logic for a scaled-down CMM is that the same quantitative quality control principles that work for larger projects can be scaled-down and adopted for smaller ones. Both the CMM and the Personal Software Process (PSP) are briefly described and are used as basis.",
"title": ""
},
{
"docid": "23d560ca3bb6f2d7d9b615b5ad3224d2",
"text": "The Pebbles project is creating applications to connmt multiple Personal DigiM Assistants &DAs) to a main computer such as a PC We are cmenfly using 3Com Pd@Ilots b-use they are popdar and widespread. We created the ‘Remote Comrnandefl application to dow users to take turns sending input from their PahnPiiots to the PC as if they were using the PCS mouse and keyboard. ‘.PebblesDraw” is a shared whiteboard application we btit that allows dl of tie users to send input simtdtaneously while sharing the same PC display. We are investigating the use of these applications in various contexts, such as colocated mmtings. Keywor& Personal Digiti Assistants @DAs), PH11oc Single Display Groupware, Pebbles, AmuleL",
"title": ""
},
{
"docid": "6927647b1e1f6bf9bcf65db50e9f8d6e",
"text": "Six of the ten leading causes of death in the United States can be directly linked to diet. Measuring accurate dietary intake, the process of determining what someone eats is considered to be an open research problem in the nutrition and health fields. We are developing image-based tools in order to automatically obtain accurate estimates of what foods a user consumes. We have developed a novel food record application using the embedded camera in a mobile device. This paper describes the current status of food image analysis and overviews problems that still need to be addressed.",
"title": ""
}
] |
scidocsrr
|
814c1782754e9015ed744f83d481626f
|
Japan's 2014 General Election: Political Bots, Right-Wing Internet Activism, and Prime Minister Shinzō Abe's Hidden Nationalist Agenda
|
[
{
"docid": "940df82b743d99cb3f6dff903920482f",
"text": "Online publishing, social networks, and web search have dramatically lowered the costs to produce, distribute, and discover news articles. Some scholars argue that such technological changes increase exposure to diverse perspectives, while others worry they increase ideological segregation. We address the issue by examining web browsing histories for 50,000 U.S.-located users who regularly read online news. We find that social networks and search engines increase the mean ideological distance between individuals. However, somewhat counterintuitively, we also find these same channels increase an individual’s exposure to material from his or her less preferred side of the political spectrum. Finally, we show that the vast majority of online news consumption is accounted for by individuals simply visiting the home pages of their favorite, typically mainstream, news outlets, tempering the consequences—both positive and negative—of recent technological changes. We thus uncover evidence for both sides of the debate, while also finding that the magnitude of the e↵ects are relatively modest. WORD COUNT: 5,762 words",
"title": ""
},
{
"docid": "0bc29304bd058053d6d0440f60f884d5",
"text": "YouTube is one of the more powerful tools for self-learning and entertaining globally. Uploading and sharing on YouTube have increased recently as these are possible via a simple click. Moreover, some countries, including Saudi Arabia, use this technology more than others. While there are many Saudi channels and videos for all age groups, there are limited channels for people with disabilities such as Deaf and Hard of Hearing people (DHH). The utilization of YouTube among DHH people has not reached its full potential. To investigate this phenomenon, we conducted an empirical research study to uncover factors influencing DHH people’s motivations, perceptions and adoption of YouTube, based on the Technology Acceptance Model (TAM). The results showed that DHH people pinpoint some useful functions in YouTube, such as the captions in English and the translation in Arabic. However, Arab DHH people are not sufficiently motivated to watch YouTube due to the fact that the YouTube time-span is fast and DHH personnel prefer greater time to allow them to read and understand the contents. Hence, DHH people tend to avoid sharing YouTube videos among their contacts.",
"title": ""
}
] |
[
{
"docid": "2f0eb4a361ff9f09bda4689a1f106ff2",
"text": "The growth of Quranic digital publishing increases the need to develop a better framework to authenticate Quranic quotes with the original source automatically. This paper aims to demonstrate the significance of the quote authentication approach. We propose an approach to verify the e-citation of the Quranic quote as compared with original texts from the Quran. In this paper, we will concentrate mainly on discussing the Algorithm to verify the fundamental text for Quranic quotes.",
"title": ""
},
{
"docid": "eff3b5c790b62021d4615f4a1708d707",
"text": "Web services are becoming business-critical components that must provide a non-vulnerable interface to the client applications. However, previous research and practice show that many web services are deployed with critical vulnerabilities. SQL Injection vulnerabilities are particularly relevant, as web services frequently access a relational database using SQL commands. Penetration testing and static code analysis are two well-know techniques often used for the detection of security vulnerabilities. In this work we compare how effective these two techniques are on the detection of SQL Injection vulnerabilities in web services code. To understand the strengths and limitations of these techniques, we used several commercial and open source tools to detect vulnerabilities in a set of vulnerable services. Results suggest that, in general, static code analyzers are able to detect more SQL Injection vulnerabilities than penetration testing tools. Another key observation is that tools implementing the same detection approach frequently detect different vulnerabilities. Finally, many tools provide a low coverage and a high false positives rate, making them a bad option for programmers.",
"title": ""
},
{
"docid": "ddc56e9f2cbe9c086089870ccec7e510",
"text": "Serotonin is an ancient monoamine neurotransmitter, biochemically derived from tryptophan. It is most abundant in the gastrointestinal tract, but is also present throughout the rest of the body of animals and can even be found in plants and fungi. Serotonin is especially famous for its contributions to feelings of well-being and happiness. More specifically it is involved in learning and memory processes and is hence crucial for certain behaviors throughout the animal kingdom. This brief review will focus on the metabolism, biological role and mode-of-action of serotonin in insects. First, some general aspects of biosynthesis and break-down of serotonin in insects will be discussed, followed by an overview of the functions of serotonin, serotonin receptors and their pharmacology. Throughout this review comparisons are made with the vertebrate serotonergic system. Last but not least, possible applications of pharmacological adjustments of serotonin signaling in insects are discussed.",
"title": ""
},
{
"docid": "6f9afe3cbf5cc675c6b4e96ee2ccfa76",
"text": "As more firms begin to collect (and seek value from) richer customer-level datasets, a focus on the emerging concept of customer-base analysis is becoming increasingly common and critical. Such analyses include forward-looking projections ranging from aggregate-level sales trajectories to individual-level conditional expectations (which, in turn, can be used to derive estimates of customer lifetime value). We provide an overview of a class of parsimonious models (called probability models) that are well-suited to meet these rising challenges. We first present a taxonomy that captures some of the key distinctions across different kinds of business settings and customer relationships, and identify some of the unique modeling and measurement issues that arise across them. We then provide deeper coverage of these modeling issues, first for noncontractual settings (i.e., situations in which customer “death” is unobservable), then contractual ones (i.e., situations in which customer “death” can be observed). We review recent literature in these areas, highlighting substantive insights that arise from the research as well as the methods used to capture them. We focus on practical applications that use appropriately chosen data summaries (such as recency and frequency) and rely on commonly available software packages (such as Microsoft Excel). n 2009 Direct Marketing Educational Foundation, Inc. Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d089515aa3325010616010d9017f158e",
"text": "We report a receiver for four-level pulse-amplitude modulated (PAM-4) encoded data signals, which was measured to receive data at 22 Gb/s with a bit error rate (BER) <10/sup -12/ at a maximum frequency deviation of 350 ppm and a 2/sup 7/-1 PRBS pattern. We propose a bit-sliced architecture for the data path, and a novel voltage shifting amplifier to introduce a programmable offset to the differential data signal. We present a novel method to characterize sampling latches and include them in the data path. A current-mode logic (CML) biasing scheme using programmable matched resistors limits the effect of process variations. The receiver also features a programmable signal termination, an analog equalizer and offset compensation for each sampling latch. The measured current consumption is 207 mA from a 1.1-V supply, and the active chip area is 0.12 mm/sup 2/.",
"title": ""
},
{
"docid": "5efe4e98fd21e83033669aaf58857bf6",
"text": "Object detection in optical remote sensing images, being a fundamental but challenging problem in the field of aerial and satellite image analysis, plays an important role for a wide range of applications and is receiving significant attention in recent years. While enormous methods exist, a deep review of the literature concerning generic object detection is still lacking. This paper aims to provide a review of the recent progress in this field. Different from several previously published surveys that focus on a specific object class such as building and road, we concentrate on more generic object categories including, but are not limited to, road, building, tree, vehicle, ship, airport, urban-area. Covering about 270 publications we survey 1) template matching-based object detection methods, 2) knowledge-based object detection methods, 3) object-based image analysis (OBIA)-based object detection methods, 4) machine learning-based object detection methods, and 5) five publicly available datasets and three standard evaluation metrics. We also discuss the challenges of current studies and propose two promising research directions, namely deep learning-based feature representation and weakly supervised learning-based geospatial object detection. It is our hope that this survey will be beneficial for the researchers to have better understanding of this research field.",
"title": ""
},
{
"docid": "646a1a07019d0f2965051baebcfe62c5",
"text": "We present a computing model based on the DNA strand displacement technique, which performs Bayesian inference. The model will take single-stranded DNA as input data, that represents the presence or absence of a specific molecular signal (evidence). The program logic encodes the prior probability of a disease and the conditional probability of a signal given the disease affecting a set of different DNA complexes and their ratios. When the input and program molecules interact, they release a different pair of single-stranded DNA species whose ratio represents the application of Bayes’ law: the conditional probability of the disease given the signal. The models presented in this paper can have the potential to enable the application of probabilistic reasoning in genetic diagnosis in vitro.",
"title": ""
},
{
"docid": "6ccb58b003394200846205914989b88f",
"text": "This paper describes a new, large scale discourse-level annotation project – the Penn Discourse TreeBank (PDTB). We present an approach to annotating a level of discourse structure that is based on identifying discourse connectives and their arguments. The PDTB is being built directly on top of the Penn TreeBank and Propbank, thus supporting the extraction of useful syntactic and semantic features and providing a richer substrate for the development and evaluation of practical algorithms. We provide a detailed preliminary analysis of inter-annotator agreement – both the level of agreement and the types of inter-annotator variation.",
"title": ""
},
{
"docid": "11d8d62d92cb5cda76f817530132bd3e",
"text": "This paper describes the development of a human airbag system which is designed to reduce the impact force from falls. A micro inertial measurement unit (muIMU), based on MEMS accelerometers and gyro sensors is developed as the motion sensing part of the system. A recognition algorithm is used for real-time fall determination. With the algorithm, a microcontroller integrated with the muIMU can discriminate falling-down motion from normal human motions and trigger an airbag system when a fall occurs. Our airbag system is designed to have fast response with moderate input pressure, i.e., the experimental response time is less than 0.3 second under 0.4 MPa. In addition, we present our progress on using support vector machine (SVM) training together with the muIMU to better distinguish falling and normal motions. Experimental results show that selected eigenvector sets generated from 200 experimental data sets can be accurately separated into falling and other motions",
"title": ""
},
{
"docid": "b4fddc33bdf1afc1bc3e867d8d560bf1",
"text": "Answering questions from university admission exams (Gaokao in Chinese) is a challenging AI task since it requires effective representation to capture complicated semantic relations between questions and answers. In this work, we propose a hybrid neural model for deep questionanswering task from history examinations. Our model employs a cooperative gated neural network to retrieve answers with the assistance of extra labels given by a neural turing machine labeler. Empirical study shows that the labeler works well with only a small training dataset and the gated mechanism is good at fetching the semantic representation of lengthy answers. Experiments on question answering demonstrate the proposed model obtains substantial performance gains over various neural model baselines in terms of multiple evaluation metrics.",
"title": ""
},
{
"docid": "38f85a10e8f8b815974f5e42386b1fa3",
"text": "Because Facebook is available on hundreds of millions of desktop and mobile computing platforms around the world and because it is available on many different kinds of platforms (from desktops and laptops running Windows, Unix, or OS X to hand held devices running iOS, Android, or Windows Phone), it would seem to be the perfect place to conduct steganography. On Facebook, information hidden in image files will be further obscured within the millions of pictures and other images posted and transmitted daily. Facebook is known to alter and compress uploaded images so they use minimum space and bandwidth when displayed on Facebook pages. The compression process generally disrupts attempts to use Facebook for image steganography. This paper explores a method to minimize the disruption so JPEG images can be used as steganography carriers on Facebook.",
"title": ""
},
{
"docid": "65946b75e84eaa86caf909d4c721a190",
"text": "The Park Geun-hye Administration of Korea (2013–2017) aims to increase the level of transparency and citizen trust in government through the Government 3.0 initiative. This new initiative for public sector innovation encourages citizen-government collaboration and collective intelligence, thereby improving the quality of policy-making and implementation and solving public problems in a new way. However, the national initiative that identifies collective intelligence and citizen-government collaboration alike fails to understand what the wisdom of crowds genuinely means. Collective intelligence is not a magic bullet to solve public problems, which are called “wicked problems”. Collective deliberation over public issues often brings pain and patience, rather than fun and joy. It is not so easy that the public finds the best solution for soothing public problems through collective deliberation. The Government 3.0 initiative does not pay much attention to difficulties in gathering scattered wisdom, but rather highlights uncertain opportunities created by collective interactions and communications. This study deeply discusses the weaknesses in the logic of, and approach to, collective intelligence underlying the Government 3.0 initiative in Korea and the overall influence of the national initiative on participatory democracy.",
"title": ""
},
{
"docid": "67ca7b4e38b545cd34ef79f305655a45",
"text": "Failsafe performance is clarified for electric vehicles (EVs) with the drive structure driven by front and rear wheels independently, i.e., front and rear wheel independent drive type (FRID) EV. A simulator based on the four-wheel vehicle model, which can be applied to various types of drive systems like four in-wheel motor-drive-type EVs, is used for the clarification. Yaw rate and skid angle, which are related to drivability and steerability of vehicles and which further influence the safety of vehicles during runs, are analyzed under the condition that one of the motor drive systems fails while cornering on wet roads. In comparison with the four in-wheel motor-drive-type EVs, it is confirmed that the EVs with the structure focused in this paper have little change of the yaw rate and that hardly any dangerous phenomena appear, which would cause an increase in the skid angle of vehicles even if the front or rear wheel drive systems fail when running on wet roads with low friction coefficient. Moreover, the failsafe drive performance of the FRID EVs with the aforementioned structure is verified through experiments using a prototype EV.",
"title": ""
},
{
"docid": "d0a9e27e2a8e4f6c2f40355bdc7a0a97",
"text": "The abilities to identify with others and to distinguish between self and other play a pivotal role in intersubjective transactions. Here, we marshall evidence from developmental science, social psychology and neuroscience (including clinical neuropsychology) that support the view of a common representation network (both at the computational and neural levels) between self and other. However, sharedness does not mean identicality, otherwise representations of self and others would completely overlap, and lead to confusion. We argue that self-awareness and agency are integral components for navigating within these shared representations. We suggest that within this shared neural network the inferior parietal cortex and the prefrontal cortex in the right hemisphere play a special role in interpersonal awareness.",
"title": ""
},
{
"docid": "6b19185466fb134b6bfb09b04b9e4b15",
"text": "BACKGROUND\nThe increasing concern about the adverse effects of overuse of smartphones during clinical practicum implies the need for policies restricting smartphone use while attending to patients. It is important to educate health personnel about the potential risks that can arise from the associated distraction.\n\n\nOBJECTIVE\nThe aim of this study was to analyze the relationship between the level of nomophobia and the distraction associated with smartphone use among nursing students during their clinical practicum.\n\n\nMETHODS\nA cross-sectional study was carried out on 304 nursing students. The nomophobia questionnaire (NMP-Q) and a questionnaire about smartphone use, the distraction associated with it, and opinions about phone restriction policies in hospitals were used.\n\n\nRESULTS\nA positive correlation between the use of smartphones and the total score of nomophobia was found. In the same way, there was a positive correlation between opinion about smartphone restriction polices with each of the dimensions of nomophobia and the total score of the questionnaire.\n\n\nCONCLUSIONS\nNursing students who show high levels of nomophobia also regularly use their smartphones during their clinical practicum, although they also believe that the implementation of policies restricting smartphone use while working is necessary.",
"title": ""
},
{
"docid": "1abef5c69eab484db382cdc2a2a1a73f",
"text": "Conventional methods of 3D object generative modeling learn volumetric predictions using deep networks with 3D convolutional operations, which are direct analogies to classical 2D ones. However, these methods are computationally wasteful in attempt to predict 3D shapes, where information is rich only on the surfaces. In this paper, we propose a novel 3D generative modeling framework to efficiently generate object shapes in the form of dense point clouds. We use 2D convolutional operations to predict the 3D structure from multiple viewpoints and jointly apply geometric reasoning with 2D projection optimization. We introduce the pseudo-renderer, a differentiable module to approximate the true rendering operation, to synthesize novel depth maps for optimization. Experimental results for single-image 3D object reconstruction tasks show that we outperforms state-of-the-art methods in terms of shape similarity and prediction density.",
"title": ""
},
{
"docid": "2fb484ef6d394e27a3157774048c3917",
"text": "As the demand of high quality service in next generation wireless communication systems increases, a high performance of data transmission requires an increase of spectrum efficiency and an improvement of error performance in wireless communication systems. One of the promising approaches to 4G is adaptive OFDM (AOFDM). In AOFDM, adaptive transmission scheme is employed according to channel fading condition with OFDM to improve the performance Adaptive modulation system is superior to fixed modulation system since it changes modulation scheme according to channel fading condition. Performance of adaptive modulation system depends on decision making logic. Adaptive modulation systems using hardware decision making circuits are inefficient to decide or change modulation scheme according to given conditions. Using fuzzy logic in decision making interface makes the system more efficient. In this paper, we propose a OFDM system with adaptive modulation using fuzzy logic interface to improve system capacity with maintaining good error performance. The results of computer simulation show the improvement of system capacity in Rayleigh fading channel.",
"title": ""
},
{
"docid": "35d9cfbb5f0b2623ce83973ae3235c74",
"text": "Text entry has been a bottleneck of nontraditional computing devices. One of the promising methods is the virtual keyboard for touch screens. Correcting previous estimates on virtual keyboard efficiency in the literature, we estimated the potential performance of the existing QWERTY, FITALY, and OPTI designs of virtual keyboards to be in the neighborhood of 28, 36, and 38 words per minute (wpm), respectively. This article presents 2 quantitative design techniques to search for virtual keyboard layouts. The first technique simulated the dynamics of a keyboard with digraph springs between keys, which produced a Hooke keyboard with 41.6 wpm movement efficiency. The second technique used a Metropolis random walk algorithm guided by a “Fitts-digraph energy” objective function that quantifies the movement efficiency of a virtual keyboard. This method produced various Metropolis keyboards with different HUMAN-COMPUTER INTERACTION, 2002, Volume 17, pp. 89–XXX Copyright © 2002, Lawrence Erlbaum Associates, Inc. Shumin Zhai is a human–computer interaction researcher with an interest in inventing and analyzing interaction methods and devices based on human performance insights and experimentation; he is a Research Staff Member in the User Sciences and Experience Research Department of the IBM Almaden Research Center. Michael Hunter is a graduate student of Computer Science at Brigham Young University; he is interested in designing graphical and haptic user interfaces. Barton A. Smith is an experimental scientist with an interest in machines, people, and society; he is manager of the Human Interface Research Group at the IBM Almaden Research Center. shapes and structures with approximately 42.5 wpm movement efficiency, which was 50% higher than QWERTY and 10% higher than OPTI. With a small reduction (41.16 wpm) of movement efficiency, we introduced 2 more design objectives that produced the ATOMIK layout. One was alphabetical tuning that placed the keys with a tendency from A to Z so a novice user could more easily locate the keys. The other was word connectivity enhancement so the most frequent words were easier to find, remember, and type.",
"title": ""
},
{
"docid": "39ed08e9a08b7d71a4c177afe8f0056a",
"text": "This paper proposes an anticipation model of potential customers’ purchasing behavior. This model is inferred from past purchasing behavior of loyal customers and the web server log files of loyal and potential customers by means of clustering analysis and association rules analysis. Clustering analysis collects key characteristics of loyal customers’ personal information; these are used to locate other potential customers. Association rules analysis extracts knowledge of loyal customers’ purchasing behavior, which is used to detect potential customers’ near-future interest in a star product. Despite using offline analysis to filter out potential customers based on loyal customers’ personal information and generate rules of loyal customers’ click streams based on loyal customers’ web log data, an online analysis which observes potential customers’ web logs and compares it with loyal customers’ click stream rules can more readily target potential customers who may be interested in the star products in the near future. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
20186e8baf3f94aa23c08db0803db717
|
Snake-Based Segmentation of Teeth from Virtual Dental Casts
|
[
{
"docid": "b82adc75ccdf7bd437f969d226bc29a1",
"text": "Snakes, or active contours, are used extensively in computer vision and image processing applications, particularly to locate object boundaries. Problems associated with initialization and poor convergence to concave boundaries, however, have limited their utility. This paper develops a new external force for active contours, largely solving both problems. This external force, which we call gradient vector flow (GVF), is computed as a diffusion of the gradient vectors of a gray-level or binary edge map derived from the image. The resultant field has a large capture range and forces active contours into concave regions. Examples on simulated images and one real image are presented.",
"title": ""
},
{
"docid": "78fc46165449f94e75e70a2654abf518",
"text": "This paper presents a non-photorealistic rendering technique that automatically generates a line drawing from a photograph. We aim at extracting a set of coherent, smooth, and stylistic lines that effectively capture and convey important shapes in the image. We first develop a novel method for constructing a smooth direction field that preserves the flow of the salient image features. We then introduce the notion of flow-guided anisotropic filtering for detecting highly coherent lines while suppressing noise. Our method is simple and easy to implement. A variety of experimental results are presented to show the effectiveness of our method in producing self-contained, high-quality line illustrations.",
"title": ""
}
] |
[
{
"docid": "ef79fbd26ad0bdc951edcdef8bcffdbf",
"text": "Question answering (Q&A) sites, where communities of volunteers answer questions, may provide faster, cheaper, and better services than traditional institutions. However, like other Web 2.0 platforms, user-created content raises concerns about information quality. At the same time, Q&A sites may provide answers of different quality because they have different communities and technological platforms. This paper compares answer quality on four Q&A sites: Askville, WikiAnswers, Wikipedia Reference Desk, and Yahoo! Answers. Findings indicate that: 1) the use of similar collaborative processes on these sites results in a wide range of outcomes. Significant differences in answer accuracy, completeness, and verifiability were found; 2) answer multiplication does not always result in better information. Answer multiplication yields more complete and verifiable answers but does not result in higher accuracy levels; and 3) a Q&A site’s popularity does not correlate with its answer quality, on all three measures.",
"title": ""
},
{
"docid": "6d825778d5d2cb935aab35c60482a267",
"text": "As the workforce ages rapidly in industrialized countries, a phenomenon known as the graying of the workforce, new challenges arise for firms as they have to juggle this dramatic demographical change (Trend 1) in conjunction with the proliferation of increasingly modern information and communication technologies (ICTs) (Trend 2). Although these two important workplace trends are pervasive, their interdependencies have remained largely unexplored. While Information Systems (IS) research has established the pertinence of age to IS phenomena from an empirical perspective, it has tended to model the concept merely as a control variable with limited understanding of its conceptual nature. In fact, even the few IS studies that used the concept of age as a substantive variable have mostly relied on stereotypical accounts alone to justify their age-related hypotheses. Further, most of these studies have examined the role of age in the same phenomenon (i.e., initial adoption of ICTs), implying a marked lack of diversity with respect to the phenomena under investigation. Overall, IS research has yielded only limited insight into the role of age in phenomena involving ICTs. In this essay, we argue for the importance of studying agerelated impacts more carefully and across various IS phenomena, and we enable such research by providing a research agenda that IS scholars can use. In doing so, we hope that future research will further both our empirical and conceptual understanding of the managerial challenges arising from the interplay of a graying workforce and rapidly evolving ICTs. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "69102c54448921bfbc63c007cc927b8d",
"text": "Multi-goal reinforcement learning (MGRL) addresses tasks where the desired goal state can change for every trial. State-of-the-art algorithms model these problems such that the reward formulation depends on the goals, to associate them with high reward. This dependence introduces additional goal reward resampling steps in algorithms like Hindsight Experience Replay (HER) that reuse trials in which the agent fails to reach the goal by recomputing rewards as if reached states were psuedo-desired goals. We propose a reformulation of goal-conditioned value functions for MGRL that yields a similar algorithm, while removing the dependence of reward functions on the goal. Our formulation thus obviates the requirement of reward-recomputation that is needed by HER and its extensions. We also extend a closely related algorithm, Floyd-Warshall Reinforcement Learning, from tabular domains to deep neural networks for use as a baseline. Our results are competitive with HER while substantially improving sampling efficiency in terms of reward computation.",
"title": ""
},
{
"docid": "b33b10f3b6720b1bec3a030f236ac16c",
"text": "In this paper, we present a unified model for the automatic induction of word senses from text, and the subsequent disambiguation of particular word instances using the automatically extracted sense inventory. The induction step and the disambiguation step are based on the same principle: words and contexts are mapped to a limited number of topical dimensions in a latent semantic word space. The intuition is that a particular sense is associated with a particular topic, so that different senses can be discriminated through their association with particular topical dimensions; in a similar vein, a particular instance of a word can be disambiguated by determining its most important topical dimensions. The model is evaluated on the SEMEVAL-2010 word sense induction and disambiguation task, on which it reaches stateof-the-art results.",
"title": ""
},
{
"docid": "875548b7dc303bef8efa8284216e010d",
"text": "BACKGROUND\nGigantomastia is a breast disorder marked by exaggerated rapid growth of the breasts, generally bilaterally. Since this disorder is very rare and has been reported only in sparse case reports its etiology has yet to be fully established. Treatment is aimed at improving the clinical and psychological symptoms and reducing the treatment side effects; however, the best therapeutic option varies from case to case.\n\n\nCASE PRESENTATION\nThe present report described a case of gestational gigantomastia in a 30-year-old woman, gravida 2, parity 1, 17 week pregnant admitted to Pars Hospital, Tehran, Iran, on May 2014. The patient was admitted to hospital at week 17 of pregnancy, although her breasts initially had begun to enlarge from the first trimester. The patient developed hypercalcemia in her 32nd week of pregnancy. The present report followed this patient from diagnosis until the completion of treatment.\n\n\nCONCLUSION\nAlthough gestational gigantomastia is a rare condition, its timely prognosis and careful examination of some conditions like hyperprolactinemia and hypercalcemia is essential in successful management of this condition.",
"title": ""
},
{
"docid": "26f76aa41a64622ee8f0eaaed2aac529",
"text": "OBJECTIVE\nIn this study, we explored the impact of an occupational therapy wellness program on daily habits and routines through the perspectives of youth and their parents.\n\n\nMETHOD\nData were collected through semistructured interviews with children and their parents, the Pizzi Healthy Weight Management Assessment(©), and program activities.\n\n\nRESULTS\nThree themes emerged from the interviews: Program Impact, Lessons Learned, and Time as a Barrier to Health. The most common areas that both youth and parents wanted to change were time spent watching television and play, fun, and leisure time. Analysis of activity pie charts indicated that the youth considerably increased active time in their daily routines from Week 1 to Week 6 of the program.\n\n\nCONCLUSION\nAn occupational therapy program focused on health and wellness may help youth and their parents be more mindful of their daily activities and make health behavior changes.",
"title": ""
},
{
"docid": "274485dd39c0727c99fcc0a07d434b25",
"text": "Fetal mortality rate is considered a good measure of the quality of health care in a country or a medical facility. If we look at the current scenario, we find that we have focused more on child mortality rate than on fetus mortality. Even it is a same situation in developed country. Our aim is to provide technological solution to help decrease the fetal mortality rate. Also if we consider pregnant women, they have to come to hospital 2-3 times a week for their regular checkups. It becomes a problem for working women and women having diabetes or other disease. For these reasons it would be very helpful if they can do this by themselves at home. This will reduce the frequency of their visit to the hospital at same time cause no compromise in the wellbeing of both the mother and the child. The end to end system consists of wearable sensors, built into a fabric belt, that collects and sends vital signs of patients via bluetooth to smart mobile phones for further processing and made available to required personnel allowing efficient monitoring and alerting when attention is required in often challenging and chaotic scenarios.",
"title": ""
},
{
"docid": "d3b6fcc353382c947cfb0b4a73eda0ef",
"text": "Robust object tracking is a challenging task in computer vision. To better solve the partial occlusion issue, part-based methods are widely used in visual object trackers. However, due to the complicated online training and updating process, most of these part-based trackers cannot run in real-time. Correlation filters have been used in tracking tasks recently because of the high efficiency. However, the conventional correlation filter based trackers cannot deal with occlusion. Furthermore, most correlation filter based trackers fix the scale and rotation of the target which makes the trackers unreliable in long-term tracking tasks. In this paper, we propose a novel tracking method which track objects based on parts with multiple correlation filters. Our method can run in real-time. Additionally, the Bayesian inference framework and a structural constraint mask are adopted to enable our tracker to be robust to various appearance changes. Extensive experiments have been done to prove the effectiveness of our method.",
"title": ""
},
{
"docid": "d01692a4ee83531badacea6658b74d8f",
"text": "Question Answering (QA) research for factoid questions has recently achieved great success. Presently, QA systems developed for European, Middle Eastern and Asian languages are capable of providing answers with reasonable accuracy. However, Bengali being among themost spoken languages in theworld, no factoid question answering system is available for Bengali till date. This paper describes the first attempt on building a factoid question answering system for Bengali language. The challenges in developing a question answering system for Bengali have been discussed. Extraction and ranking of relevant sentences have also been proposed. Also extraction strategy of the ranked answers from the relevant sentences are suggested for Bengali question answering system.",
"title": ""
},
{
"docid": "32e2c444bfbe7c85ea600c2b91bf2370",
"text": "The consumption of caffeine (an adenosine receptor antagonist) correlates inversely with depression and memory deterioration, and adenosine A2A receptor (A2AR) antagonists emerge as candidate therapeutic targets because they control aberrant synaptic plasticity and afford neuroprotection. Therefore we tested the ability of A2AR to control the behavioral, electrophysiological, and neurochemical modifications caused by chronic unpredictable stress (CUS), which alters hippocampal circuits, dampens mood and memory performance, and enhances susceptibility to depression. CUS for 3 wk in adult mice induced anxiogenic and helpless-like behavior and decreased memory performance. These behavioral changes were accompanied by synaptic alterations, typified by a decrease in synaptic plasticity and a reduced density of synaptic proteins (synaptosomal-associated protein 25, syntaxin, and vesicular glutamate transporter type 1), together with an increased density of A2AR in glutamatergic terminals in the hippocampus. Except for anxiety, for which results were mixed, CUS-induced behavioral and synaptic alterations were prevented by (i) caffeine (1 g/L in the drinking water, starting 3 wk before and continued throughout CUS); (ii) the selective A2AR antagonist KW6002 (3 mg/kg, p.o.); (iii) global A2AR deletion; and (iv) selective A2AR deletion in forebrain neurons. Notably, A2AR blockade was not only prophylactic but also therapeutically efficacious, because a 3-wk treatment with the A2AR antagonist SCH58261 (0.1 mg/kg, i.p.) reversed the mood and synaptic dysfunction caused by CUS. These results herald a key role for synaptic A2AR in the control of chronic stress-induced modifications and suggest A2AR as candidate targets to alleviate the consequences of chronic stress on brain function.",
"title": ""
},
{
"docid": "45fb31643f4fd53b08c51818f284f2df",
"text": "This paper introduces a new type of fuzzy inference systems, denoted as dynamic evolving neural-fuzzy inference system (DENFIS), for adaptive online and offline learning, and their application for dynamic time series prediction. DENFIS evolve through incremental, hybrid (supervised/unsupervised), learning, and accommodate new input data, including new features, new classes, etc., through local element tuning. New fuzzy rules are created and updated during the operation of the system. At each time moment, the output of DENFIS is calculated through a fuzzy inference system based on -most activated fuzzy rules which are dynamically chosen from a fuzzy rule set. Two approaches are proposed: 1) dynamic creation of a first-order Takagi–Sugeno-type fuzzy rule set for a DENFIS online model; and 2) creation of a first-order Takagi–Sugeno-type fuzzy rule set, or an expanded high-order one, for a DENFIS offline model. A set of fuzzy rules can be inserted into DENFIS before or during its learning process. Fuzzy rules can also be extracted during or after the learning process. An evolving clustering method (ECM), which is employed in both online and offline DENFIS models, is also introduced. It is demonstrated that DENFIS can effectively learn complex temporal sequences in an adaptive way and outperform some well-known, existing models.",
"title": ""
},
{
"docid": "f69f8b58e926a8a4573dd650ee29f80b",
"text": "Zab is a crash-recovery atomic broadcast algorithm we designed for the ZooKeeper coordination service. ZooKeeper implements a primary-backup scheme in which a primary process executes clients operations and uses Zab to propagate the corresponding incremental state changes to backup processes1. Due the dependence of an incremental state change on the sequence of changes previously generated, Zab must guarantee that if it delivers a given state change, then all other changes it depends upon must be delivered first. Since primaries may crash, Zab must satisfy this requirement despite crashes of primaries.",
"title": ""
},
{
"docid": "d7102755d7934532e1de73815e282f27",
"text": "We present an application of Monte Carlo tree search (MCTS) for the game of Ms Pac-Man. Contrary to most applications of MCTS to date, Ms Pac-Man requires almost real-time decision making and does not have a natural end state. We approached the problem by performing Monte Carlo tree searches on a five player maxn tree representation of the game with limited tree search depth. We performed a number of experiments using both the MCTS game agents (for pacman and ghosts) and agents used in previous work (for ghosts). Performance-wise, our approach gets excellent scores, outperforming previous non-MCTS opponent approaches to the game by up to two orders of magnitude.",
"title": ""
},
{
"docid": "8010361144a7bd9fc336aba88f6e8683",
"text": "Moving garments and other cloth objects exhibit dynamic, complex wrinkles. Generating such wrinkles in a virtual environment currently requires either a time-consuming manual design process, or a computationally expensive simulation, often combined with accurate parameter-tuning requiring specialized animator skills. Our work presents an alternative approach for wrinkle generation which combines coarse cloth animation with a post-processing step for efficient generation of realistic-looking fine dynamic wrinkles. Our method uses the stretch tensor of the coarse animation output as a guide for wrinkle placement. To ensure temporal coherence, the placement mechanism uses a space-time approach allowing not only for smooth wrinkle appearance and disappearance, but also for wrinkle motion, splitting, and merging over time. Our method generates believable wrinkle geometry using specialized curve-based implicit deformers. The method is fully automatic and has a single user control parameter that enables the user to mimic different fabrics.",
"title": ""
},
{
"docid": "d1c34dda56e06cdae9d23c2e1cec41d2",
"text": "The detection of loop closure is of essential importance in visual simultaneous localization and mapping systems. It can reduce the accumulating drift of localization algorithms if the loops are checked correctly. Traditional loop closure detection approaches take advantage of Bag-of-Words model, which clusters the feature descriptors as words and measures the similarity between the observations in the word space. However, the features are usually designed artificially and may not be suitable for data from new-coming sensors. In this paper a novel loop closure detection approach is proposed that learns features from raw data using deep neural networks instead of common visual features. We discuss the details of the method of training neural networks. Experiments on an open dataset are also demonstrated to evaluate the performance of the proposed method. It can be seen that the neural network is feasible to solve this problem.",
"title": ""
},
{
"docid": "ee37a743edd1b87d600dcf2d0050ca18",
"text": "Recommender systems play a crucial role in mitigating the problem of information overload by suggesting users' personalized items or services. The vast majority of traditional recommender systems consider the recommendation procedure as a static process and make recommendations following a fixed strategy. In this paper, we propose a novel recommender system with the capability of continuously improving its strategies during the interactions with users. We model the sequential interactions between users and a recommender system as a Markov Decision Process (MDP) and leverage Reinforcement Learning (RL) to automatically learn the optimal strategies via recommending trial-and-error items and receiving reinforcements of these items from users' feedback. Users' feedback can be positive and negative and both types of feedback have great potentials to boost recommendations. However, the number of negative feedback is much larger than that of positive one; thus incorporating them simultaneously is challenging since positive feedback could be buried by negative one. In this paper, we develop a novel approach to incorporate them into the proposed deep recommender system (DEERS) framework. The experimental results based on real-world e-commerce data demonstrate the effectiveness of the proposed framework. Further experiments have been conducted to understand the importance of both positive and negative feedback in recommendations.",
"title": ""
},
{
"docid": "8d176debd26505d424dcbf8f5cfdb4d1",
"text": "We present a system for training deep neural networks for object detection using synthetic images. To handle the variability in real-world data, the system relies upon the technique of domain randomization, in which the parameters of the simulator-such as lighting, pose, object textures, etc.-are randomized in non-realistic ways to force the neural network to learn the essential features of the object of interest. We explore the importance of these parameters, showing that it is possible to produce a network with compelling performance using only non-artistically-generated synthetic data. With additional fine-tuning on real data, the network yields better performance than using real data alone. This result opens up the possibility of using inexpensive synthetic data for training neural networks while avoiding the need to collect large amounts of hand-annotated real-world data or to generate high-fidelity synthetic worlds-both of which remain bottlenecks for many applications. The approach is evaluated on bounding box detection of cars on the KITTI dataset.",
"title": ""
},
{
"docid": "04b32423acd23c03188ca8bf208a24fd",
"text": "We extend the notion of memristive systems to capacitive and inductive elements, namely, capacitors and inductors whose properties depend on the state and history of the system. All these elements typically show pinched hysteretic loops in the two constitutive variables that define them: current-voltage for the memristor, charge-voltage for the memcapacitor, and current-flux for the meminductor. We argue that these devices are common at the nanoscale, where the dynamical properties of electrons and ions are likely to depend on the history of the system, at least within certain time scales. These elements and their combination in circuits open up new functionalities in electronics and are likely to find applications in neuromorphic devices to simulate learning, adaptive, and spontaneous behavior.",
"title": ""
},
{
"docid": "4df6678c57115f6179587cff1cc5f228",
"text": "Depth maps captured by Kinect-like cameras are lack of depth data in some areas and suffer from heavy noise. These defects have negative impacts on practical applications. In order to enhance the depth maps, this paper proposes a new inpainting algorithm that extends the original fast marching method (FMM) to reconstruct unknown regions. The extended FMM incorporates an aligned color image as the guidance for inpainting. An edge-preserving guided filter is further applied for noise reduction. To validate our algorithm and compare it with other existing methods, we perform experiments on both the Kinect data and the Middlebury dataset which, respectively, provide qualitative and quantitative results. The results show that our method is efficient and superior to others.",
"title": ""
},
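The pipeline above (FMM-based hole filling followed by edge-preserving filtering) can be roughly approximated with off-the-shelf OpenCV pieces. Note the assumptions: plain `cv2.inpaint` implements Telea's FMM but does not use the color image to steer the marching order as the extended method does, and the guided filter requires the `opencv-contrib-python` package.

```python
import cv2
import numpy as np

def enhance_depth(depth_u16, color_bgr):
    """depth_u16: 16-bit depth map with zeros where depth is missing;
    color_bgr: aligned color image used as the filtering guide."""
    hole_mask = (depth_u16 == 0).astype(np.uint8) * 255
    # Rescale to 8 bits, since cv2.inpaint expects an 8-bit image.
    depth_8 = cv2.convertScaleAbs(depth_u16, alpha=255.0 / max(int(depth_u16.max()), 1))
    # Fast-marching-method inpainting of the missing regions (Telea's FMM).
    filled = cv2.inpaint(depth_8, hole_mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
    # Edge-preserving smoothing guided by the aligned color image.
    guide = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.ximgproc.guidedFilter(guide=guide, src=filled, radius=8, eps=100.0)
```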
{
"docid": "ec3246cab3c6d8720a5fee5351869b79",
"text": "We present the first study of Native Language Identification (NLI) applied to text written in languages other than English, using data from six languages. NLI is the task of predicting an author’s first language (L1) using only their writings in a second language (L2), with applications in Second Language Acquisition and forensic linguistics. Most research to date has focused on English but there is a need to apply NLI to other languages, not only to gauge its applicability but also to aid in teaching research for other emerging languages. With this goal, we identify six typologically very different sources of non-English L2 data and conduct six experiments using a set of commonly used features. Our first two experiments evaluate our features and corpora, showing that the features perform well and at similar rates across languages. The third experiment compares non-native and native control data, showing that they can be discerned with 95% accuracy. Our fourth experiment provides a cross-linguistic assessment of how the degree of syntactic data encoded in part-of-speech tags affects their efficiency as classification features, finding that most differences between L1 groups lie in the ordering of the most basic word categories. We also tackle two questions that have not previously been addressed for NLI. Other work in NLI has shown that ensembles of classifiers over feature types work well and in our final exper2 S. Malmasi and M. Dras iment we use such an oracle classifier to derive an upper limit for classification accuracy with our feature set. We also present an analysis examining feature diversity, aiming to estimate the degree of overlap and complementarity between our chosen features employing an association measure for binary data. Finally, we conclude with a general discussion and outline directions for future work.",
"title": ""
}
] |
scidocsrr
|
54706ff3e9726dc1bfa45766d3892b23
|
Anatomical evaluation of the modified posterolateral approach for posterolateral tibial plateau fracture
|
[
{
"docid": "ec69b95261fc19183a43c0e102f39016",
"text": "The selection of a surgical approach for the treatment of tibia plateau fractures is an important decision. Approximately 7% of all tibia plateau fractures affect the posterolateral corner. Displaced posterolateral tibia plateau fractures require anatomic articular reduction and buttress plate fixation on the posterior aspect. These aims are difficult to reach through a lateral or anterolateral approach. The standard posterolateral approach with fibula osteotomy and release of the posterolateral corner is a traumatic procedure, which includes the risk of fragment denudation. Isolated posterior approaches do not allow sufficient visual control of fracture reduction, especially if the fracture is complex. Therefore, the aim of this work was to present a surgical approach for posterolateral tibial plateau fractures that both protects the soft tissue and allows for good visual control of fracture reduction. The approach involves a lateral arthrotomy for visualizing the joint surface and a posterolateral approach for the fracture reduction and plate fixation, which are both achieved through one posterolateral skin incision. Using this approach, we achieved reduction of the articular surface and stable fixation in six of seven patients at the final follow-up visit. No complications and no loss of reduction were observed. Additionally, the new posterolateral approach permits direct visual exposure and facilitates the application of a buttress plate. Our approach does not require fibular osteotomy, and fragments of the posterolateral corner do not have to be detached from the soft tissue network.",
"title": ""
},
{
"docid": "ad8762ae878b7e731b11ab6d67f9867d",
"text": "We describe a posterolateral transfibular neck approach to the proximal tibia. This approach was developed as an alternative to the anterolateral approach to the tibial plateau for the treatment of two fracture subtypes: depressed and split depressed fractures in which the comminution and depression are located in the posterior half of the lateral tibial condyle. These fractures have proved particularly difficult to reduce and adequately internally fix through an anterior or anterolateral approach. The approach described in this article exposes the posterolateral aspect of the tibial plateau between the posterior margin of the iliotibial band and the posterior cruciate ligament. The approach allows lateral buttressing of the lateral tibial plateau and may be combined with a simultaneous posteromedial and/or anteromedial approach to the tibial plateau. Critically, the proximal tibial soft tissue envelope and its blood supply are preserved. To date, we have used this approach either alone or in combination with a posteromedial approach for the successful reduction of tibial plateau fractures in eight patients. No complications related to this approach were documented, including no symptoms related to the common peroneal nerve, and all fractures and fibular neck osteotomies healed uneventfully.",
"title": ""
}
] |
[
{
"docid": "8954672b2e2b6351abfde0747fd5d61c",
"text": "Sentiment Analysis (SA), an application of Natural Language processing (NLP), has been witnessed a blooming interest over the past decade. It is also known as opinion mining, mood extraction and emotion analysis. The basic in opinion mining is classifying the polarity of text in terms of positive (good), negative (bad) or neutral (surprise). Mood Extraction automates the decision making performed by human. It is the important aspect for capturing public opinion about product preferences, marketing campaigns, political movements, social events and company strategies. In addition to sentiment analysis for English and other European languages, this task is applied on various Indian languages like Bengali, Hindi, Telugu and Malayalam. This paper describes the survey on main approaches for performing sentiment extraction.",
"title": ""
},
{
"docid": "670d4860fc3172b7ffa429268462b64d",
"text": "This article describes the benefits and risks of providing RPDs. It emphasises the importance of co-operation between the dental team and patient to ensure that the balance of this 'equation' is in the patient's favour.",
"title": ""
},
{
"docid": "496a7b9155ad336e178a62545b7eb0b7",
"text": "A B S T R AC T Existing approaches to organizational discourse, which we label as ‘managerialist’, ‘interpretive’ and ‘critical’, either privilege agency at the expense of structure or the other way around. This tension reflects that between approaches to discourse in the social sciences more generally but is sharper in the organizational context, where discourse is typically temporally and contextually specific and imbued with attributions of instrumental intent. As the basis for a more sophisticated understanding of organizational discourse, we draw on the work of Giddens to develop a structurational conceptualization in which discourse is viewed as a duality of communicative actions and structural properties, recursively linked through the modality of actors’ interpretive schemes. We conclude by exploring some of the theoretical implications of this conceptualization and its consequences for the methodology of organizational discourse analysis.",
"title": ""
},
{
"docid": "af9e3268901a46967da226537eba3cb6",
"text": "Magnetic Resonance Imaging (MRI) is a non-invasive diagnostic tool very frequently used for brain 8 imaging. The classification of MRI images of normal and pathological brain conditions pose a challenge from 9 technological and clinical point of view, since MR imaging focuses on soft tissue anatomy and generates a large 10 information set and these can act as a mirror reflecting the conditions of the brain. A new approach by 11 integrating wavelet entropy based spider web plots and probabilistic neural network is proposed for the 12 classification of MRI brain images. The two step method for classification uses (1) wavelet entropy based spider 13 web plots for the feature extraction and (2) probabilistic neural network for the classification. The spider web 14 plot is a geometric construction drawn using the entropy of the wavelet approximation components and the areas 15 calculated are used as feature set for classification. Probabilistic neural network provides a general solution to 16 the pattern classification problems and the classification accuracy is found to be 100%. 17 Keywords-Magnetic Resonance Imaging (MRI), Wavelet Transformation, Entropy, Spider Web Plots, 18 Probabilistic Neural Network 19",
"title": ""
},
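A sketch of the feature-construction step described above, assuming the PyWavelets package: entropies of successive wavelet approximation sub-bands are treated as radii on equally spaced axes of a spider web (radar) plot, and the enclosed polygon area serves as a feature. The probabilistic neural network classifier is omitted, and the Shannon-entropy estimate below is an assumption since the exact entropy definition is not given in the passage.

```python
import numpy as np
import pywt

def shannon_entropy(values, bins=64):
    hist, _ = np.histogram(values, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def spider_web_area(radii):
    """Area of the polygon with vertices at `radii` on equally spaced radial axes."""
    n = len(radii)
    angle = 2 * np.pi / n
    return float(0.5 * np.sin(angle) * sum(radii[i] * radii[(i + 1) % n] for i in range(n)))

def wavelet_entropy_features(image, wavelet="db4", levels=3):
    entropies = []
    current = np.asarray(image, dtype=float)
    for _ in range(levels):
        current, _ = pywt.dwt2(current, wavelet)  # keep the approximation (LL) sub-band
        entropies.append(shannon_entropy(current.ravel()))
    return entropies, spider_web_area(entropies)

if __name__ == "__main__":
    img = np.random.default_rng(0).normal(size=(128, 128))
    print(wavelet_entropy_features(img))
```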
{
"docid": "8e3cc3937f91c12bb5d515f781928f8b",
"text": "As the size of data set in cloud increases rapidly, how to process large amount of data efficiently has become a critical issue. MapReduce provides a framework for large data processing and is shown to be scalable and fault-tolerant on commondity machines. However, it has higher learning curve than SQL-like language and the codes are hard to maintain and reuse. On the other hand, traditional SQL-based data processing is familiar to user but is limited in scalability. In this paper, we propose a hybrid approach to fill the gap between SQL-based and MapReduce data processing. We develop a data management system for cloud, named SQLMR. SQLMR complies SQL-like queries to a sequence of MapReduce jobs. Existing SQL-based applications are compatible seamlessly with SQLMR and users can manage Tera to PataByte scale of data with SQL-like queries instead of writing MapReduce codes. We also devise a number of optimization techniques to improve the performance of SQLMR. The experiment results demonstrate both performance and scalability advantage of SQLMR compared to MySQL and two NoSQL data processing systems, Hive and HadoopDB.",
"title": ""
},
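The central idea above, compiling SQL-like queries into MapReduce jobs, can be illustrated for the simplest case of a GROUP BY aggregation. This is a toy in-memory compiler and runner, not SQLMR's actual translation or any of its optimizations:

```python
from collections import defaultdict

def compile_groupby_count(group_col):
    """Return (map_fn, reduce_fn) implementing:
    SELECT <group_col>, COUNT(*) FROM rows GROUP BY <group_col>"""
    def map_fn(row):
        yield row[group_col], 1
    def reduce_fn(key, values):
        return key, sum(values)
    return map_fn, reduce_fn

def run_mapreduce(rows, map_fn, reduce_fn):
    shuffled = defaultdict(list)
    for row in rows:                          # map phase
        for key, value in map_fn(row):
            shuffled[key].append(value)       # shuffle/group by key
    return [reduce_fn(k, v) for k, v in shuffled.items()]  # reduce phase

rows = [{"dept": "cs"}, {"dept": "ee"}, {"dept": "cs"}]
map_fn, reduce_fn = compile_groupby_count("dept")
print(run_mapreduce(rows, map_fn, reduce_fn))  # [('cs', 2), ('ee', 1)]
```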
{
"docid": "8c174dbb8468b1ce6f4be3676d314719",
"text": "An estimated 24 million people worldwide have dementia, the majority of whom are thought to have Alzheimer's disease. Thus, Alzheimer's disease represents a major public health concern and has been identified as a research priority. Although there are licensed treatments that can alleviate symptoms of Alzheimer's disease, there is a pressing need to improve our understanding of pathogenesis to enable development of disease-modifying treatments. Methods for improving diagnosis are also moving forward, but a better consensus is needed for development of a panel of biological and neuroimaging biomarkers that support clinical diagnosis. There is now strong evidence of potential risk and protective factors for Alzheimer's disease, dementia, and cognitive decline, but further work is needed to understand these better and to establish whether interventions can substantially lower these risks. In this Seminar, we provide an overview of recent evidence regarding the epidemiology, pathogenesis, diagnosis, and treatment of Alzheimer's disease, and discuss potential ways to reduce the risk of developing the disease.",
"title": ""
},
{
"docid": "d4fb67823dd774e3efc25de61b8e503c",
"text": "Light scattering from hair is normally simulated in computer graphics using Kajiya and Kay’s classic phenomenological model. We have made new measurements that exhibit visually significant effects not predicted by Kajiya and Kay’s model. Our measurements go beyond previous hair measurements by examining out-of-plane scattering, and together with this previous work they show a multiple specular highlight and variation in scattering with rotation about the fiber axis. We explain the sources of these effects using a model of a hair fiber as a transparent elliptical cylinder with an absorbing interior and a surface covered with tilted scales. Based on an analytical scattering function for a circular cylinder, we propose a practical shading model for hair that qualitatively matches the scattering behavior shown in the measurements. In a comparison between a photograph and rendered images, we demonstrate the new model’s ability to match the appearance of real hair. CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Shading",
"title": ""
},
{
"docid": "0e0f78b8839f4724153b8931342824d2",
"text": "The acquisition, processing, and interpretation of thermal images from unmanned aerial vehicles (UAVs) is becoming a useful source of information for agronomic applications because of the higher temporal and spatial resolution of these products compared with those obtained from satellites. However, due to the low load capacity of the UAV they need to mount light, uncooled thermal cameras, where the microbolometer is not stabilized to a constant temperature. This makes the camera precision low for many applications. Additionally, the low contrast of the thermal images makes the photogrammetry process inaccurate, which result in large errors in the generation of orthoimages. In this research, we propose the use of new calibration algorithms, based on neural networks, which consider the sensor temperature and the digital response of the microbolometer as input data. In addition, we evaluate the use of the Wallis filter for improving the quality of the photogrammetry process using structure from motion software. With the proposed calibration algorithm, the measurement accuracy increased from 3.55 °C with the original camera configuration to 1.37 °C. The implementation of the Wallis filter increases the number of tie-point from 58,000 to 110,000 and decreases the total positing error from 7.1 m to 1.3 m.",
"title": ""
},
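The calibration idea above maps (microbolometer temperature, raw digital count) pairs to scene temperature with a learned regressor. A minimal scikit-learn sketch on synthetic data; the network architecture, the value ranges, and the synthetic ground-truth relation are assumptions, not those of the study:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 2000
sensor_temp = rng.uniform(20.0, 45.0, n)   # microbolometer (sensor) temperature, deg C
counts = rng.uniform(7000.0, 9000.0, n)    # raw digital response of the pixel
# Synthetic ground truth: scene temperature depends on both inputs.
scene_temp = (0.01 * (counts - 8000.0) + 0.3 * (sensor_temp - 30.0) + 25.0
              + rng.normal(0.0, 0.2, n))

X = np.column_stack([sensor_temp, counts])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                                   random_state=0))
model.fit(X, scene_temp)
errors = np.abs(model.predict(X) - scene_temp)
print(f"mean absolute error: {errors.mean():.2f} deg C")
```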
{
"docid": "4f6f225f978bbf00c20f80538dc12aad",
"text": "A smart building is created when it is engineered, delivered and operated smart. The Internet of Things (IoT) is advancing a new breed of smart buildings enables operational systems that deliver more accurate and useful information for improving operations and providing the best experiences for tenants. Big Data Analytics framework analyze building data to uncover new insight capable of driving real value and greater performance. Internet of Things technologies enhance the situational awareness or “smartness” of service providers and consumers alike. There is a need for an integrated IoT Big Data Analytics framework to fill the research gap in the Big Data Analytics domain. This paper also presents a novel approach for mobile phone centric observation applied to indoor localization for smart buildings. The applicability of the framework of this paper is demonstrated with the help of a scenario involving the analysis of real-time smart building data for automatically managing the oxygen level, luminosity and smoke/hazardous gases in different parts of the smart building. Lighting control in smart buildings and homes can be automated by having computer controlled lights and blinds along with illumination sensors that are distributed in the building. This paper gives an overview of an approach that algorithmically sets up the control system that can automate any building without custom programming. The resulting system controls blinds to ensure even lighting and also adds artificial illumination to ensure light coverage remains adequate at all times of the day, adjusting for weather and seasons. The key contribution of this paper is the complex integration of Big Data Analytics and IoT for addressing the large volume and velocity challenge of real-time data in the smart building domain.",
"title": ""
},
{
"docid": "97f89b905d51d2965c60bb4bbed08b4c",
"text": "This communication deals with simultaneous generation of a contoured and a pencil beam from a single shaped reflector with two feeds. A novel concept of generating a high gain pencil beam from a shaped reflector is presented using focal plane conjugate field matching method. The contoured beam is generated from the shaped reflector by introducing deformations in a parabolic reflector surface. This communication proposes a simple method to counteract the effects of shaping and generate an additional high gain pencil beam from the shaped reflector. This is achieved by using a single feed which is axially and laterally displaced from the focal point. The proposed method is successfully applied to generate an Indian main land coverage contoured beam and a high gain pencil beam over Andaman Islands. The contoured beam with peak gain of 33.05 dBi and the pencil beam with 43.8 dBi peak gain is generated using the single shaped reflector and two feeds. This technique saves mass and volume otherwise would have required for feed cluster to compensate for the surface distortion.",
"title": ""
},
{
"docid": "6177d208d27ecc9dee54b988d1c2bc2d",
"text": "Animal learning is driven not only by biological needs but also by intrinsic motivations (IMs) serving the acquisition of knowledge. Computational modeling involving IMs is indicating that learning of motor skills requires that autonomous agents self-generate tasks/goals and use them to acquire skills solving/leading to them. We propose a neural architecture driven by IMs that is able to self-generate goals on the basis of the environmental changes caused by the agent’s actions. The main novelties of the model are that it is focused on the acquisition of attention (looking) skills and that its architecture and functioning are broadly inspired by the functioning of relevant primate brain areas (superior colliculus, basal ganglia, and frontal cortex). These areas, involved in IM-based behavior learning, play important functions for reflexive and voluntary attention. The model is tested within a simple simulated pan-tilt camera robot engaged in learning to switch on different lights by looking at them, and is able to self-generate visual goals and learn attention skills under IM guidance. The model represents a novel hypothesis on how primates and robots might autonomously learn attention skills and has a potential to account for developmental psychology experiments and the underlying brain mechanisms.",
"title": ""
},
{
"docid": "0bd981ea6d38817b560383f48fdfb729",
"text": "Lightweight wheelchairs are characterized by their low cost and limited range of adjustment. Our study evaluated three different folding lightweight wheelchair models using the American National Standards Institute/Rehabilitation Engineering Society of North America (ANSI/RESNA) standards to see whether quality had improved since the previous data were reported. On the basis of reports of increasing breakdown rates in the community, we hypothesized that the quality of these wheelchairs had declined. Seven of the nine wheelchairs tested failed to pass the multidrum test durability requirements. An average of 194,502 +/- 172,668 equivalent cycles was completed, which is similar to the previous test results and far below the 400,000 minimum required to pass the ANSI/RESNA requirements. This was also significantly worse than the test results for aluminum ultralight folding wheelchairs. Overall, our results uncovered some disturbing issues with these wheelchairs and suggest that manufacturers should put more effort into this category to improve quality. To improve the durability of lightweight wheelchairs, we suggested that stronger regulations be developed that require wheelchairs to be tested by independent and certified test laboratories. We also proposed a wheelchair rating system based on the National Highway Transportation Safety Administration vehicle crash ratings to assist clinicians and end users when comparing the durability of different wheelchairs.",
"title": ""
},
{
"docid": "7cc20934720912ad1c056dc9afd97e18",
"text": "Hidden Markov models (HMM’s) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as are found in sign language. We describe two experiments that. demonstrate a real-time HMM-based system for recognizing sentence level American Sign Language (ASL) without explicitly modeling the fingers. The first experiment tracks hands wearing colored gloves and attains a word accuracy of 99%. The second experiment tracks hands without gloves and attains a word accuracy of 92%. Both experiments have a 40 word lexicon.",
"title": ""
},
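Sentence-level recognition as described above chains per-word HMMs; the per-word building block can be sketched with the `hmmlearn` package. The two-word vocabulary and the synthetic 2-D feature sequences below stand in for the tracked hand features of the real system:

```python
import numpy as np
from hmmlearn import hmm

def train_word_model(sequences, n_states=4):
    """Fit one Gaussian HMM per vocabulary word from its example sequences."""
    X = np.concatenate(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

def classify(sequence, word_models):
    # Pick the word whose HMM assigns the highest log-likelihood to the sequence.
    return max(word_models, key=lambda word: word_models[word].score(sequence))

rng = np.random.default_rng(0)
train = {"hello": [rng.normal(0.0, 1.0, (20, 2)) for _ in range(10)],
         "thanks": [rng.normal(3.0, 1.0, (20, 2)) for _ in range(10)]}
models = {word: train_word_model(seqs) for word, seqs in train.items()}
print(classify(rng.normal(3.0, 1.0, (20, 2)), models))  # expected: thanks
```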
{
"docid": "73723bf217557d8269cb0c23140e2ec9",
"text": "The uniform one-dimensional fragment of first-order logic, U1, is a recently introduced formalism that extends two-variable logic in a natural way to contexts with relations of all arities. We survey properties of U1 and investigate its relationship to description logics designed to accommodate higher arity relations, with particular attention given to DLRreg . We also define a description logic version of a variant of U1 and prove a range of new results concerning the expressivity of U1 and related logics.",
"title": ""
},
{
"docid": "1450c2025de3ea31271c9d6c56be016f",
"text": "The vast increase in clinical data has the potential to bring about large improvements in clinical quality and other aspects of healthcare delivery. However, such benefits do not come without cost. The analysis of such large datasets, particularly where the data may have to be merged from several sources and may be noisy and incomplete, is a challenging task. Furthermore, the introduction of clinical changes is a cyclical task, meaning that the processes under examination operate in an environment that is not static. We suggest that traditional methods of analysis are unsuitable for the task, and identify complexity theory and machine learning as areas that have the potential to facilitate the examination of clinical quality. By its nature the field of complex adaptive systems deals with environments that change because of the interactions that have occurred in the past. We draw parallels between health informatics and bioinformatics, which has already started to successfully use machine learning methods.",
"title": ""
},
{
"docid": "7687f85746acf4e3cd24d512e5efd31e",
"text": "Thyroid eye disease is a multifactorial autoimmune disease with a spectrum of signs and symptoms. Oftentimes, the diagnosis of thyroid eye disease is straightforward, based upon history and physical examination. The purpose of this review is to assist the eye-care practitioner in staging the severity of thyroid eye disease (mild, moderate-to-severe and sight-threatening) and correlating available treatment modalities. Eye-care practitioners play an important role in the multidisciplinary team by assessing functional vision while also managing ocular health.",
"title": ""
},
{
"docid": "265421a07efc8ab26a6766f90bf53245",
"text": "Recently, there has been much excitement in the research community over using social networks to mitigate multiple identity, or Sybil, attacks. A number of schemes have been proposed, but they differ greatly in the algorithms they use and in the networks upon which they are evaluated. As a result, the research community lacks a clear understanding of how these schemes compare against each other, how well they would work on real-world social networks with different structural properties, or whether there exist other (potentially better) ways of Sybil defense.\n In this paper, we show that, despite their considerable differences, existing Sybil defense schemes work by detecting local communities (i.e., clusters of nodes more tightly knit than the rest of the graph) around a trusted node. Our finding has important implications for both existing and future designs of Sybil defense schemes. First, we show that there is an opportunity to leverage the substantial amount of prior work on general community detection algorithms in order to defend against Sybils. Second, our analysis reveals the fundamental limits of current social network-based Sybil defenses: We demonstrate that networks with well-defined community structure are inherently more vulnerable to Sybil attacks, and that, in such networks, Sybils can carefully target their links in order make their attacks more effective.",
"title": ""
},
{
"docid": "c70466f8b1e70fcdd4b7fe3f2cb772b2",
"text": "We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design. Tor adds perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than a dozen hosts. We close with a list of open problems in anonymous communication.",
"title": ""
},
{
"docid": "78f272578191996200259e10d209fe19",
"text": "The information in government web sites, which are widely adopted in many countries, must be accessible for all people, easy to use, accurate and secure. The main objective of this study is to investigate the usability, accessibility and security aspects of e-government web sites in Kyrgyz Republic. The analysis of web government pages covered 55 sites listed in the State Information Resources of the Kyrgyz Republic and five government web sites which were not included in the list. Analysis was conducted using several automatic evaluation tools. Results suggested that government web sites in Kyrgyz Republic have a usability error rate of 46.3 % and accessibility error rate of 69.38 %. The study also revealed security vulnerabilities in these web sites. Although the “Concept of Creation and Development of Information Network of the Kyrgyz Republic” was launched at September 23, 1994, government web sites in the Kyrgyz Republic have not been reviewed and still need great efforts to improve accessibility, usability and security.",
"title": ""
},
{
"docid": "531a7417bd66ff0fdd7fb35c7d6d8559",
"text": "G. R. White University of Sussex, Brighton, UK Abstract In order to design new methodologies for evaluating the user experience of video games, it is imperative to initially understand two core issues. Firstly, how are video games developed at present, including components such as processes, timescales and staff roles, and secondly, how do studios design and evaluate the user experience. This chapter will discuss the video game development process and the practices that studios currently use to achieve the best possible user experience. It will present four case studies from game developers Disney Interactive (Black Rock Studio), Relentless, Zoe Mode, and HandCircus, each detailing their game development process and also how this integrates with the user experience evaluation. The case studies focus on different game genres, platforms, and target user groups, ensuring that this chapter represents a balanced view of current practices in evaluating user experience during the game development process.",
"title": ""
}
] |
scidocsrr
|
ba888cd26ac294f48876e5cf28116136
|
Adaptive Grids for Clustering Massive Data Sets
|
[
{
"docid": "b7a4eec912eb32b3b50f1b19822c44a1",
"text": "Mining numerical data is a relatively difficult problem in data mining. Clustering is one of the techniques. We consider a database with numerical attributes, in which each transaction is viewed as a multi-dimensional vector. By studying the clusters formed by these vectors, we can discover certain behaviors hidden in the data. Traditional clustering algorithms find clusters in the full space of the data sets. This results in high dimensional clusters, which are poorly comprehensible to human. One important task in this setting is the ability to discover clusters embedded in the subspaces of a high-dimensional data set. This problem is known as subspace clustering. We follow the basic assumptions of previous work CLIQUE. It is found that the number of subspaces with clustering is very large, and a criterion called the coverage is proposed in CLIQUE for the pruning. In addition to coverage, we identify new useful criteria for this problem and propose an entropybased algorithm called ENCLUS to handle the criteria. Our major contributions are: (1) identify new meaningful criteria of high density and correlation of dimensions for goodness of clustering in subspaces, (2) introduce the use of entropy and provide evidence to support its use, (3) make use of two closure properties based on entropy to prune away uninteresting subspaces efficiently, (4) propose a mechanism to mine non-minimally correlated subspaces which are of interest because of strong clustering, (5) experiments are carried out to show the effectiveness of the proposed method.",
"title": ""
},
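The entropy criterion described above scores a candidate subspace by discretizing it into grid cells and computing the Shannon entropy of the cell-occupancy distribution; lower entropy indicates stronger clustering. A small sketch of that scoring step (the coverage criterion, the closure-based pruning, and the rest of ENCLUS are omitted, and the bin count is an arbitrary choice):

```python
import numpy as np
from itertools import combinations

def subspace_entropy(data, dims, bins=10):
    """Shannon entropy of the grid-cell histogram of `data` restricted to `dims`."""
    hist, _ = np.histogramdd(data[:, dims], bins=bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def rank_subspaces(data, k=2, bins=10):
    # Lower entropy means a denser, more strongly clustered subspace.
    scores = {dims: subspace_entropy(data, list(dims), bins)
              for dims in combinations(range(data.shape[1]), k)}
    return sorted(scores.items(), key=lambda item: item[1])

rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 4))
X[:, 1] = 0.9 * X[:, 0] + 0.05 * rng.normal(size=1000)  # dims (0, 1) are correlated
print(rank_subspaces(X)[:3])  # subspace (0, 1) should come first
```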
{
"docid": "1c5f53fe8d663047a3a8240742ba47e4",
"text": "Spatial data mining is the discovery of interesting relationships and characteristics that may exist implicitly in spatial databases. In this paper, we explore whether clustering methods have a role to play in spatial data mining. To this end, we develop a new clustering method called CLAHANS which is based on randomized search. We also develop two spatial data mining algorithms that use CLAHANS. Our analysis and experiments show that with the assistance of CLAHANS, these two algorithms are very effective and can lead to discoveries that are difficult to find with current spatial data mining algorithms. Furthermore, experiments conducted to compare the performance of CLAHANS with that of existing clustering methods show that CLAHANS is the most efficient.",
"title": ""
}
] |
[
{
"docid": "546f96600d90107ed8262ad04274b012",
"text": "Large-scale labeled training datasets have enabled deep neural networks to excel on a wide range of benchmark vision tasks. However, in many applications it is prohibitively expensive or timeconsuming to obtain large quantities of labeled data. To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled target domain. Unfortunately, direct transfer across domains often performs poorly due to domain shift and dataset bias. Domain adaptation is the machine learning paradigm that aims to learn a model from a source domain that can perform well on a different (but related) target domain. In this paper, we summarize and compare the latest unsupervised domain adaptation methods in computer vision applications. We classify the non-deep approaches into sample re-weighting and intermediate subspace transformation categories, while the deep strategy includes discrepancy-based methods, adversarial generative models, adversarial discriminative models and reconstruction-based methods. We also discuss some potential directions.",
"title": ""
},
{
"docid": "7086861716db2b7d0841ad85199683ce",
"text": "AIM\nAlthough children spend most of their time involved in activities related to school, few studies have focused on the association between school social environment and oral health. This cross-sectional study assessed individual and school-related social environment correlates of dental caries in Brazilian schoolchildren aged 8-12 years.\n\n\nMETHODS\nA sample of children from 20 private and public schools (n=1,211) was selected. Socio-economic data were collected from parents, and data regarding children characteristics were collected from children using a questionnaire. Dental examinations were performed to assess the presence of dental plaque: dental caries experience (DMFT≥1) and dental caries severity (mean dmf-t/DMF-T). The social school environment was assessed by a questionnaire administered to school coordinators. Multilevel Poisson regression was used to investigate the association between school social environment and dental caries prevalence and experience.\n\n\nRESULTS\nThe dental caries prevalence was 32.4% (95% confidence interval: 29.7-35.2) and the mean dmf-t/DMF-T was 1.84 (standard deviation: 2.2). Multilevel models showed that the mean dmf-t/DMF-T and DMFT≥1 were associated with lower maternal schooling and higher levels of dental plaque. For contextual variables, schools offering after-hours sports activities were associated with a lower prevalence of dental caries and a lower mean of dmf-t/DMF-T, while the occurrence of violence and theft episodes was positively associated with dental caries.\n\n\nCONCLUSIONS\nThe school social environment has an influence on dental caries in children. The results suggest that strategies focused on the promotion of healthier environments should be stimulated to reduce inequalities in dental caries.",
"title": ""
},
{
"docid": "688bacdee25152e1de6bcc5005b75d9a",
"text": "Data Mining provides powerful techniques for various fields including education. The research in the educational field is rapidly increasing due to the massive amount of students’ data which can be used to discover valuable pattern pertaining students’ learning behaviour. This paper proposes a framework for predicting students’ academic performance of first year bachelor students in Computer Science course. The data were collected from 8 year period intakes from July 2006/2007 until July 2013/2014 that contains the students’ demographics, previous academic records, and family background information. Decision Tree, Naïve Bayes, and Rule Based classification techniques are applied to the students’ data in order to produce the best students’ academic performance prediction model. The experiment result shows the Rule Based is a best model among the other techniques by receiving the highest accuracy value of 71.3%. The extracted knowledge from prediction model will be used to identify and profile the student to determine the students’ level of success in the first semester.",
"title": ""
},
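A compact scikit-learn sketch of the kind of comparison described above, using synthetic records in place of the real intake data. Only decision tree and naive Bayes baselines are shown, since a rule-based learner comparable to the one in the study is not part of scikit-learn; the feature set and labels are invented for illustration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(1.0, 4.0, n),   # previous academic record (e.g. entry CGPA)
    rng.integers(0, 2, n),      # demographic attribute, encoded
    rng.integers(1, 6, n),      # family background band, encoded
])
y = (X[:, 0] + 0.2 * rng.normal(size=n) > 2.5).astype(int)  # first-semester success

for name, clf in [("decision tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
                  ("naive Bayes", GaussianNB())]:
    accuracy = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {accuracy:.3f}")
```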
{
"docid": "e591165d8e141970b8263007b076dee1",
"text": "Treating a human mind like a machine is an essential component of dehumanization, whereas attributing a humanlike mind to a machine is an essential component of anthropomorphism. Here we tested how a cue closely connected to a person's actual mental experience-a humanlike voice-affects the likelihood of mistaking a person for a machine, or a machine for a person. We predicted that paralinguistic cues in speech are particularly likely to convey the presence of a humanlike mind, such that removing voice from communication (leaving only text) would increase the likelihood of mistaking the text's creator for a machine. Conversely, adding voice to a computer-generated script (resulting in speech) would increase the likelihood of mistaking the text's creator for a human. Four experiments confirmed these hypotheses, demonstrating that people are more likely to infer a human (vs. computer) creator when they hear a voice expressing thoughts than when they read the same thoughts in text. Adding human visual cues to text (i.e., seeing a person perform a script in a subtitled video clip), did not increase the likelihood of inferring a human creator compared with only reading text, suggesting that defining features of personhood may be conveyed more clearly in speech (Experiments 1 and 2). Removing the naturalistic paralinguistic cues that convey humanlike capacity for thinking and feeling, such as varied pace and intonation, eliminates the humanizing effect of speech (Experiment 4). We discuss implications for dehumanizing others through text-based media, and for anthropomorphizing machines through speech-based media. (PsycINFO Database Record",
"title": ""
},
{
"docid": "49fddbf79a836e2ae9f297b32fb3681d",
"text": "Imitation learning has traditionally been applied to learn a single task from demonstrations thereof. The requirement of structured and isolated demonstrations limits the scalability of imitation learning approaches as they are difficult to apply to real-world scenarios, where robots have to be able to execute a multitude of tasks. In this paper, we propose a multi-modal imitation learning framework that is able to segment and imitate skills from unlabelled and unstructured demonstrations by learning skill segmentation and imitation learning jointly. The extensive simulation results indicate that our method can efficiently separate the demonstrations into individual skills and learn to imitate them using a single multi-modal policy. The video of our experiments is available at http://sites.google.com/view/nips17intentiongan.",
"title": ""
},
{
"docid": "87c56a28428132d4023c312ce216fd04",
"text": "The era of big data has resulted in the development and applications of technologies and methods aimed at effectively using massive amounts of data to support decision-making and knowledge discovery activities. In this paper, the five Vs of big data, volume, velocity, variety, veracity, and value, are reviewed, as well as new technologies, including NoSQL databases that have emerged to accommodate the needs of big data initiatives. The role of conceptual modeling for big data is then analyzed and suggestions made for effective conceptual modeling efforts with respect to big data. & 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "904d175ba1f94a980ceb88f9941f0a55",
"text": "Currently, wind turbines can incur unforeseen damage up to five times a year. Particularly during bad weather, wind turbines located offshore are difficult to access for visual inspection. As a result, long periods of turbine standstill can result in great economic inefficiencies that undermine the long-term viability of the technology. Hence, the load carrying structure should be monitored continuously in order to minimize the overall cost of maintenance and repair. The end result are turbines defined by extend lifetimes and greater economic viability. For that purpose, an automated monitoring system for early damage detection and damage localisation is currently under development for wind turbines. Most of the techniques existing for global damage detection of structures work by using frequency domain methods. Frequency shifts and mode shape changes are usually used for damage detection of large structures (e.g. bridges, large buildings and towers) [1]. Damage can cause a change in the distribution of structural stiffness which has to be detected by measuring dynamic responses using natural excitation. Even though mode shapes are more sensitive to damage compared to frequency shifts, the use of mode shapes requires a lot of sensors installed so as to reliably detect mode shape changes for early damage detection [2]. The design of our developed structural health monitoring (SHM) system is based on three functional modules that track changes in the global dynamic behaviour of both the turbine tower and blade elements. A key feature of the approach is the need for a minimal number of strain gages and accelerometers necessary to record the structure’s condition. Module 1 analyzes the proportionality of maximum stress and maximum velocity; already small changes in component stiffness can be detected. Afterwards, module 3 is activated for localization and quantization of the damage. The approach of module 3 is based on a numerical model which solves a multi-parameter eigenvalue problem. As a prerequisite, highly resolved eigenfrequencies and a parameterization of a validated structural model are required. Both are provided for the undamaged structure by module 2",
"title": ""
},
{
"docid": "c20733b414a1b39122ef54d161885d81",
"text": "This paper discusses the role of clusters and focal firms in the economic performance of small firms in Italy. Using the example of the packaging industry of northern Italy, it shows how clusters of small firms have emerged around a few focal or leading companies. These companies have helped the clusters grow and diversify through technological and managerial spillover effects, through the provision of purchase orders, and sometimes through financial links. The role of common local training institutes, whose graduates often start up small firms within the local cluster, is also discussed.",
"title": ""
},
{
"docid": "866f1b980b286f6ed3ace9caf0dc415a",
"text": "In this letter, we propose a road structure refined convolutional neural network (RSRCNN) approach for road extraction in aerial images. In order to obtain structured output of road extraction, both deconvolutional and fusion layers are designed in the architecture of RSRCNN. For training RSRCNN, a new loss function is proposed to incorporate the geometric information of road structure in cross-entropy loss, thus called road-structure-based loss function. Experimental results demonstrate that the trained RSRCNN model is able to advance the state-of-the-art road extraction for aerial images, in terms of precision, recall, F-score, and accuracy.",
"title": ""
},
{
"docid": "d86aa00419ad3773c1f3f27e076c2ba6",
"text": "Image captioning with a natural language has been an emerging trend. However, the social image, associated with a set of user-contributed tags, has been rarely investigated for a similar task. The user-contributed tags, which could reflect the user attention, have been neglected in conventional image captioning. Most existing image captioning models cannot be applied directly to social image captioning. In this work, a dual attention model is proposed for social image captioning by combining the visual attention and user attention simultaneously.Visual attention is used to compress a large mount of salient visual information, while user attention is applied to adjust the description of the social images with user-contributed tags. Experiments conducted on the Microsoft (MS) COCO dataset demonstrate the superiority of the proposed method of dual attention.",
"title": ""
},
{
"docid": "83b5da6ab8ab9a906717fda7aa66dccb",
"text": "Image quality assessment (IQA) tries to estimate human perception based image visual quality in an objective manner. Existing approaches target this problem with or without reference images. For no-reference image quality assessment, there is no given reference image or any knowledge of the distortion type of the image. Previous approaches measure the image quality from signal level rather than semantic analysis. They typically depend on various features to represent local characteristic of an image. In this paper we propose a new no-reference (NR) image quality assessment (IQA) framework based on semantic obviousness. We discover that semantic-level factors affect human perception of image quality. With such observation, we explore semantic obviousness as a metric to perceive objects of an image. We propose to extract two types of features, one to measure the semantic obviousness of the image and the other to discover local characteristic. Then the two kinds of features are combined for image quality estimation. The principles proposed in our approach can also be incorporated with many existing IQA algorithms to boost their performance. We evaluate our approach on the LIVE dataset. Our approach is demonstrated to be superior to the existing NR-IQA algorithms and comparable to the state-of-the-art full-reference IQA (FR-IQA) methods. Cross-dataset experiments show the generalization ability of our approach.",
"title": ""
},
{
"docid": "c2fd86b36364ac9c40e873176443c4c8",
"text": "In a public service announcement on 17 March 2016, the Federal Bureau of Investigation jointly with the U.S. Department of Transportation and the National Highway Traffic Safety Administration (NHTSA) released a warning regarding the increasing vulnerability of motor vehicles to remote exploits [18]. Engine shutdowns, disabled brakes, and locked doors are a few examples of possible vehicle cybersecurity attacks. Modern cars grow into a new target for cyberattacks as they become increasingly connected. While driving on the road, sharks (i.e., hackers) need only to be within communication range of a vehicle to attack it. However, in some cases, they can hack into it while they are miles away. In this article, we aim to illuminate the latest vehicle cybersecurity threats including malware attacks, on-board diagnostic (OBD) vulnerabilities, and automobile apps threats. We illustrate the in-vehicle network architecture and demonstrate the latest defending mechanisms designed to mitigate such threats.",
"title": ""
},
{
"docid": "395d21b52ff74935fffcc1924aec5c0f",
"text": "The desire to take medicines is one feature which distinguishes man, the animal, from his fellow creatures (1). Thus did William Osler express skepticism about remedies available in the early 20th century and an avuncular indulgence toward patients who wanted them. His comment expresses the attitude of many physicians today toward consumers of herbal medicines, and indeed may be timeless: Medicinal herbs were found in the personal effects of an ice man, whose body was frozen in the Swiss Alps for more than 5000 years (2). Since these herbs appear to have treated the parasites found in his intestine (2), the desire to take medicines may signify a timeless quest for cures that flowers today in the form of widely acclaimed new drugs. The effectiveness of a modern drug is ultimately judged by the results of clinical trials. Ordinarily, such trials are designed to test the assumption that a drug's pharmacologic activity will favorably affect a disease process, which in turn is viewed in terms of a physiologic model. Clinical trials yield convincing results, however, only if they are conducted in accordance with principles that, for example, ensure elimination of bias and reduce the possibility that results occurred merely by chance. Trials must also use drug preparations with consistent pharmacologic properties. These principles apply to all drugs, whether they originate as traditional remedies or in precepts of molecular biology. Indeed, such principles have successfully guided digitalis from medicinal plant to modern drug; we might ask, therefore, how these principles apply to the evaluation of today's herbal medicines. Digitalis: From Folk Remedy to Modern Drug Withering, who introduced foxglove to the medical profession in 1785 (3), took the first steps in transforming digitalis from a folk remedy to a modern drug when he simplified a family receipt for dropsy that contained more than 20 substances (3) by assuming that foxglove was the active ingredient. Careful clinical observations then enabled him to recognize the plant's slim margin of safety and thus the importance of dose: just enough foxglove to cause diuresis, but not enough to cause vomiting or very slow pulse. Bioassays and Chemical Standardization By the early 20th century, it was understood that activities of medicines derived from foxglove were influenced by such factors as the time when the leaves are gathered, and climatic and soil conditions [as well as] the manner in which the drug is prepared for the market (4). Clearly, plants have ingredients with therapeutic activity, but their preparations must be standardized to yield consistent products, which therefore can be given in doses that are maximally safe and effective. In 1906, the pharmacopeia contained a daunting number of digitalis preparationsfor example, Digitin, Extractum Digitalis, and Infusum Digitalis (5)whose potency had never been investigated. When these preparations were investigated by using a new bioassay based on the fact that digitalis causes asystole in the frog, the results were surprising: The potencies of 16 commercial digital preparations varied over a fourfold range (4). Fortunately, the bioassay also provided a way to control this problem, and the frog bioassay was soon officially adopted by the United States Pharmacopeia to standardize digitalis preparations. 
This bioassay, which indicated the importance of laboratory studies for the emerging science of pharmacology, provided the means to standardize the potency of a chemically complex herbal medicine, even when its active ingredients were uncertain. Soon the quest for even better methods of standardizing digitalis yielded several dozen bioassays in more than six different animal species (6). Thus, the cat heart assay replaced the frog heart assay, which in turn was replaced by the pigeon assay. The ultimate bioassay, however, was done in humans; it was based on the digitalis-induced changes in a patient's electrocardiogram (7). Although digoxin, now the preferred form of digitalis, can be standardized chemically, a bioassay of sorts is still required to establish its bioavailability (8) and, hence, the pharmaceutical standardization needed to carry out the clinical trials that shape our current perspective on the drug (9). Herbal Remedies in the United States Today Challenges in Standardizing Herbal Medicines Unfortunately, standardization methods such as those described for digitalis are not suitable for many herbs. Bioassays must be based on biological models, which are not available for the health claims made for many popular herbs, and chemical analysis has limited value when the ingredients responsible for a plant's activity have not been identified. In addition, if the active ingredient of an herb were known, it would remain unclear whether the crude herb would be preferable to its purified active principle. In the absence of definitive information in this regard, such traditional herbal preparations as digitalis leaf and opium have been replaced by such drugs as digoxin and codeine, respectively. How can an herb be standardized if its active ingredients are not known and there is no suitable bioassay? EGb 761, a patented extract of Ginkgo biloba, is a commendable attempt to solve this problem and to achieve a consistent formulation of ginkgo. Thus, EGb 761 sets feasible standards for how and where ginkgo is grown and harvested, how the leaves are extracted, and the target values for several chemical constituents of the medicinal product (10). EGb 761, which aims for chemical consistency and, presumably, therapeutic consistency, was used in three of four studies that, on the basis of a meta-analysis, concluded that ginkgo conferred a small but significant benefit in patients with Alzheimer disease (11). In the absence of evidence to the contrary, those who hope to replicate these trial results would justifiably select this ginkgo product in preference to others with less well-specified standards of botanical and chemical consistency. Recent studies with St. John's wort, however, remind us of the potential pitfalls of standardizing a medicinal herb to constituents that may not be responsible for therapeutic activity. For years, St. John's wort, which meta-analysis finds superior to placebo for treatment of mild to moderate depression (12), has been standardized by its content of hypericin. Hypericin, however, has never been confirmed as the herb's active ingredient and may be no more than a characteristic ingredient of the plant, useful for botanical verification but not necessarily for therapeutic standardization. Another constituent of St. John's wort, hyperforin, now appears to be a more potent antidepressant than hypericin. Thus, the potency of various St. 
John's wort extracts for inhibiting the neuronal uptake of serotonin, a characteristic of conventional antidepressants such as fluoxetine, increases with increasing hyperforin content. Studies in animal models of depression (13) and patients with mild to moderate depression (14) suggest that antidepressant activity is related to content of hyperforin, not hypericin. For example, a three-arm clinical trial of 147 patients that compared two St. John's wort extracts of equal hypericin content with placebo found antidepressant activity to be higher for the extract that had a 10-fold higher hyperforin content (14). Although this trial was relatively small and therefore of limited statistical significance, its results suggest that antidepressant activity demonstrated in a meta-analysis of past studies (12) may have resulted from the fortuitous inclusion of hyperforin in many of the St. John's wort formulations included. If the active ingredient of St. John's wort products used in these studies was not optimized, the studies as a group would undoubtedly underestimate the potential antidepressant activity of St. John's wort. Additional evidence suggests that the consumer is not receiving the full possible benefit of St. John's wort. On a recent visit to a local food store, I found St. John's wort preparations that were reminiscent of digitalis formulations at the beginning of the 20th century. Some were said to contain 0.3% (300 mg) hypericin, another was a liquid formulation containing 180 mg of hypericins, and a third contained 0.3% (450 mg) hypericin. The highest content of 0.3% hypericin was 530 mg. Yet another product carried the label St. John's wort, but its contents were not quantified. Hyperforin content was listed only for some products, whereas other products indicated that St. John's wort had been combined with such ingredients as kava, Echinacea, licorice root, or coconut. The parts of the plant used in the preparations were described as leaf, flowers and stem, aerial parts, or simply fl ers and leaf. Although labels on some St. John's wort products indicated an awareness of recent studies on hyperforin, other labels confirmed that there is no barrier to selling herbal preparations of doubtful scientific rationale and uncertain potency. Clinical Trials of Herbs Randomized clinical trials have become the gold standard for evaluating the efficacy of a drug and have assumed a similar status for evaluating an herbal remedy. Although the methodology of herbal trials is improving, some studies cited in herbal compendia have shortcomings. One problem is that results of herbal trials often do not reach statistical significance because they enroll fewer participants than trials of a conventional drug, and the role of chance may be overlooked in interpreting such trials. For example, the results of clinical studies were recently examined to determine whether parthenolide, a characteristic component of feverfew, was necessary for feverfew's apparent role in prevention of migraine. It was reasoned (15) that parthenolide could not be the sole active ingredient of feverfew because the parthenolide content of the feverfew preparation used in one negative trial (16) ",
"title": ""
},
{
"docid": "3770720cff3a36596df097835f4f10a9",
"text": "As mobile computing technologies have been more powerful and inclusive in people’s daily life, the issue of mobile assisted language learning (MALL) has also been widely explored in CALL research. Many researches on MALL consider the emerging mobile technologies have considerable potentials for the effective language learning. This review study focuses on the investigation of newly emerging mobile technologies and their pedagogical applications for language teachers and learners. Recent research or review on mobile assisted language learning tends to focus on more detailed applications of newly emerging mobile technology, rather than has given a broader point focusing on types of mobile device itself. In this paper, I thus reviewed recent research and conference papers for the last decade, which utilized newly emerging and integrated mobile technology. Its pedagogical benefits and challenges are discussed.",
"title": ""
},
{
"docid": "c48d0c94d3e97661cc2c944cc4b61813",
"text": "CIPO is the very “tip of the iceberg” of functional gastrointestinal disorders, being a rare and frequently misdiagnosed condition characterized by an overall poor outcome. Diagnosis should be based on clinical features, natural history and radiologic findings. There is no cure for CIPO and management strategies include a wide array of nutritional, pharmacologic, and surgical options which are directed to minimize malnutrition, promote gut motility and reduce complications of stasis (ie, bacterial overgrowth). Pain may become so severe to necessitate major analgesic drugs. Underlying causes of secondary CIPO should be thoroughly investigated and, if detected, treated accordingly. Surgery should be indicated only in a highly selected, well characterized subset of patients, while isolated intestinal or multivisceral transplantation is a rescue therapy only in those patients with intestinal failure unsuitable for or unable to continue with TPN/HPN. Future perspectives in CIPO will be directed toward an accurate genomic/proteomic phenotying of these rare, challenging patients. Unveiling causative mechanisms of neuro-ICC-muscular abnormalities will pave the way for targeted therapeutic options for patients with CIPO.",
"title": ""
},
{
"docid": "01cd8355e0604868659e1a312d385ebe",
"text": "In the past years, knowledge graphs have proven to be beneficial for recommender systems, efficiently addressing paramount issues such as new items and data sparsity. At the same time, several works have recently tackled the problem of knowledge graph completion through machine learning algorithms able to learn knowledge graph embeddings. In this paper, we show that the item recommendation problem can be seen as a specific case of knowledge graph completion problem, where the “feedback” property, which connects users to items that they like, has to be predicted. We empirically compare a set of state-of-the-art knowledge graph embeddings algorithms on the task of item recommendation on the Movielens 1M dataset. The results show that knowledge graph embeddings models outperform traditional collaborative filtering baselines and that TransH obtains the best performance.",
"title": ""
},
{
"docid": "974d7b697942a8872b01d7b5d2302750",
"text": "Purpose – This study provides insights into corporate achievements in supply chain management (SCM) and logistics management and details how they might help disaster agencies. The authors highlight and identify current practices, particularities, and challenges in disaster relief supply chains. Design/methodology/approach – Both SCM and logistics management literature and examples drawn from real-life cases inform the development of the theoretical model. Findings – The theoretical, dual-cycle model that focuses on the key missions of disaster relief agencies: first, prevention and planning and, second, response and recovery. Three major contributions are offered: (1) a concise representation of current practices and particularities of disaster relief supply chains compared with commercial SCM; (2) challenges and barriers to the development of more efficient SCM practices, classified into learning, strategizing, and coordinating and measurement issues; and (3) a simple, functional model for understanding how collaborations between corporations and disaster relief agencies might help relief agencies meet SCM challenges. Research limitations/implications – The study does not address culture clash–related considerations. Rather than representing the entire scope of real-life situations and practices, the analysis relies on key assumptions to help conceptualize collaborative paths.",
"title": ""
},
{
"docid": "e0f7c82754694084c6d05a2d37be3048",
"text": "Introducing variability while maintaining coherence is a core task in learning to generate utterances in conversation. Standard neural encoder-decoder models and their extensions using conditional variational autoencoder often result in either trivial or digressive responses. To overcome this, we explore a novel approach that injects variability into neural encoder-decoder via the use of external memory as a mixture model, namely Variational Memory Encoder-Decoder (VMED). By associating each memory read with a mode in the latent mixture distribution at each timestep, our model can capture the variability observed in sequential data such as natural conversations. We empirically compare the proposed model against other recent approaches on various conversational datasets. The results show that VMED consistently achieves significant improvement over others in both metricbased and qualitative evaluations.",
"title": ""
},
{
"docid": "6549a00df9fadd56b611ee9210102fe8",
"text": "Ontology editors are software tools that allow the creation and maintenance of ontologies through a graphical user interface. As the Semantic Web effort grows, a larger community of users for this kind of tools is expected. New users include people not specifically skilled in the use of ontology formalisms. In consequence, the usability of ontology editors can be viewed as a key adoption precondition for Semantic Web technologies. In this paper, the usability evaluation of several representative ontology editors is described. This evaluation is carried out by combining a heuristic pre-assessment and a subsequent user-testing phase. The target population comprises people with no specific ontology-creation skills that have a general knowledge about domain modelling. The problems found point out that, for this kind of users, current editors are adequate for the creation and maintenance of simple ontologies, but also that there is room for improvement, especially in browsing mechanisms, help systems and visualization metaphors.",
"title": ""
}
] |
scidocsrr
|
edd789fe06013fdf37a87659ca7d5b82
|
Context-Based Few-Shot Word Representation Learning
|
[
{
"docid": "49387b129347f7255bf77ad9cc726275",
"text": "Words in natural language follow a Zipfian distribution whereby some words are frequent but most are rare. Learning representations for words in the “long tail” of this distribution requires enormous amounts of data. Representations of rare words trained directly on end-tasks are usually poor, requiring us to pre-train embeddings on external data, or treat all rare words as out-of-vocabulary words with a unique representation. We provide a method for predicting embeddings of rare words on the fly from small amounts of auxiliary data with a network trained against the end task. We show that this improves results against baselines where embeddings are trained on the end task in a reading comprehension task, a recognizing textual entailment task, and in language modelling.",
"title": ""
},
{
"docid": "09df260d26638f84ec3bd309786a8080",
"text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize. com/projects/wordreprs/",
"title": ""
}
] |
[
{
"docid": "2fa6f761f22e0484a84f83e5772bef40",
"text": "We consider the problem of planning smooth paths for a vehicle in a region bounded by polygonal chains. The paths are represented as B-spline functions. A path is found by solving an optimization problem using a cost function designed to care for both the smoothness of the path and the safety of the vehicle. Smoothness is defined as small magnitude of the derivative of curvature and safety is defined as the degree of centering of the path between the polygonal chains. The polygonal chains are preprocessed in order to remove excess parts and introduce safety margins for the vehicle. The method has been implemented for use with a standard solver and tests have been made on application data provided by the Swedish mining company LKAB.",
"title": ""
},
{
"docid": "8c308305b4a04934126c4746c8333b52",
"text": "The authors report on the development of the Cancer Tissue Information Extraction System (caTIES)--an application that supports collaborative tissue banking and text mining by leveraging existing natural language processing methods and algorithms, grid communication and security frameworks, and query visualization methods. The system fills an important need for text-derived clinical data in translational research such as tissue-banking and clinical trials. The design of caTIES addresses three critical issues for informatics support of translational research: (1) federation of research data sources derived from clinical systems; (2) expressive graphical interfaces for concept-based text mining; and (3) regulatory and security model for supporting multi-center collaborative research. Implementation of the system at several Cancer Centers across the country is creating a potential network of caTIES repositories that could provide millions of de-identified clinical reports to users. The system provides an end-to-end application of medical natural language processing to support multi-institutional translational research programs.",
"title": ""
},
{
"docid": "9c30ef5826b413bab262b7a0884eb119",
"text": "In this survey paper, we review recent uses of convolution neural networks (CNNs) to solve inverse problems in imaging. It has recently become feasible to train deep CNNs on large databases of images, and they have shown outstanding performance on object classification and segmentation tasks. Motivated by these successes, researchers have begun to apply CNNs to the resolution of inverse problems such as denoising, deconvolution, super-resolution, and medical image reconstruction, and they have started to report improvements over state-of-the-art methods, including sparsity-based techniques such as compressed sensing. Here, we review the recent experimental work in these areas, with a focus on the critical design decisions: Where does the training data come from? What is the architecture of the CNN? and How is the learning problem formulated and solved? We also bring together a few key theoretical papers that offer perspective on why CNNs are appropriate for inverse problems and point to some next steps in the field.",
"title": ""
},
{
"docid": "15e2fc773fb558e55d617f4f9ac22f69",
"text": "Recent advances in ASR and spoken language processing have led to improved systems for automated assessment for spoken language. However, it is still challenging for automated scoring systems to achieve high performance in terms of the agreement with human experts when applied to non-native children’s spontaneous speech. The subpar performance is mainly caused by the relatively low recognition rate on non-native children’s speech. In this paper, we investigate different neural network architectures for improving non-native children’s speech recognition and the impact of the features extracted from the corresponding ASR output on the automated assessment of speaking proficiency. Experimental results show that bidirectional LSTM-RNN can outperform feed-forward DNN in ASR, with an overall relative WER reduction of 13.4%. The improved speech recognition can then boost the language proficiency assessment performance. Correlations between the rounded automated scores and expert scores range from 0.66 to 0.70 for the three speaking tasks studied, similar to the humanhuman agreement levels for these tasks.",
"title": ""
},
{
"docid": "3ec1da9b86b3338b1ad4890add51a20b",
"text": "In this paper, we present the dynamic modeling and controller design of a tendon-driven system that is antagonistically driven by elastic tendons. In the dynamic modeling, the tendons are approximated as linear axial springs, neglecting their masses. An overall equation for motion is established by following the Euler–Lagrange formalism of dynamics, combined with rigid-body rotation and vibration. The controller is designed using the singular perturbation approach, which leads to a composite controller (i.e., consisting of a fast sub-controller and a slow sub-controller). An appropriate internal force is superposed to the control action to ensure the tendons to be in tension for all configurations. Experimental results are provided to demonstrate the validity and effectiveness of the proposed controller for the antagonistic tendon-driven system.",
"title": ""
},
{
"docid": "76f3c76572e46131354707b2da7f55b6",
"text": "Purpose – Competitive environment and numerous stakeholders’ pressures are forcing hotels to comply their operations with the principles of sustainable development, especially in the field of environmental responsibility. Therefore, more and more of them incorporate environmental objectives in their business policies and strategies. The fulfilment of the environmental objectives requires the hotel to develop and implement environmentally sustainable business practices, as well as to implement reliable tools to assess environmental impact, of which environmental accounting and reporting are particularly emphasized. The purpose of this paper is to determine the development of hotel environmental accounting practices, based on previous research and literature review. Approach – This paper provides an overview of current research in the field of hotel environmental accounting and reporting, based on established knowledge about hotel environmental responsibility. The research has been done according to the review of articles in academic journals. Conclusions about the requirements for achieving hotel long-term sustainability have been drawn. Findings – Previous studies have shown that environmental accounting and reporting practice in hotel business is weaker when compared to other activities, and that most hotels still insufficiently use the abovementioned instruments of environmental management to reduce their environmental footprint and to improve their relationship with stakeholders. The paper draws conclusions about possible perspectives that environmental accounting has in ensuring hotel sustainability. Originality – The study provides insights into the problem of environmental responsibility of hotels, from the standpoint of environmental accounting and reporting, as tools for assessing hotel impact on the environment and for improving its environmentally sustainable business practice. The ideas for improving hotel environmental efficiency are shaped based on previous findings.",
"title": ""
},
{
"docid": "ba203abd0bd55fc9d06fe979a604d741",
"text": "Graph Convolutional Networks (GCNs) have become a crucial tool on learning representations of graph vertices. The main challenge of adapting GCNs on largescale graphs is the scalability issue that it incurs heavy cost both in computation and memory due to the uncontrollable neighborhood expansion across layers. In this paper, we accelerate the training of GCNs through developing an adaptive layer-wise sampling method. By constructing the network layer by layer in a top-down passway, we sample the lower layer conditioned on the top one, where the sampled neighborhoods are shared by different parent nodes and the over expansion is avoided owing to the fixed-size sampling. More importantly, the proposed sampler is adaptive and applicable for explicit variance reduction, which in turn enhances the training of our method. Furthermore, we propose a novel and economical approach to promote the message passing over distant nodes by applying skip connections. Intensive experiments on several benchmarks verify the effectiveness of our method regarding the classification accuracy while enjoying faster convergence speed.",
"title": ""
},
{
"docid": "0441fb016923cd0b7676d3219951c230",
"text": "Globally modeling and reasoning over relations between regions can be beneficial for many computer vision tasks on both images and videos. Convolutional Neural Networks (CNNs) excel at modeling local relations by convolution operations, but they are typically inefficient at capturing global relations between distant regions and require stacking multiple convolution layers. In this work, we propose a new approach for reasoning globally in which a set of features are globally aggregated over the coordinate space and then projected to an interaction space where relational reasoning can be efficiently computed. After reasoning, relation-aware features are distributed back to the original coordinate space for down-stream tasks. We further present a highly efficient instantiation of the proposed approach and introduce the Global Reasoning unit (GloRe unit) that implements the coordinate-interaction space mapping by weighted global pooling and weighted broadcasting, and the relation reasoning via graph convolution on a small graph in interaction space. The proposed GloRe unit is lightweight, end-to-end trainable and can be easily plugged into existing CNNs for a wide range of tasks. Extensive experiments show our GloRe unit can consistently boost the performance of state-of-the-art backbone architectures, including ResNet [15, 16], ResNeXt [33], SE-Net [18] and DPN [9], for both 2D and 3D CNNs, on image classification, semantic segmentation and video action recognition task.",
"title": ""
},
{
"docid": "3bdd6168db10b8b195ce88ae9c4a75f9",
"text": "Nowadays Intrusion Detection System (IDS) which is increasingly a key element of system security is used to identify the malicious activities in a computer system or network. There are different approaches being employed in intrusion detection systems, but unluckily each of the technique so far is not entirely ideal. The prediction process may produce false alarms in many anomaly based intrusion detection systems. With the concept of fuzzy logic, the false alarm rate in establishing intrusive activities can be reduced. A set of efficient fuzzy rules can be used to define the normal and abnormal behaviors in a computer network. Therefore some strategy is needed for best promising security to monitor the anomalous behavior in computer network. In this paper I present a few research papers regarding the foundations of intrusion detection systems, the methodologies and good fuzzy classifiers using genetic algorithm which are the focus of current development efforts and the solution of the problem of Intrusion Detection System to offer a realworld view of intrusion detection. Ultimately, a discussion of the upcoming technologies and various methodologies which promise to improve the capability of computer systems to detect intrusions is offered.",
"title": ""
},
{
"docid": "904454a191da497071ee9b835561c6e6",
"text": "We introduce a stochastic discrete automaton model to simulate freeway traffic. Monte-Carlo simulations of the model show a transition from laminar traffic flow to start-stopwaves with increasing vehicle density, as is observed in real freeway traffic. For special cases analytical results can be obtained.",
"title": ""
},
{
"docid": "5ed525a96ab5663ca8df698e275620f2",
"text": "Most video-based action recognition approaches choose to extract features from the whole video to recognize actions. The cluttered background and non-action motions limit the performances of these methods, since they lack the explicit modeling of human body movements. With recent advances of human pose estimation, this work presents a novel method to recognize human action as the evolution of pose estimation maps. Instead of relying on the inaccurate human poses estimated from videos, we observe that pose estimation maps, the byproduct of pose estimation, preserve richer cues of human body to benefit action recognition. Specifically, the evolution of pose estimation maps can be decomposed as an evolution of heatmaps, e.g., probabilistic maps, and an evolution of estimated 2D human poses, which denote the changes of body shape and body pose, respectively. Considering the sparse property of heatmap, we develop spatial rank pooling to aggregate the evolution of heatmaps as a body shape evolution image. As body shape evolution image does not differentiate body parts, we design body guided sampling to aggregate the evolution of poses as a body pose evolution image. The complementary properties between both types of images are explored by deep convolutional neural networks to predict action label. Experiments on NTU RGB+D, UTD-MHAD and PennAction datasets verify the effectiveness of our method, which outperforms most state-of-the-art methods.",
"title": ""
},
{
"docid": "1d8f7705ba0dd969ed6de9e7e6a9a419",
"text": "A Mecanum-wheeled robot benefits from great omni-direction maneuverability. However it suffers from random slippage and high-speed vibration, which creates electric power safety, uncertain position errors and energy waste problems for heavy-duty tasks. A lack of Mecanum research on heavy-duty autonomous navigation demands a robot platform to conduct experiments in the future. This paper introduces AuckBot, a heavy-duty omni-directional Mecanum robot platform developed at the University of Auckland, including its hardware overview, the control system architecture and the simulation design. In particular the control system, synergistically combining the Beckhoff system as the Controller-PC to serve low-level motion execution and ROS as the Navigation-PC to accomplish highlevel intelligent navigation tasks, is developed. In addition, a computer virtual simulation based on ISG-virtuos for virtual AuckBot has been validated. The present status and future work of AuckBot are described at the end.",
"title": ""
},
{
"docid": "9c25a2e343e9e259a9881fd13983c150",
"text": "Advances in cognitive, affective, and social neuroscience raise a host of new questions concerning the ways in which neuroscience can and should be used. These advances also challenge our intuitions about the nature of humans as moral and spiritual beings. Neuroethics is the new field that grapples with these issues. The present article surveys a number of applications of neuroscience to such diverse arenas as marketing, criminal justice, the military, and worker productivity. The ethical, legal, and societal effects of these applications are discussed. Less practical, but perhaps ultimately more consequential, is the impact of neuroscience on our worldview and our understanding of the human person.",
"title": ""
},
{
"docid": "d4cdea26217e90002a3c4522124872a2",
"text": "Recently, several methods for single image super-resolution(SISR) based on deep neural networks have obtained high performance with regard to reconstruction accuracy and computational performance. This paper details the methodology and results of the New Trends in Image Restoration and Enhancement (NTIRE) challenge. The task of this challenge is to restore rich details (high frequencies) in a high resolution image for a single low resolution input image based on a set of prior examples with low and corresponding high resolution images. The challenge has two tracks. We present a super-resolution (SR) method, which uses three losses assigned with different weights to be regarded as optimization target. Meanwhile, the residual blocks are also used for obtaining significant improvement in the evaluation. The final model consists of 9 weight layers with four residual blocks and reconstructs the low resolution image with three color channels simultaneously, which shows better performance on these two tracks and benchmark datasets.",
"title": ""
},
{
"docid": "d64179da43db5f5bd15ff7e31e38d391",
"text": "Real-world graph applications are typically domain-specific and model complex business processes in the property graph data model. To implement a domain-specific graph algorithm in the context of such a graph application, simply providing a set of built-in graph algorithms is usually not sufficient nor does it allow algorithm customization to the user's needs. To cope with these issues, graph database vendors provide---in addition to their declarative graph query languages---procedural interfaces to write user-defined graph algorithms.\n In this paper, we introduce GraphScript, a domain-specific graph query language tailored to serve advanced graph analysis tasks and the specification of complex graph algorithms. We describe the major language design of GraphScript, discuss graph-specific optimizations, and describe the integration into an enterprise data platform.",
"title": ""
},
{
"docid": "208f426b5e60fb73b5f49e86f942e98f",
"text": "Using the contemporary view of computing exemplified by recent models and results from non-uniform complexity theory, we investigate the computational power of cognitive systems. We show that in accordance with the so-called extended Turing machine paradigm such systems can be modelled as non-uniform evolving interactive systems whose computational power surpasses that of the classical Turing machines. Our results show that there is an infinite hierarchy of cognitive systems. Within this hierarchy, there are systems achieving and surpassing the human intelligence level. Any intelligence level surpassing the human intelligence is called the superintelligence level. We will argue that, formally, from a computation viewpoint the human-level intelligence is upper-bounded by the $$\\Upsigma_2$$ class of the Arithmetical Hierarchy. In this class, there are problems whose complexity grows faster than any computable function and, therefore, not even exponential growth of computational power can help in solving such problems, or reach the level of superintelligence.",
"title": ""
},
{
"docid": "87e3727df4e8d7f275695da161b0d924",
"text": "Self-determination theory (SDT; Deci & Ryan, 2000) proposes that intrinsic, relative to extrinsic, goal content is a critical predictor of the quality of an individual's behavior and psychological well-being. Through three studies, we developed and psychometrically tested a measure of intrinsic and extrinsic goal content in the exercise context: the Goal Content for Exercise Questionnaire (GCEQ). In adults, exploratory (N = 354; Study 1) and confirmatory factor analyses (N = 312; Study 2) supported a 20-item solution consisting of 5 lower order factors (i.e., social affiliation, health management, skill development, image and social recognition) that could be subsumed within a 2-factor higher order structure (i.e., intrinsic and extrinsic). Evidence for external validity, temporal stability, gender invariance, and internal consistency of the GCEQ was found. An independent sample (N = 475; Study 3) provided further support for the lower order structure of the GCEQ and some support for the higher order structure. The GCEQ was supported as a measure of exercise-based goal content, which may help understand how intrinsic and extrinsic goals can motivate exercise behavior.",
"title": ""
},
{
"docid": "2ecd0bf132b3b77dc1625ef8d09c925b",
"text": "This paper presents an efficient algorithm to compute time-to-x (TTX) criticality measures (e.g. time-to-collision, time-to-brake, time-to-steer). Such measures can be used to trigger warnings and emergency maneuvers in driver assistance systems. Our numerical scheme finds a discrete time approximation of TTX values in real time using a modified binary search algorithm. It computes TTX values with high accuracy by incorporating realistic vehicle dynamics and using realistic emergency maneuver models. It is capable of handling complex object behavior models (e.g. motion prediction based on DGPS maps). Unlike most other methods presented in the literature, our approach enables decisions in scenarios with multiple static and dynamic objects in the scene. The flexibility of our method is demonstrated on two exemplary applications: intersection assistance for left-turn-across-path scenarios and pedestrian protection by automatic steering.",
"title": ""
},
{
"docid": "e541ae262655b7f5affefb32ce9267ee",
"text": "Internet of Things (IoT) is a revolutionary technology for the modern society. IoT can connect every surrounding objects for various applications like security, medical fields, monitoring and other industrial applications. This paper considers the application of IoT in the field of medicine. IoT in E-medicine can take the advantage of emerging technologies to provide immediate treatment to the patient as well as monitors and keeps track of health record for healthy person. IoT then performs complex computations on these collected data and can provide health related advice. Though IoT can provide a cost effective medical services to any people of all age groups, there are several key issues that need to be addressed. System security, IoT interoperability, dynamic storage facility and unified access mechanisms are some of the many fundamental issues associated with IoT. This paper proposes a system level design solution for security and flexibility aspect of IoT. In this paper, the functional components are bound in security function group which ensures the management of privacy and secure operation of the system. The security function group comprises of components which offers secure communication using Ciphertext-Policy Attribute-Based Encryption (CP-ABE). Since CP-ABE are delegated to unconstrained devices with the assumption that these devices are trusted, the producer encrypts data using AES and the ABE scheme is protected through symmetric key solutions.",
"title": ""
},
{
"docid": "cb1645b5b37e99a1dac8c6af1d6b1027",
"text": "In recent years, the increasing propagation of hate speech on social media and the urgent need for effective countermeasures have drawn significant investment from governments, companies, and researchers. A large number of methods have been developed for automated hate speech detection online. This aims to classify textual content into non-hate or hate speech, in which case the method may also identify the targeting characteristics (i.e., types of hate, such as race, and religion) in the hate speech. However, we notice significant difference between the performance of the two (i.e., non-hate v.s. hate). In this work, we argue for a focus on the latter problem for practical reasons. We show that it is a much more challenging task, as our analysis of the language in the typical datasets shows that hate speech lacks unique, discriminative features and therefore is found in the ‘long tail’ in a dataset that is difficult to discover. We then propose Deep Neural Network structures serving as feature extractors that are particularly effective for capturing the semantics of hate speech. Our methods are evaluated on the largest collection of hate speech datasets based on Twitter, and are shown to be able to outperform the best performing method by up to 5 percentage points in macro-average F1, or 8 percentage points in the more challenging case of identifying hateful content.",
"title": ""
}
] |
scidocsrr
|
f3c97b10c3c5cf5a6276ecbfcdac621a
|
Security analysis and enhancements of 3GPP authentication and key agreement protocol
|
[
{
"docid": "b03d88449eaf4e393dc842340f6951ea",
"text": "Use of mobile personal computers in open networked environment is revolutionalising the way we use computers. Mobile networked computing is raising important information security and privacy issues. This paper is concerned with the design of authentication protocols for a mobile computing environment. The paper rst analyses the authenti-cation initiator protocols proposed by Beller,Chang and Yacobi (BCY) and the modiications considered by Carlsen and points out some weaknesses. The paper then suggests improvements to these protocols. The paper proposes secure end-to-end protocols between mobile users using both symmetric and public key based systems. These protocols enable mutual authentication and establish a shared secret key between mobile users. Furthermore, these protocols provide a certain degree of anonymity of the communicating users to be achieved visa -vis other system users.",
"title": ""
}
] |
[
{
"docid": "ab793dc03b8002a638a101abdccd1b38",
"text": "This paper describes a technique to obtain a time dilation or contraction of an audio signal. Different Computer Graphics applications can take advantage of this technique. In real-time networked VR applications, such as teleconference or games, audio might be transmited independently from the rest of the data, These different signals arrive asynchronously and need to be somehow resynchronized on the fly. In animation, it can help to automatically fit and merge pre-recorded sound samples to special timed events. It also makes it easier to accomplish special effects like lip-sync for dubbing or changing the voice of an animated character. Our technique tries to eliminate distortions by the replication of the original signal frequencies. Malvar wavelets are used to avoid clicking between segment transitions.",
"title": ""
},
{
"docid": "31461de346fb454f296495287600a74f",
"text": "The working hypothesis of the paper is that motor images are endowed with the same properties as those of the (corresponding) motor representations, and therefore have the same functional relationship to the imagined or represented movement and the same causal role in the generation of this movement. The fact that the timing of simulated movements follows the same constraints as that of actually executed movements is consistent with this hypothesis. Accordingly, many neural mechanisms are activated during motor imagery, as revealed by a sharp increase in tendinous reflexes in the limb imagined to move, and by vegetative changes which correlate with the level of mental effort. At the cortical level, a specific pattern of activation, that closely resembles that of action execution, is observed in areas devoted to motor control. This activation might be the substrate for the effects of mental training. A hierarchical model of the organization of action is proposed: this model implies a short-term memory storage of a 'copy' of the various representational steps. These memories are erased when an action corresponding to the represented goal takes place. By contrast, if the action is incompletely or not executed, the whole system remains activated, and the content of the representation is rehearsed. This mechanism would be the substrate for conscious access to this content during motor imagery and mental training.",
"title": ""
},
{
"docid": "c43b77b56a6e2cb16a6b85815449529d",
"text": "We propose a new method for clustering multivariate time series. A univariate time series can be represented by a fixed-length vector whose components are statistical features of the time series, capturing the global structure. These descriptive vectors, one for each component of the multivariate time series, are concatenated, before being clustered using a standard fast clustering algorithm such as k-means or hierarchical clustering. Such statistical feature extraction also serves as a dimension-reduction procedure for multivariate time series. We demonstrate the effectiveness and simplicity of our proposed method by clustering human motion sequences: dynamic and high-dimensional multivariate time series. The proposed method based on univariate time series structure and statistical metrics provides a novel, yet simple and flexible way to cluster multivariate time series data efficiently with promising accuracy. The success of our method on the case study suggests that clustering may be a valuable addition to the tools available for human motion pattern recognition research.",
"title": ""
},
{
"docid": "2f48b326aaa7b41a7ee347cedce344ed",
"text": "In this paper a new kind of quasi-quartic trigonometric polynomial base functions with two shape parameters λ and μ over the space Ω = span {1, sin t, cos t, sin2t, cos2t, sin3t, cos3t} is presented and the corresponding quasi-quartic trigonometric Bézier curves and surfaces are defined by the introduced base functions. Each curve segment is generated by five consecutive control points. The shape of the curve can be adjusted by altering the values of shape parameters while the control polygon is kept unchanged. These curves inherit most properties of the usual quartic Bézier curves in the polynomial space and they can be used as an efficient new model for geometric design in the fields of CAGD.",
"title": ""
},
{
"docid": "1212637c91d8c57299c922b6bde91ce8",
"text": "BACKGROUND\nIn the late 1980's, occupational science was introduced as a basic discipline that would provide a foundation for occupational therapy. As occupational science grows and develops, some question its relationship to occupational therapy and criticize the direction and extent of its growth and development.\n\n\nPURPOSE\nThis study was designed to describe and critically analyze the growth and development of occupational science and characterize how this has shaped its current status and relationship to occupational therapy.\n\n\nMETHOD\nUsing a mixed methods design, 54 occupational science documents published in the years 1990 and 2000 were critically analyzed to describe changes in the discipline between two points in time. Data describing a range of variables related to authorship, publication source, stated goals for occupational science and type of research were collected.\n\n\nRESULTS\nDescriptive statistics, themes and future directions are presented and discussed.\n\n\nPRACTICE IMPLICATIONS\nThrough the support of a discipline that is dedicated to the pursuit of a full understanding of occupation, occupational therapy will help to create a new and complex body of knowledge concerning occupation. However, occupational therapy must continue to make decisions about how knowledge produced within occupational science and other disciplines can be best used in practice.",
"title": ""
},
{
"docid": "95fbf262f9e673bd646ad7e02c5cbd53",
"text": "Department of Finance Stern School of Business and NBER, New York University, 44 W. 4th Street, New York, NY 10012; mkacperc@stern.nyu.edu; http://www.stern.nyu.edu/∼mkacperc. Department of Finance Stern School of Business, NBER, and CEPR, New York University, 44 W. 4th Street, New York, NY 10012; svnieuwe@stern.nyu.edu; http://www.stern.nyu.edu/∼svnieuwe. Department of Economics Stern School of Business, NBER, and CEPR, New York University, 44 W. 4th Street, New York, NY 10012; lveldkam@stern.nyu.edu; http://www.stern.nyu.edu/∼lveldkam. We thank John Campbell, Joseph Chen, Xavier Gabaix, Vincent Glode, Ralph Koijen, Jeremy Stein, Matthijs van Dijk, and seminar participants at NYU Stern (economics and finance), Harvard Business School, Chicago Booth, MIT Sloan, Yale SOM, Stanford University (economics and finance), University of California at Berkeley (economics and finance), UCLA economics, Duke economics, University of Toulouse, University of Vienna, Australian National University, University of Melbourne, University of New South Wales, University of Sydney, University of Technology Sydney, Erasmus University, University of Mannheim, University of Alberta, Concordia, Lugano, the Amsterdam Asset Pricing Retreat, the Society for Economic Dynamics meetings in Istanbul, CEPR Financial Markets conference in Gerzensee, UBC Summer Finance conference, and Econometric Society meetings in Atlanta for useful comments and suggestions. Finally, we thank the Q-group for their generous financial support.",
"title": ""
},
{
"docid": "b631b883e9d8a41f597d9b59d7e451fb",
"text": "The availability of highly accurate maps has become crucial due to the increasing importance of location-based mobile applications as well as autonomous vehicles. However, mapping roads is currently an expensive and humanintensive process. High-resolution aerial imagery provides a promising avenue to automatically infer a road network. Prior work uses convolutional neural networks (CNNs) to detect which pixels belong to a road (segmentation), and then uses complex post-processing heuristics to infer graph connectivity [4, 10]. We show that these segmentation methods have high error rates (poor precision) because noisy CNN outputs are difficult to correct. We propose a novel approach, Unthule, to construct highly accurate road maps from aerial images. In contrast to prior work, Unthule uses an incremental search process guided by a CNN-based decision function to derive the road network graph directly from the output of the CNN. We train the CNN to output the direction of roads traversing a supplied point in the aerial imagery, and then use this CNN to incrementally construct the graph. We compare our approach with a segmentation method on fifteen cities, and find that Unthule has a 45% lower error rate in identifying junctions across these cities.",
"title": ""
},
{
"docid": "f0c9db6cab187463162c8bba71ea011a",
"text": "Traditional Network-on-Chips (NoCs) employ simple arbitration strategies, such as round-robin or oldest-first, to decide which packets should be prioritized in the network. This is counter-intuitive since different packets can have very different effects on system performance due to, e.g., different level of memory-level parallelism (MLP) of applications. Certain packets may be performance-critical because they cause the processor to stall, whereas others may be delayed for a number of cycles with no effect on application-level performance as their latencies are hidden by other outstanding packets'latencies. In this paper, we define slack as a key measure that characterizes the relative importance of a packet. Specifically, the slack of a packet is the number of cycles the packet can be delayed in the network with no effect on execution time. This paper proposes new router prioritization policies that exploit the available slack of interfering packets in order to accelerate performance-critical packets and thus improve overall system performance. When two packets interfere with each other in a router, the packet with the lower slack value is prioritized. We describe mechanisms to estimate slack, prevent starvation, and combine slack-based prioritization with other recently proposed application-aware prioritization mechanisms.\n We evaluate slack-based prioritization policies on a 64-core CMP with an 8x8 mesh NoC using a suite of 35 diverse applications. For a representative set of case studies, our proposed policy increases average system throughput by 21.0% over the commonlyused round-robin policy. Averaged over 56 randomly-generated multiprogrammed workload mixes, the proposed policy improves system throughput by 10.3%, while also reducing application-level unfairness by 30.8%.",
"title": ""
},
{
"docid": "a89235677ad6ac3612983ca2bfeb584b",
"text": "INTRODUCTION\nNanomedicine is defined as the area using nanotechnology's concepts for the benefit of human beings, their health and well being. The field of nanotechnology opened new unsuspected fields of research a few years ago.\n\n\nAIM OF THE STUDY\nTo provide an overview of nanotechnology application areas that could affect care for psychiatric illnesses.\n\n\nMETHODS\nWe conducted a systematic review using the PRISMA criteria (preferred reporting items for systematic reviews and meta-analysis). Inclusion criteria were specified in advance: all studies describing the development of nanotechnology in psychiatry. The research paradigm was: \"(nanotechnology OR nanoparticles OR nanomedicine) AND (central nervous system)\" Articles were identified in three research bases, Medline (1966-present), Web of Science (1975-present) and Cochrane (all articles). The last search was carried out on April 2, 2012. Seventy-six items were included in this qualitative review.\n\n\nRESULTS\nThe main applications of nanotechnology in psychiatry are (i) pharmacology. There are two main difficulties in neuropharmacology. Drugs have to pass the blood brain barrier and then to be internalized by targeted cells. Nanoparticles could increase drugs' bioavailability and pharmacokinetics, especially improving safety and efficacy of psychotropic drugs. Liposomes, nanosomes, nanoparticle polymers, nanobubbles are some examples of this targeted drug delivery. Nanotechnologies could also add new pharmacological properties, like nanohells and dendrimers; (ii) living analysis. Nanotechnology provides technical assistance to in vivo imaging or metabolome analysis; (iii) central nervous system modeling. Research teams have modelized inorganic synapses and mimicked synaptic behavior, essential for further creation of artificial neural systems. Some nanoparticle assemblies present the same small world and free-scale network architecture as cortical neural networks. Nanotechnologies and quantum physics could be used to create models of artificial intelligence and mental illnesses.\n\n\nDISCUSSION\nEven if nanotechnologies are promising, their safety is still tricky and this must be kept in mind.\n\n\nCONCLUSION\nWe are not about to see a concrete application of nanomedicine in daily psychiatric practice. However, it seems essential that psychiatrists do not forsake this area of research the perspectives of which could be decisive in the field of mental illness.",
"title": ""
},
{
"docid": "82ef80d6257c5787dcf9201183735497",
"text": "Big data is becoming a research focus in intelligent transportation systems (ITS), which can be seen in many projects around the world. Intelligent transportation systems will produce a large amount of data. The produced big data will have profound impacts on the design and application of intelligent transportation systems, which makes ITS safer, more efficient, and profitable. Studying big data analytics in ITS is a flourishing field. This paper first reviews the history and characteristics of big data and intelligent transportation systems. The framework of conducting big data analytics in ITS is discussed next, where the data source and collection methods, data analytics methods and platforms, and big data analytics application categories are summarized. Several case studies of big data analytics applications in intelligent transportation systems, including road traffic accidents analysis, road traffic flow prediction, public transportation service plan, personal travel route plan, rail transportation management and control, and assets maintenance are introduced. Finally, this paper discusses some open challenges of using big data analytics in ITS.",
"title": ""
},
{
"docid": "4513872c2240390dca8f4b704e606157",
"text": "We apply game theory to a vehicular traffic model to study the effect of driver strategies on traffic flow. The resulting model inherits the realistic dynamics achieved by a two-lane traffic model and aims to incorporate phenomena caused by driver-driver interactions. To achieve this goal, a game-theoretic description of driver interaction was developed. This game-theoretic formalization allows one to model different lane-changing behaviors and to keep track of mobility performance. We simulate the evolution of cooperation, traffic flow, and mobility performance for different modeled behaviors. The analysis of these results indicates a mobility optimization process achieved by drivers' interactions.",
"title": ""
},
{
"docid": "bf7a683ab9dde3e3d2cacf2a99828d4a",
"text": "Computing is transitioning from single-user devices to the Internet of Things (IoT), in which multiple users with complex social relationships interact with a single device. Currently deployed techniques fail to provide usable access-control specification or authentication in such settings. In this paper, we begin reenvisioning access control and authentication for the home IoT. We propose that access control focus on IoT capabilities (i. e., certain actions that devices can perform), rather than on a per-device granularity. In a 425-participant online user study, we find stark differences in participants’ desired access-control policies for different capabilities within a single device, as well as based on who is trying to use that capability. From these desired policies, we identify likely candidates for default policies. We also pinpoint necessary primitives for specifying more complex, yet desired, access-control policies. These primitives range from the time of day to the current location of users. Finally, we discuss the degree to which different authentication methods potentially support desired policies.",
"title": ""
},
{
"docid": "8c04758d9f1c44e007abf6d2727d4a4f",
"text": "The automatic identification and diagnosis of rice diseases are highly desired in the field of agricultural information. Deep learning is a hot research topic in pattern recognition and machine learning at present, it can effectively solve these problems in vegetable pathology. In this study, we propose a novel rice diseases identification method based on deep convolutional neural networks (CNNs) techniques. Using a dataset of 500 natural images of diseased and healthy rice leaves and stems captured from rice experimental field, CNNs are trained to identify 10 common rice diseases. Under the 10-fold cross-validation strategy, the proposed CNNs-based model achieves an accuracy of 95.48%. This accuracy is much higher than conventional machine learning model. The simulation results for the identification of rice diseases show the feasibility and effectiveness of the proposed method. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1b6812231498387f158d24de8669dc27",
"text": "The ideas and findings in this report should not be construed as an official DoD position. It is published in the interest of scientific and technical information exchange. Use of any trademarks in this report is not intended in any way to infringe on the rights of the trademark holder. Internal use. Permission to reproduce this document and to prepare derivative works from this document for internal use is granted, provided the copyright and \" No Warranty \" statements are included with all reproductions and derivative works. External use. This document may be reproduced in its entirety, without modification, and freely distributed in written or electronic form without requesting formal permission. Permission is required for any other external and/or commercial use. a federally funded research and development center. The Government of the United States has a royalty-free government-purpose license to use, duplicate, or disclose the work, in whole or in part and in any manner, and to have or permit others to do so, for government purposes pursuant to the copyright license under the clause at 252.227-7013. Abstract xiii 1 Introduction 1 1.1 Purpose and Structure of this Report 1 1.2 Background 1 1.3 The Strategic Planning Landscape 1",
"title": ""
},
{
"docid": "4cda02d9f5b5b16773b8cbffc54e91ca",
"text": "We present a novel global stereo model designed for view interpolation. Unlike existing stereo models which only output a disparity map, our model is able to output a 3D triangular mesh, which can be directly used for view interpolation. To this aim, we partition the input stereo images into 2D triangles with shared vertices. Lifting the 2D triangulation to 3D naturally generates a corresponding mesh. A technical difficulty is to properly split vertices to multiple copies when they appear at depth discontinuous boundaries. To deal with this problem, we formulate our objective as a two-layer MRF, with the upper layer modeling the splitting properties of the vertices and the lower layer optimizing a region-based stereo matching. Experiments on the Middlebury and the Herodion datasets demonstrate that our model is able to synthesize visually coherent new view angles with high PSNR, as well as outputting high quality disparity maps which rank at the first place on the new challenging high resolution Middlebury 3.0 benchmark.",
"title": ""
},
{
"docid": "b1958bbb9348a05186da6db649490cdd",
"text": "Fourier ptychography (FP) utilizes illumination control and computational post-processing to increase the resolution of bright-field microscopes. In effect, FP extends the fixed numerical aperture (NA) of an objective lens to form a larger synthetic system NA. Here, we build an FP microscope (FPM) using a 40X 0.75NA objective lens to synthesize a system NA of 1.45. This system achieved a two-slit resolution of 335 nm at a wavelength of 632 nm. This resolution closely adheres to theoretical prediction and is comparable to the measured resolution (315 nm) associated with a standard, commercially available 1.25 NA oil immersion microscope. Our work indicates that Fourier ptychography is an attractive method to improve the resolution-versus-NA performance, increase the working distance, and enlarge the field-of-view of high-resolution bright-field microscopes by employing lower NA objectives.",
"title": ""
},
{
"docid": "8715a3b9ac7487adbb6d58e8a45ceef6",
"text": "Before the computer age, authenticating a user was a relatively simple process. One person could authenticate another by visual recognition, interpersonal communication, or, more formally, mutually agreed upon authentication methods. With the onset of the computer age, authentication has become more complicated. Face-to-face visual authentication has largely dissipated, with computers and networks intervening. Sensitive information is exchanged daily between humans and computers, and from computer to computer. This complexity demands more formal protection methods; in short, authentication processes to manage our routine interactions with such machines and networks. Authentication is the process of positively verifying identity, be it that of a user, device, or entity in a computer system. Often authentication is the prerequisite to accessing system resources. Positive verification is accomplished by means of matching some indicator of identity, such as a shared secret prearranged at the time a person was authorized to use the system. The most familiar user authenticator in use today is the password. The secure sockets layer (SSL) is an example of machine to machine authentication. Human–machine authentication is known as user authentication and it consists of verifying the identity of a user: is this person really who she claims to be? User authentication is much less secure than machine authentication and is known as the Achilles’ heel of secure systems. This paper introduces various human authenticators and compares them based on security, convenience, and cost. The discussion is set in the context of a larger analysis of security issues, namely, measuring a system’s vulnerability to attack. The focus is kept on remote computer authentication. Authenticators can be categorized into three main types: secrets (what you know), tokens (what you have), and IDs (who you are). A password is a secret word, phrase, or personal identification number. Although passwords are ubiquitously used, they pose vulnerabilities, the biggest being that a short mnemonic password can be guessed or searched by an ambitious attacker, while a longer, random password is difficult for a person to remember. A token is a physical device used to aid authentication. Examples include bank cards and smart cards. A token can be an active device that yields one-time passcodes (time-synchronous or",
"title": ""
},
{
"docid": "33285ad9f7bc6e33b48e3f1e27a1ccc9",
"text": "Information visualization is a very important tool in BigData analytics. BigData, structured and unstructured data which contains images, videos, texts, audio and other forms of data, collected from multiple datasets, is too big, too complex and moves too fast to analyse using traditional methods. This has given rise to two issues; 1) how to reduce multidimensional data without the loss of any data patterns for multiple datasets, 2) how to visualize BigData patterns for analysis. In this paper, we have classified the BigData attributes into `5Ws' data dimensions, and then established a `5Ws' density approach that represents the characteristics of data flow patterns. We use parallel coordinates to display the `5Ws' sending and receiving densities which provide more analytic features for BigData analysis. The experiment shows that this new model with parallel coordinate visualization can be efficiently used for BigData analysis and visualization.",
"title": ""
},
{
"docid": "5bd3cf8712d04b19226e53fca937e5a6",
"text": "This paper reviews the published studies on tourism demand modelling and forecasting since 2000. One of the key findings of this review is that the methods used in analysing and forecasting the demand for tourism have been more diverse than those identified by other review articles. In addition to the most popular time series and econometric models, a number of new techniques have emerged in the literature. However, as far as the forecasting accuracy is concerned, the study shows that there is no single model that consistently outperforms other models in all situations. Furthermore, this study identifies some new research directions, which include improving the forecasting accuracy through forecast combination; integrating both qualitative and quantitative forecasting approaches, tourism cycles and seasonality analysis, events’ impact assessment and risk forecasting.",
"title": ""
},
{
"docid": "f427dc8838618d0904cfe27200ac032d",
"text": "Sequential pattern mining has been studied extensively in data mining community. Most previous studies require the specification of a minimum support threshold to perform the mining. However, it is difficult for users to provide an appropriate threshold in practice. To overcome this difficulty, we propose an alternative task: mining topfrequent closed sequential patterns of length no less than , where is the desired number of closed sequential patterns to be mined, and is the minimum length of each pattern. We mine closed patterns since they are compact representations of frequent patterns. We developed an efficient algorithm, called TSP, which makes use of the length constraint and the properties of topclosed sequential patterns to perform dynamic supportraising and projected database-pruning. Our extensive performance study shows that TSP outperforms the closed sequential pattern mining algorithm even when the latter is running with the best tuned minimum support threshold.",
"title": ""
}
] |
scidocsrr
|
91b02ebcd000160014f99bfb8de326dd
|
Early Fusion of Camera and Lidar for robust road detection based on U-Net FCN
|
[
{
"docid": "36152b59aaaaa7e3a69ac57db17e44b8",
"text": "In this paper, a reliable road/obstacle detection with 3D point cloud for intelligent vehicle on a variety of challenging environments (undulated road and/or uphill/ downhill) is handled. For robust detection of road we propose the followings: 1) correction of 3D point cloud distorted by the motion of vehicle (high speed and heading up and down) incorporating vehicle posture information; 2) guideline for the best selection of the proper features such as gradient value, height average of neighboring node; 3) transformation of the road detection problem into a classification problem of different features; and 4) inference algorithm based on MRF with the loopy belief propagation for the area that the LIDAR does not cover. In experiments, we use a publicly available dataset as well as numerous scans acquired by the HDL-64E sensor mounted on experimental vehicle in inner city traffic scenes. The results show that the proposed method is more robust and reliable than the conventional approach based on the height value on the variety of challenging environment. Jaemin Byun Robot and Cognitive System Research Department(RCSRD) in Electronicsand Telecommunications Research Institute (ETRI), daejeon, south korea, e-mail: jaemin.byu@etri.re.kr, Ki-in Na Robot and Cognitive System Research Department(RCSRD) in Electronicsand Telecommunications Research Institute (ETRI), daejeon, south korea, e-mail: kina@etri.re.kr Beom-su Seo Robot and Cognitive System Research Department(RCSRD) in Electronicsand Telecommunications Research Institute (ETRI), daejeon, south korea, e-mail: bsseo@etri.re.kr MyungChan Roh Robot and Cognitive System Research Department(RCSRD) in Electronicsand Telecommunications Research Institute (ETRI), daejeon, south korea, e-mail: mcroh@etri.re.kr",
"title": ""
},
{
"docid": "378dcab60812075f58534d8dca1c5f33",
"text": "Autonomous driving is a key factor for future mobility. Properly perceiving the environment of the vehicles is essential for a safe driving, which requires computing accurate geometric and semantic information in real-time. In this paper, we challenge state-of-the-art computer vision algorithms for building a perception system for autonomous driving. An inherent drawback in the computation of visual semantics is the trade-off between accuracy and computational cost. We propose to circumvent this problem by following an offline-online strategy. During the offline stage dense 3D semantic maps are created. In the online stage the current driving area is recognized in the maps via a re-localization process, which allows to retrieve the pre-computed accurate semantics and 3D geometry in real-time. Then, detecting the dynamic obstacles we obtain a rich understanding of the current scene. We evaluate quantitatively our proposal in the KITTI dataset and discuss the related open challenges for the computer vision community.",
"title": ""
}
] |
[
{
"docid": "fb05042ac52f448d9c7d3f820df4b790",
"text": "Protein gamma-turn prediction is useful in protein function studies and experimental design. Several methods for gamma-turn prediction have been developed, but the results were unsatisfactory with Matthew correlation coefficients (MCC) around 0.2–0.4. Hence, it is worthwhile exploring new methods for the prediction. A cutting-edge deep neural network, named Capsule Network (CapsuleNet), provides a new opportunity for gamma-turn prediction. Even when the number of input samples is relatively small, the capsules from CapsuleNet are effective to extract high-level features for classification tasks. Here, we propose a deep inception capsule network for gamma-turn prediction. Its performance on the gamma-turn benchmark GT320 achieved an MCC of 0.45, which significantly outperformed the previous best method with an MCC of 0.38. This is the first gamma-turn prediction method utilizing deep neural networks. Also, to our knowledge, it is the first published bioinformatics application utilizing capsule network, which will provide a useful example for the community. Executable and source code can be download at http://dslsrv8.cs.missouri.edu/~cf797/MUFoldGammaTurn/download.html.",
"title": ""
},
{
"docid": "ef7069ddd470608196bbeef5e8fda49d",
"text": "ETHNOPHARMACOLOGICAL RELEVANCE\nNigella sativa (N. sativa) L. (Ranunculaceae), well known as black cumin, has been used as a herbal medicine that has a rich historical background. It has been traditionally and clinically used in the treatment of several diseases. Many reviews have investigated this valuable plant, but none of them focused on its clinical effects. Therefore, the aim of the present review is to provide a comprehensive report of clinical studies on N. sativa and some of its constituents.\n\n\nMATERIALS AND METHODS\nStudies on the clinical effects of N. sativa and its main constituent, thymoquinone, which were published between 1979 and 2015, were searched using various databases.\n\n\nRESULTS AND DISCUSSION\nDuring the last three decades, several in vivo and in vitro animal studies revealed the pharmacological properties of the plant, including its antioxidant, antibacterial, antiproliferative, proapoptotic, anti-inflammatory, and antiepileptic properties, and its effect on improvement in atherogenesis, endothelial dysfunction, glucose metabolism, lipid profile dysfunction, and prevention of hippocampus pyramidal cell loss. In clinical studies, antimicrobial, antioxidant, anti-inflammatory, antitumor, and antidiabetic properties as well as therapeutic effects on metabolic syndrome, and gastrointestinal, neuronal, cardiovascular, respiratory, urinary, and reproductive disorders were found in N. sativa and its constituents.\n\n\nCONCLUSION\nExtensive basic and clinical studies on N. sativa seed powder, oil, extracts (aqueous, ethanolic, and methanolic), and thymoquinone showed valuable therapeutic effects on different disorders with a wide range of safe doses. However, there were some confounding factors in the reviewed clinical trials, and a few of them presented data about the phytochemical composition of the plant. Therefore, a more standard clinical trial with N. sativa supplementation is needed for the plant to be used as an inexpensive potential biological adjuvant therapy.",
"title": ""
},
{
"docid": "7a1e32dc80550704207c5e0c7e73da26",
"text": "Stock markets are affected by many uncertainties and interrelated economic and political factors at both local and global levels. The key to successful stock market forecasting is achieving best results with minimum required input data. To determine the set of relevant factors for making accurate predictions is a complicated task and so regular stock market analysis is very essential. More specifically, the stock market’s movements are analyzed and predicted in order to retrieve knowledge that could guide investors on when to buy and sell. It will also help the investor to make money through his investment in the stock market. This paper surveys large number of resources from research papers, web-sources, company reports and other available sources.",
"title": ""
},
{
"docid": "17f685f61fba724311a86267cdf33871",
"text": "The main advantage of using the Hough Transform to detect ellipses is its robustness against missing data points. However, the storage and computational requirements of the Hough Transform preclude practical applications. Although there are many modifications to the Hough Transform, these modifications still demand significant storage requirement. In this paper, we present a novel ellipse detection algorithm which retains the original advantages of the Hough Transform while minimizing the storage and computation complexity. More specifically, we use an accumulator that is only one dimensional. As such, our algorithm is more effective in terms of storage requirement. In addition, our algorithm can be easily parallelized to achieve good execution time. Experimental results on both synthetic and real images demonstrate the robustness and effectiveness of our algorithm in which both complete and incomplete ellipses can be extracted.",
"title": ""
},
{
"docid": "ec89eb1388055a1c81eb26bf2e2d1316",
"text": "There is growing interest across a range of disciplines in the relationship between pets and health, with a range of therapeutic, physiological, psychological and psychosocial benefits now documented. While much of the literature has focused on the individual benefits of pet ownership, this study considered the potential health benefits that might accrue to the broader community, as encapsulated in the construct of social capital. A random survey of 339 adult residents from Perth, Western Australia were selected from three suburbs and interviewed by telephone. Pet ownership was found to be positively associated with some forms of social contact and interaction, and with perceptions of neighbourhood friendliness. After adjustment for demographic variables, pet owners scored higher on social capital and civic engagement scales. The results suggest that pet ownership provides potential opportunities for interactions between neighbours and that further research in this area is warranted. Social capital is another potential mechanism by which pets exert an influence on human health.",
"title": ""
},
{
"docid": "13d5011f3d6c1997e3c44b3f03cf2017",
"text": "Reinforcement learning with appropriately designed reward signal could be used to solve many sequential learning problems. However, in practice, the reinforcement learning algorithms could be broken in unexpected, counterintuitive ways. One of the failure modes is reward hacking which usually happens when a reward function makes the agent obtain high return in an unexpected way. This unexpected way may subvert the designer’s intentions and lead to accidents during training. In this paper, a new multi-step state-action value algorithm is proposed to solve the problem of reward hacking. Unlike traditional algorithms, the proposed method uses a new return function, which alters the discount of future rewards and no longer stresses the immediate reward as the main influence when selecting the current state action. The performance of the proposed method is evaluated on two games, Mappy and Mountain Car. The empirical results demonstrate that the proposed method can alleviate the negative impact of reward hacking and greatly improve the performance of reinforcement learning algorithm. Moreover, the results illustrate that the proposed method could also be applied to the continuous state space problem successfully.",
"title": ""
},
{
"docid": "989cdc80521e1c8761f733ad3ed49d79",
"text": "The wide availability of sensing devices in the medical domain causes the creation of large and very large data sets. Hence, tasks as the classification in such data sets becomes more and more difficult. Deep Neural Networks (DNNs) are very effective in classification, yet finding the best values for their hyper-parameters is a difficult and time-consuming task. This paper introduces an approach to decrease execution times to automatically find good hyper-parameter values for DNN through Evolutionary Algorithms when classification task is faced. This decrease is obtained through the combination of two mechanisms. The former is constituted by a distributed version for a Differential Evolution algorithm. The latter is based on a procedure aimed at reducing the size of the training set and relying on a decomposition into cubes of the space of the data set attributes. Experiments are carried out on a medical data set about Obstructive Sleep Anpnea. They show that sub-optimal DNN hyper-parameter values are obtained in a much lower time with respect to the case where this reduction is not effected, and that this does not come to the detriment of the accuracy in the classification over the test set items.",
"title": ""
},
{
"docid": "467637b1f55d4673d0ddd5322a130979",
"text": "In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of this paper is the observation that unrolled iterative methods have the form of a CNN (filtering followed by pointwise non-linearity) when the normal operator (<inline-formula> <tex-math notation=\"LaTeX\">$H^{*}H$ </tex-math></inline-formula>, where <inline-formula> <tex-math notation=\"LaTeX\">$H^{*}$ </tex-math></inline-formula> is the adjoint of the forward imaging operator, <inline-formula> <tex-math notation=\"LaTeX\">$H$ </tex-math></inline-formula>) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a <inline-formula> <tex-math notation=\"LaTeX\">$512\\times 512$ </tex-math></inline-formula> image on the GPU.",
"title": ""
},
{
"docid": "ea5431e8f2f1e197988cf1b52ee685ce",
"text": "Prunus mume (mei), which was domesticated in China more than 3,000 years ago as ornamental plant and fruit, is one of the first genomes among Prunus subfamilies of Rosaceae been sequenced. Here, we assemble a 280M genome by combining 101-fold next-generation sequencing and optical mapping data. We further anchor 83.9% of scaffolds to eight chromosomes with genetic map constructed by restriction-site-associated DNA sequencing. Combining P. mume genome with available data, we succeed in reconstructing nine ancestral chromosomes of Rosaceae family, as well as depicting chromosome fusion, fission and duplication history in three major subfamilies. We sequence the transcriptome of various tissues and perform genome-wide analysis to reveal the characteristics of P. mume, including its regulation of early blooming in endodormancy, immune response against bacterial infection and biosynthesis of flower scent. The P. mume genome sequence adds to our understanding of Rosaceae evolution and provides important data for improvement of fruit trees.",
"title": ""
},
{
"docid": "4a761bed54487cb9c34fc0ff27883944",
"text": "We show that unsupervised training of latent capsule layers using only the reconstruction loss, without masking to select the correct output class, causes a loss of equivariances and other desirable capsule qualities. This implies that supervised capsules networks can’t be very deep. Unsupervised sparsening of latent capsule layer activity both restores these qualities and appears to generalize better than supervised masking, while potentially enabling deeper capsules networks. We train a sparse, unsupervised capsules network of similar geometry to (Sabour et al., 2017) on MNIST (LeCun et al., 1998) and then test classification accuracy on affNIST1 using an SVM layer. Accuracy is improved from benchmark 79% to 90%.",
"title": ""
},
{
"docid": "ece81717e6cdab30cfb60d705bc4fc5e",
"text": "It is well established that autism spectrum disorders (ASD) have a strong genetic component; however, for at least 70% of cases, the underlying genetic cause is unknown. Under the hypothesis that de novo mutations underlie a substantial fraction of the risk for developing ASD in families with no previous history of ASD or related phenotypes—so-called sporadic or simplex families—we sequenced all coding regions of the genome (the exome) for parent–child trios exhibiting sporadic ASD, including 189 new trios and 20 that were previously reported. Additionally, we also sequenced the exomes of 50 unaffected siblings corresponding to these new (n = 31) and previously reported trios (n = 19), for a total of 677 individual exomes from 209 families. Here we show that de novo point mutations are overwhelmingly paternal in origin (4:1 bias) and positively correlated with paternal age, consistent with the modest increased risk for children of older fathers to develop ASD. Moreover, 39% (49 of 126) of the most severe or disruptive de novo mutations map to a highly interconnected β-catenin/chromatin remodelling protein network ranked significantly for autism candidate genes. In proband exomes, recurrent protein-altering mutations were observed in two genes: CHD8 and NTNG1. Mutation screening of six candidate genes in 1,703 ASD probands identified additional de novo, protein-altering mutations in GRIN2B, LAMC3 and SCN1A. Combined with copy number variant (CNV) data, these results indicate extreme locus heterogeneity but also provide a target for future discovery, diagnostics and therapeutics.",
"title": ""
},
{
"docid": "acf514a4aa34487121cc853e55ceaed4",
"text": "Stereotype threat spillover is a situational predicament in which coping with the stress of stereotype confirmation leaves one in a depleted volitional state and thus less likely to engage in effortful self-control in a variety of domains. We examined this phenomenon in 4 studies in which we had participants cope with stereotype and social identity threat and then measured their performance in domains in which stereotypes were not \"in the air.\" In Study 1 we examined whether taking a threatening math test could lead women to respond aggressively. In Study 2 we investigated whether coping with a threatening math test could lead women to indulge themselves with unhealthy food later on and examined the moderation of this effect by personal characteristics that contribute to identity-threat appraisals. In Study 3 we investigated whether vividly remembering an experience of social identity threat results in risky decision making. Finally, in Study 4 we asked whether coping with threat could directly influence attentional control and whether the effect was implemented by inefficient performance monitoring, as assessed by electroencephalography. Our results indicate that stereotype threat can spill over and impact self-control in a diverse array of nonstereotyped domains. These results reveal the potency of stereotype threat and that its negative consequences might extend further than was previously thought.",
"title": ""
},
{
"docid": "1adacc7dc452e27024756c36eecb8cae",
"text": "The techniques of using neural networks to learn distributed word representations (i.e., word embeddings) have been used to solve a variety of natural language processing tasks. The recently proposed methods, such as CBOW and Skip-gram, have demonstrated their effectiveness in learning word embeddings based on context information such that the obtained word embeddings can capture both semantic and syntactic relationships between words. However, it is quite challenging to produce high-quality word representations for rare or unknown words due to their insufficient context information. In this paper, we propose to leverage morphological knowledge to address this problem. Particularly, we introduce the morphological knowledge as both additional input representation and auxiliary supervision to the neural network framework. As a result, beyond word representations, the proposed neural network model will produce morpheme representations, which can be further employed to infer the representations of rare or unknown words based on their morphological structure. Experiments on an analogical reasoning task and several word similarity tasks have demonstrated the effectiveness of our method in producing high-quality words embeddings compared with the state-of-the-art methods.",
"title": ""
},
{
"docid": "9837e331cf1c2a5bb0cee92e4ae44ca5",
"text": "Isocitrate dehydrogenase 2 (IDH2) is located in the mitochondrial matrix. IDH2 acts in the forward Krebs cycle as an NADP(+)-consuming enzyme, providing NADPH for maintenance of the reduced glutathione and peroxiredoxin systems and for self-maintenance by reactivation of cystine-inactivated IDH2 by glutaredoxin 2. In highly respiring cells, the resulting NAD(+) accumulation then induces sirtuin-3-mediated activating IDH2 deacetylation, thus increasing its protective function. Reductive carboxylation of 2-oxoglutarate by IDH2 (in the reverse Krebs cycle direction), which consumes NADPH, may follow glutaminolysis of glutamine to 2-oxoglutarate in cancer cells. When the reverse aconitase reaction and citrate efflux are added, this overall \"anoxic\" glutaminolysis mode may help highly malignant tumors survive aglycemia during hypoxia. Intermittent glycolysis would hypothetically be required to provide ATP. When oxidative phosphorylation is dormant, this mode causes substantial oxidative stress. Arg172 mutants of human IDH2-frequently found with similar mutants of cytosolic IDH1 in grade 2 and 3 gliomas, secondary glioblastomas, and acute myeloid leukemia-catalyze reductive carboxylation of 2-oxoglutarate and reduction to D-2-hydroxyglutarate, which strengthens the neoplastic phenotype by competitive inhibition of histone demethylation and 5-methylcytosine hydroxylation, leading to genome-wide histone and DNA methylation alternations. D-2-hydroxyglutarate also interferes with proline hydroxylation and thus may stabilize hypoxia-induced factor α.",
"title": ""
},
{
"docid": "116294113ff20558d3bcb297950f6d63",
"text": "This paper aims to analyze the influence of a Halbach array by using a semi analytical design optimization approach on a novel electrical machine design with slotless air gap winding. The useable magnetic flux density caused by the Halbach array magnetization is studied and compared to conventional radial magnetization systems. First, several discrete magnetic flux densities are analyzed for an infinitesimal wire size in an air gap range from 0.1 mm to 5 mm by the finite element method in Ansys Maxwell. Fourier analysis is used to approximate continuous functions for each magnetic flux density characteristic for each air gap height. Then, using a six-step commutation control, the magnetic flux acting on a certain phase geometry is considered for a parametric machine model. The design optimization approach utilizes the design freedom of the magnetic flux density shape in air gap as well as the heights and depths of all magnetic circuit components, which are stator and rotor cores, permanent magnets, air gap, and air gap winding. Use of a nonlinear optimization formulation, allows for fast and precise analytical calculation of objective function. In this way the influence of both magnetizations on Pareto optimal machine design sets, when mass and efficiency are weighted, are compared. Other design requirements, such as torque, current, air gap and wire height, are considered via constraints on this optimization. Finally, an optimal motor design study for the Halbach array magnetization pattern is compared to the conventional radial magnetization. As a reference design, an existing 15-inch rim wheel-hub motor with air gap winding is used.",
"title": ""
},
{
"docid": "eb8f0a30d222b89e5fda3ea1d83ea525",
"text": "We present a method which exploits automatically generated scientific discourse annotations to create a content model for the summarisation of scientific articles. Full papers are first automatically annotated using the CoreSC scheme, which captures 11 contentbased concepts such as Hypothesis, Result, Conclusion etc at the sentence level. A content model which follows the sequence of CoreSC categories observed in abstracts is used to provide the skeleton of the summary, making a distinction between dependent and independent categories. Summary creation is also guided by the distribution of CoreSC categories found in the full articles, in order to adequately represent the article content. Finally, we demonstrate the usefulness of the summaries by evaluating them in a complex question answering task. Results are very encouraging as summaries of papers from automatically obtained CoreSCs enable experts to answer 66% of complex content-related questions designed on the basis of paper abstracts. The questions were answered with a precision of 75%, where the upper bound for human summaries (abstracts) was 95%.",
"title": ""
},
{
"docid": "3c29c0a3e8ec6292f05c7907436b5e9a",
"text": "Emerging Wi-Fi technologies are expected to cope with large amounts of traffic in dense networks. Consequently, proposals for the future IEEE 802.11ax Wi-Fi amendment include sensing threshold and transmit power adaptation, in order to improve spatial reuse. However, it is not yet understood to which extent such adaptive approaches — and which variant — would achieve a better balance between spatial reuse and the level of interference, in order to improve the network performance. Moreover, it is not clear how legacy Wi-Fi devices would be affected by new-generation Wi-Fi implementing these adaptive design parameters. In this paper we present a thorough comparative study in ns-3 for four major proposed adaptation algorithms and we compare their performance against legacy non-adaptive Wi-Fi. Additionally, we consider mixed populations where both legacy non-adaptive and new-generation adaptive populations coexist. We assume a dense indoor residential deployment and different numbers of available channels in the 5 GHz band, relevant for future IEEE 802.11ax. Our results show that for the dense scenarios considered, the algorithms do not significantly improve the overall network performance compared to the legacy baseline, as they increase the throughput of some nodes, while decreasing the throughput of others. For mixed populations in dense deployments, adaptation algorithms that improve the performance of new-generation nodes degrade the performance of legacy nodes and vice versa. This suggests that to support Wi-Fi evolution for dense deployments and consistently increase the throughput throughout the network, more sophisticated algorithms are needed, e.g. considering combinations of input parameters in current variants.",
"title": ""
},
{
"docid": "5445892bdf8478cfacac9d599dead1f9",
"text": "The problem of determining feature correspondences across multiple views is considered. The term \"true multi-image\" matching is introduced to describe techniques that make full and efficient use of the geometric relationships between multiple images and the scene. A true multi-image technique must generalize to any number of images, be of linear algorithmic complexity in the number of images, and use all the images in an equal manner. A new space-sweep approach to true multi-image matching is presented that simultaneously determines 2D feature correspondences and the 3D positions of feature points in the scene. The method is illustrated on a seven-image matching example from the aerial im-",
"title": ""
},
{
"docid": "a39fb4e8c15878ba4fdac54f02451789",
"text": "The Cloud computing system can be easily threatened by various attacks, because most of the cloud computing systems provide service to so many people who are not proven to be trustworthy. Due to their distributed nature, cloud computing environment are easy targets for intruders[1]. There are various Intrusion Detection Systems having various specifications to each. Cloud computing have two approaches i. e. Knowledge-based IDS and Behavior-Based IDS to detect intrusions in cloud computing. Behavior-Based IDS assumes that an intrusion can be detected by observing a deviation from normal to expected behavior of the system or user[2]s. Knowledge-based IDS techniques apply knowledge",
"title": ""
},
{
"docid": "74fd21dccc9e883349979c8292c5f450",
"text": "Stack Overflow (SO) has been a great source of natural language questions and their code solutions (i.e., question-code pairs), which are critical for many tasks including code retrieval and annotation. In most existing research, question-code pairs were collected heuristically and tend to have low quality. In this paper, we investigate a new problem of systematically mining question-code pairs from Stack Overflow (in contrast to heuristically collecting them). It is formulated as predicting whether or not a code snippet is a standalone solution to a question. We propose a novel Bi-View Hierarchical Neural Network which can capture both the programming content and the textual context of a code snippet (i.e., two views) to make a prediction. On two manually annotated datasets in Python and SQL domain, our framework substantially outperforms heuristic methods with at least 15% higher F1 and accuracy. Furthermore, we present StaQC (Stack Overflow Question-Code pairs), the largest dataset to date of ∼148K Python and ∼120K SQL question-code pairs, automatically mined from SO using our framework. Under various case studies, we demonstrate that StaQC can greatly help develop data-hungry models for associating natural language with programming language1.",
"title": ""
}
] |
scidocsrr
|
5b67442ae83eb4edbcdfb6851947e8e9
|
Evaluating the Potential of Texture and Color Descriptors for Remote Sensing Image Retrieval and Classification
|
[
{
"docid": "84ca7dc9cac79fe14ea2061919c44a05",
"text": "We describe two new color indexing techniques. The rst one is a more robust version of the commonly used color histogram indexing. In the index we store the cumulative color histograms. The L 1-, L 2-, or L 1-distance between two cumulative color histograms can be used to deene a similarity measure of these two color distributions. We show that while this method produces only slightly better results than color histogram methods, it is more robust with respect to the quantization parameter of the histograms. The second technique is an example of a new approach to color indexing. Instead of storing the complete color distributions, the index contains only their dominant features. We implement this approach by storing the rst three moments of each color channel of an image in the index, i.e., for a HSV image we store only 9 oating point numbers per image. The similarity function which is used for the retrieval is a weighted sum of the absolute diierences between corresponding moments. Our tests clearly demonstrate that a retrieval based on this technique produces better results and runs faster than the histogram-based methods.",
"title": ""
}
] |
[
{
"docid": "5ce6bac4ec1f916c1ebab9da09816c0e",
"text": "High-performance parallel computing architectures are increasingly based on multi-core processors. While current commercially available processors are at 8 and 16 cores, technological and power constraints are limiting the performance growth of the cores and are resulting in architectures with much higher core counts, such as the experimental many-core Intel Single-chip Cloud Computer (SCC) platform. These trends are presenting new sets of challenges to HPC applications including programming complexity and the need for extreme energy efficiency.\n In this paper, we first investigate the power behavior of scientific Partitioned Global Address Space (PGAS) application kernels on the SCC platform, and explore opportunities and challenges for power management within the PGAS framework. Results obtained via empirical evaluation of Unified Parallel C (UPC) applications on the SCC platform under different constraints, show that, for specific operations, the potential for energy savings in PGAS is large; and power/performance trade-offs can be effectively managed using a cross-layer approach. We investigate cross-layer power management using PGAS language extensions and runtime mechanisms that manipulate power/performance tradeoffs. Specifically, we present the design, implementation and evaluation of such a middleware for application-aware cross-layer power management of UPC applications on the SCC platform. Finally, based on our observations, we provide a set of insights that can be used to support similar power management for PGAS applications on other many-core platforms.",
"title": ""
},
{
"docid": "799c839fad857c1ba90a9905f1b1d544",
"text": "Much of the research published in the property discipline consists of work utilising quantitative methods. While research gained using quantitative methods, if appropriately designed and rigorous, leads to results which are typically generalisable and quantifiable, it does not allow for a rich and in-depth understanding of a phenomenon. This is especially so if a researcher’s aim is to uncover the issues or factors underlying that phenomenon. Such an aim would require using a qualitative research methodology, and possibly an interpretive as opposed to a positivist theoretical perspective. The purpose of this paper is to provide a general overview of qualitative methodologies with the aim of encouraging a broadening of methodological approaches to overcome the positivist methodological bias which has the potential of inhibiting property behavioural research.",
"title": ""
},
{
"docid": "8503b51197d8242c4ec242f7190c2405",
"text": "We provide a state-of-the-art explication of application security and software protection. The relationship between application security and data security, network security, and software security is discussed. Three simplified threat models for software are sketched. To better understand what attacks must be defended against in order to improve software security, we survey software attack approaches and attack tools. A simplified software security view of a software application is given, and along with illustrative examples, used to motivate a partial list of software security requirements for applications.",
"title": ""
},
{
"docid": "a679d37b88485cf71569f9aeefefbac5",
"text": "Incrementality is ubiquitous in human-human interaction and beneficial for human-computer interaction. It has been a topic of research in different parts of the NLP community, mostly with focus on the specific topic at hand even though incremental systems have to deal with similar challenges regardless of domain. In this survey, I consolidate and categorize the approaches, identifying similarities and differences in the computation and data, and show trade-offs that have to be considered. A focus lies on evaluating incremental systems because the standard metrics often fail to capture the incremental properties of a system and coming up with a suitable evaluation scheme is non-trivial. Title and Abstract in German Inkrementelle Sprachverarbeitung: Herausforderungen, Strategien und Evaluation Inkrementalität ist allgegenwärtig in Mensch-Mensch-Interaktiton und hilfreich für MenschComputer-Interaktion. In verschiedenen Teilen der NLP-Community wird an Inkrementalität geforscht, zumeist fokussiert auf eine konkrete Aufgabe, obwohl sich inkrementellen Systemen domänenübergreifend ähnliche Herausforderungen stellen. In diesem Überblick trage ich Ansätze zusammen, kategorisiere sie und stelle Ähnlichkeiten und Unterschiede in Berechnung und Daten sowie nötige Abwägungen vor. Ein Fokus liegt auf der Evaluierung inkrementeller Systeme, da Standardmetriken of nicht in der Lage sind, die inkrementellen Eigenschaften eines Systems einzufangen und passende Evaluationsschemata zu entwickeln nicht einfach ist.",
"title": ""
},
{
"docid": "9ca71bbeb4643a6a347050002f1317f5",
"text": "In modern society, we are increasingly disconnected from natural light/dark cycles and beset by round-the-clock exposure to artificial light. Light has powerful effects on physical and mental health, in part via the circadian system, and thus the timing of light exposure dictates whether it is helpful or harmful. In their compelling paper, Obayashi et al. (Am J Epidemiol. 2018;187(3):427-434.) offer evidence that light at night can prospectively predict an elevated incidence of depressive symptoms in older adults. Strengths of the study include the longitudinal design and direct, objective assessment of light levels, as well as accounting for multiple plausible confounders during analyses. Follow-up studies should address the study's limitations, including reliance on a global self-report of sleep quality and a 2-night assessment of light exposure that may not reliably represent typical light exposure. In addition, experimental studies including physiological circadian measures will be necessary to determine whether the light effects on depression are mediated through the circadian system or are so-called \"direct\" effects of light. In any case, these exciting findings could inform novel approaches to preventing depressive disorders in older adults.",
"title": ""
},
{
"docid": "b9fb60fadf13304b46f87fda305f118e",
"text": "Coordinated cyberattacks of power meter readings can be arranged to be undetectable by any bad data detection algorithm in the power system state estimation process. These unobservable attacks present a potentially serious threat to grid operations. Of particular interest are sparse attacks that involve the compromise of a modest number of meter readings. An efficient algorithm to find all unobservable attacks [under standard DC load flow approximations] involving the compromise of exactly two power injection meters and an arbitrary number of line power meters is presented. This requires O(n2m) flops for a power system with n buses and m line meters. If all lines are metered, there exist canonical forms that characterize all 3, 4, and 5-sparse unobservable attacks. These can be quickly detected in power systems using standard graph algorithms. Known-secure phasor measurement units [PMUs] can be used as countermeasures against an arbitrary collection of cyberattacks. Finding the minimum number of necessary PMUs is NP-hard. It is shown that p + 1 PMUs at carefully chosen buses are sufficient to neutralize a collection of p cyberattacks.",
"title": ""
},
{
"docid": "69eceabd9967260cbdec56d02bcafd83",
"text": "A modified Vivaldi antenna is proposed in this paper especially for the millimeter-wave application. The metal support frame is used to fix the structured substrate and increased the front-to-back ratio as well as the radiation gain. Detailed design process are presented, following which one sample is designed with its working frequency band from 75GHz to 150 GHz. The sample is also fabricated and measured. Good agreements between simulated results and measured results are obtained.",
"title": ""
},
{
"docid": "d0f71092df2eab53e7f32eff1cb7af2e",
"text": "Topic modeling of textual corpora is an important and challenging problem. In most previous work, the “bag-of-words” assumption is usually made which ignores the ordering of words. This assumption simplifies the computation, but it unrealistically loses the ordering information and the semantic of words in the context. In this paper, we present a Gaussian Mixture Neural Topic Model (GMNTM) which incorporates both the ordering of words and the semantic meaning of sentences into topic modeling. Specifically, we represent each topic as a cluster of multi-dimensional vectors and embed the corpus into a collection of vectors generated by the Gaussian mixture model. Each word is affected not only by its topic, but also by the embedding vector of its surrounding words and the context. The Gaussian mixture components and the topic of documents, sentences and words can be learnt jointly. Extensive experiments show that our model can learn better topics and more accurate word distributions for each topic. Quantitatively, comparing to state-of-the-art topic modeling approaches, GMNTM obtains significantly better performance in terms of perplexity, retrieval accuracy and classification accuracy.",
"title": ""
},
{
"docid": "1d632b05f8b3ff5300a2a3ece8d05376",
"text": "This study focuses on feature selection in paralinguistic analysis and presents recently developed supervised and unsupervised methods for feature subset selection and feature ranking. Using the standard k-nearest-neighbors (kNN) rule as the classification algorithm, the feature selection methods are evaluated individually and in different combinations in seven paralinguistic speaker trait classification tasks. In each analyzed data set, the overall number of features highly exceeds the number of data points available for training and evaluation, making a well-generalizing feature selection process extremely difficult. The performance of feature sets on the feature selection data is observed to be a poor indicator of their performance on unseen data. The studied feature selection methods clearly outperform a standard greedy hill-climbing selection algorithm by being more robust against overfitting. When the selection methods are suitably combined with each other, the performance in the classification task can be further improved. In general, it is shown that the use of automatic feature selection in paralinguistic analysis can be used to reduce the overall number of features to a fraction of the original feature set size while still achieving a comparable or even better performance than baseline support vector machine or random forest classifiers using the full feature set. The most typically selected features for recognition of speaker likability, intelligibility and five personality traits are also reported.",
"title": ""
},
{
"docid": "390505bd6f04e899a15c64c26beac606",
"text": "Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To close this gap, a two-stage deep neural network (TDNN) is proposed. In particular, in the first stage, using the rated essays for nontarget prompts as the training data, a shallow model is learned to select essays with an extreme quality for the target prompt, serving as pseudo training data; in the second stage, an end-to-end hybrid deep model is proposed to learn a prompt-dependent rating model consuming the pseudo training data from the first step. Evaluation of the proposed TDNN on the standard ASAP dataset demonstrates a promising improvement for the prompt-independent AES task.",
"title": ""
},
{
"docid": "799bc245ecfabf59416432ab62fe9320",
"text": "This study examines resolution skills in phishing email detection, defined as the abilities of individuals to discern correct judgments from incorrect judgments in probabilistic decisionmaking. An illustration of the resolution skills is provided. A number of antecedents to resolution skills in phishing email detection, including familiarity with the sender, familiarity with the email, online transaction experience, prior victimization of phishing attack, perceived selfefficacy, time to judgment, and variability of time in judgments, are examined. Implications of the study are further discussed.",
"title": ""
},
{
"docid": "01835769f2dc9391051869374e200a6a",
"text": "Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (lscr 2) error term added to a sparsity-inducing (usually lscr1) regularizater. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard lscr2-lscr1 case, our framework yields efficient solution techniques for other regularizers, such as an lscrinfin norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard lscr2-lscr1 problem, as well as being efficient on problems with other separable regularization terms.",
"title": ""
},
{
"docid": "503c9c4d0d8f94d3e7a9ea8ee496e08b",
"text": "Memories for context become less specific with time resulting in animals generalizing fear from training contexts to novel contexts. Though much attention has been given to the neural structures that underlie the long-term consolidation of a context fear memory, very little is known about the mechanisms responsible for the increase in fear generalization that occurs as the memory ages. Here, we examine the neural pattern of activation underlying the expression of a generalized context fear memory in male C57BL/6J mice. Animals were context fear conditioned and tested for fear in either the training context or a novel context at recent and remote time points. Animals were sacrificed and fluorescent in situ hybridization was performed to assay neural activation. Our results demonstrate activity of the prelimbic, infralimbic, and anterior cingulate (ACC) cortices as well as the ventral hippocampus (vHPC) underlie expression of a generalized fear memory. To verify the involvement of the ACC and vHPC in the expression of a generalized fear memory, animals were context fear conditioned and infused with 4% lidocaine into the ACC, dHPC, or vHPC prior to retrieval to temporarily inactivate these structures. The results demonstrate that activity of the ACC and vHPC is required for the expression of a generalized fear memory, as inactivation of these regions returned the memory to a contextually precise form. Current theories of time-dependent generalization of contextual memories do not predict involvement of the vHPC. Our data suggest a novel role of this region in generalized memory, which should be incorporated into current theories of time-dependent memory generalization. We also show that the dorsal hippocampus plays a prolonged role in contextually precise memories. Our findings suggest a possible interaction between the ACC and vHPC controls the expression of fear generalization.",
"title": ""
},
{
"docid": "6716302b3168098a52f56b6aa7b82e94",
"text": "Consumers are increasingly relying on web-based social content, such as product reviews, prior to making to a purchase. Recent surveys in the Retail Industry confirm that social content is indeed the #1 aid in a buying decision. Currently, accessing or adding to this valuable web-based social content repository is mostly limited to computers far removed from the site of the shopping experience itself. We present a mobile Augmented Reality application, which extends such social content from the computer monitor into the physical world through mobile phones, providing consumers with in situ information on products right when and where they need to make buying decisions.",
"title": ""
},
{
"docid": "e8b0536f5d749b5f6f5651fe69debbe1",
"text": "Current centralized cloud datacenters provide scalable computation- and storage resources in a virtualized infrastructure and employ a use-based \"pay-as-you-go\" model. But current mobile devices and their resource-hungry applications (e.g., Speech-or face recognition) demand for these resources on the spot, though a mobile device's intrinsic characteristic is its limited availability of resources (e.g., CPU, storage, bandwidth, energy). Thus, mobile cloud computing (MCC) was introduced to overcome these limitations by transparently making accessible the apparently infinite cloud resources to the mobile devices and by allowing mobile applications to (elastically) expand into the cloud. However, MCC often relies on a stable and fast connection to the mobile devices' surrogate in the cloud, which is a rare case in mobile scenarios. Moreover, the increased latency and the limited bandwidth prevent the use of real-time applications like, e.g. Cloud gaming. Instead, mobile edge computing (MEC) or fog computing tries to provide the necessary resources at the logical edge of the network by including infrastructure components to create ad-hoc mobile clouds. However, this approach requires the replication and management of the applications' business logic in an untrusted, unreliable and constantly changing environment. Consequently, this paper presents a novel approach to allow mobile app developers to easily benefit from the features of MEC. In particular, we present a programming model and framework that directly fit the common app developers' mindset to design elastic and scalable edge-based mobile applications.",
"title": ""
},
{
"docid": "db42b2c5b9894943c3ba05fad07ee2f9",
"text": "This paper deals principally with the grid connection problem of a kite-based system, named the “Kite Generator System (KGS).” It presents a control scheme of a closed-orbit KGS, which is a wind power system with a relaxation cycle. Such a system consists of a kite with its orientation mechanism and a power transformation system that connects the previous part to the electric grid. Starting from a given closed orbit, the optimal tether's length rate variation (the kite's tether radial velocity) and the optimal orbit's period are found. The trajectory-tracking problem is not considered in this paper; only the kite's tether radial velocity is controlled via the electric machine rotation velocity. The power transformation system transforms the mechanical energy generated by the kite into electrical energy that can be transferred to the grid. A Matlab/simulink model of the KGS is employed to observe its behavior, and to insure the control of its mechanical and electrical variables. In order to improve the KGS's efficiency in case of slow changes of wind speed, a maximum power point tracking (MPPT) algorithm is proposed.",
"title": ""
},
{
"docid": "cdc1e3b629659bf342def1f262d7aa0b",
"text": "In educational contexts, understanding the student’s learning must take account of the student’s construction of reality. Reality as experienced by the student has an important additional value. This assumption also applies to a student’s perception of evaluation and assessment. Students’ study behaviour is not only determined by the examination or assessment modes that are used. Students’ perceptions about evaluation methods also play a significant role. This review aims to examine evaluation and assessment from the student’s point of view. Research findings reveal that students’ perceptions about assessment significantly influence their approaches to learning and studying. Conversely, students’ approaches to study influence the ways in which they perceive evaluation and assessment. Findings suggest that students hold strong views about different assessment and evaluation formats. In this respect students favour multiple-choice format exams to essay type questions. However, when compared with more innovative assessment methods, students call the ‘fairness’ of these well-known evaluation modes into question.",
"title": ""
},
{
"docid": "074567500751d814eef4ba979dc3cc8d",
"text": "Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner’s predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms’ merits and limitations. Reinforcement learning is of great interest because of the large number of practical applications that it can be used to address, ranging from problems in artificial intelligence to operations research or control engineering. In this book, we focus on those algorithms of reinforcement learning that build on the powerful theory of dynamic programming. We give a fairly comprehensive catalog of learning problems,",
"title": ""
},
{
"docid": "7d7d8d521cc098a7672cbe2e387dde58",
"text": "AIM\nThe purpose of this review is to represent acids that can be used as surface etchant before adhesive luting of ceramic restorations, placement of orthodontic brackets or repair of chipped porcelain restorations. Chemical reactions, application protocol, and etching effect are presented as well.\n\n\nSTUDY SELECTION\nAvailable scientific articles published in PubMed and Scopus literature databases, scientific reports and manufacturers' instructions and product information from internet websites, written in English, using following search terms: \"acid etching, ceramic surface treatment, hydrofluoric acid, acidulated phosphate fluoride, ammonium hydrogen bifluoride\", have been reviewed.\n\n\nRESULTS\nThere are several acids with fluoride ion in their composition that can be used as ceramic surface etchants. The etching effect depends on the acid type and its concentration, etching time, as well as ceramic type. The most effective etching pattern is achieved when using hydrofluoric acid; the numerous micropores and channels of different sizes, honeycomb-like appearance, extruded crystals or scattered irregular ceramic particles, depending on the ceramic type, have been detected on the etched surfaces.\n\n\nCONCLUSION\nAcid etching of the bonding surface of glass - ceramic restorations is considered as the most effective treatment method that provides a reliable bond with composite cement. Selective removing of the glassy matrix of silicate ceramics results in a micromorphological three-dimensional porous surface that allows micromechanical interlocking of the luting composite.",
"title": ""
},
{
"docid": "c36dac0c410570e84bf8634b32a0cac3",
"text": "The design of strategies for branching in Mixed Integer Programming (MIP) is guided by cycles of parameter tuning and offline experimentation on an extremely heterogeneous testbed, using the average performance. Once devised, these strategies (and their parameter settings) are essentially input-agnostic. To address these issues, we propose a machine learning (ML) framework for variable branching in MIP. Our method observes the decisions made by Strong Branching (SB), a time-consuming strategy that produces small search trees, collecting features that characterize the candidate branching variables at each node of the tree. Based on the collected data, we learn an easy-to-evaluate surrogate function that mimics the SB strategy, by means of solving a learning-to-rank problem, common in ML. The learned ranking function is then used for branching. The learning is instance-specific, and is performed on-the-fly while executing a branch-and-bound search to solve the instance. Experiments on benchmark instances indicate that our method produces significantly smaller search trees than existing heuristics, and is competitive with a state-of-the-art commercial solver.",
"title": ""
}
] |
scidocsrr
|
2a927ff647178e776b0914fd7738d341
|
Collaborative Departure Queue Management: An Example of Airport Collaborative Decision Making in the United States
|
[
{
"docid": "4e2bfd87acf1287f36694634a6111b3f",
"text": "This paper presents a model for managing departure aircraft at the spot or gate on the airport surface. The model is applied over two time frames: long term (one hour in future) for collaborative decision making, and short term (immediate) for decisions regarding the release of aircraft. The purpose of the model is to provide the controller a schedule of spot or gate release times optimized for runway utilization. This model was tested in nominal and heavy surface traffic scenarios in a simulated environment, and results indicate average throughput improvement of 10% in high traffic scenarios even with up to two minutes of uncertainty in spot arrival times.",
"title": ""
}
] |
[
{
"docid": "614174e5e1dffe9824d7ef8fae6fb499",
"text": "This paper starts with presenting a fundamental principle based on which the celebrated orthogonal frequency division multiplexing (OFDM) waveform is constructed. It then extends the same principle to construct the newly introduced generalized frequency division multiplexing (GFDM) signals. This novel derivation sheds light on some interesting properties of GFDM. In particular, our derivation seamlessly leads to an implementation of GFDM transmitter which has significantly lower complexity than what has been reported so far. Our derivation also facilitates a trivial understanding of how GFDM (similar to OFDM) can be applied in MIMO channels.",
"title": ""
},
{
"docid": "31e3fddcaeb7e4984ba140cb30ff49bf",
"text": "We show that a maximum-weight triangle in an undirected graph with n vertices and real weights assigned to vertices can be found in time O(nω + n2+o(1)), where ω is the exponent of the fastest matrix multiplication algorithm. By the currently best bound on ω, the running time of our algorithm is O(n2.376). Our algorithm substantially improves the previous time-bounds for this problem, and its asymptotic time complexity matches that of the fastest known algorithm for finding any triangle (not necessarily a maximum-weight one) in a graph. We can extend our algorithm to improve the upper bounds on finding a maximum-weight triangle in a sparse graph and on finding a maximum-weight subgraph isomorphic to a fixed graph. We can find a maximum-weight triangle in a vertex-weighted graph with m edges in asymptotic time required by the fastest algorithm for finding any triangle in a graph with m edges, i.e., in time O(m1.41). Our algorithms for a maximum-weight fixed subgraph (in particular any clique of constant size) are asymptotically as fast as the fastest known algorithms for a fixed subgraph.",
"title": ""
},
{
"docid": "e9e7cb42ed686ace9e9785fafd3c72f8",
"text": "We present a fully automated multimodal medical image matching technique. Our method extends the concepts used in the computer vision SIFT technique for extracting and matching distinctive scale invariant features in 2D scalar images to scalar images of arbitrary dimensionality. This extension involves using hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. These features were successfully applied to determine accurate feature point correspondence between pairs of medical images (3D) and dynamic volumetric data (3D+time).",
"title": ""
},
{
"docid": "4a164ec21fb69e7db5c90467c6f6af17",
"text": "Recent technologies have made it cost-effective to collect diverse types of genome-wide data. Computational methods are needed to combine these data to create a comprehensive view of a given disease or a biological process. Similarity network fusion (SNF) solves this problem by constructing networks of samples (e.g., patients) for each available data type and then efficiently fusing these into one network that represents the full spectrum of underlying data. For example, to create a comprehensive view of a disease given a cohort of patients, SNF computes and fuses patient similarity networks obtained from each of their data types separately, taking advantage of the complementarity in the data. We used SNF to combine mRNA expression, DNA methylation and microRNA (miRNA) expression data for five cancer data sets. SNF substantially outperforms single data type analysis and established integrative approaches when identifying cancer subtypes and is effective for predicting survival.",
"title": ""
},
{
"docid": "ce9b3c56208fbfb555be55acbf9f142e",
"text": "Opinion mining and sentiment analysis is rapidly growing area. There are numerous e-commerce sites available on internet which provides options to users to give feedback about specific product. These feedbacks are very much helpful to both the individuals, who are willing to buy that product and the organizations. An accurate method for predicting sentiments could enable us, to extract opinions from the internet and predict customer’s preferences. There are various algorithms available for opinion mining. Before applying any algorithm for polarity detection, pre-processing on feedback is carried out. From these pre-processed reviews opinion words and object on which opinion is generated are extracted and any opinion mining technique is applied to find the polarity of the review. Opinion mining has three levels of granularities: Document level, Sentence level and Aspect level. In this paper various algorithms for sentiment analysis are studied and challenges and applications appear in this field are discussed.",
"title": ""
},
{
"docid": "dbb7520f2f88005b70e0793c74b7b296",
"text": "Spoken language understanding and dialog management have emerged as key technologies in interacting with personal digital assistants (PDAs). The coverage, complexity, and the scale of PDAs are much larger than previous conversational understanding systems. As such, new problems arise. In this paper, we provide an overview of the language understanding and dialog management capabilities of PDAs, focusing particularly on Cortana, Microsoft's PDA. We explain the system architecture for language understanding and dialog management for our PDA, indicate how it differs with prior state-of-the-art systems, and describe key components. We also report a set of experiments detailing system performance on a variety of scenarios and tasks. We describe how the quality of user experiences are measured end-to-end and also discuss open issues.",
"title": ""
},
{
"docid": "72cc9333577fb255c97f137c5d19fd54",
"text": "The purpose of this study was to provide insight on attitudes towards Facebook advertising. In order to figure out the attitudes towards Facebook advertising, a snowball survey was executed among Facebook users by spreading a link to the survey. This study was quantitative study but the results of the study were interpreted in qualitative way. This research was executed with the help of factor analysis and cluster analysis, after which Chisquare test was used. This research expected that the result of the survey would lead in to two different groups with negative and positive attitudes. Factor analysis was used to find relations between variables that the survey data generated. The factor analysis resulted in 12 factors that were put in a cluster analysis to find different kinds of groups. Surprisingly the cluster analysis enabled the finding of three groups with different interests and different attitudes towards Facebook advertising. These clusters were analyzed and compared. One group was clearly negative, tending to block and avoid advertisements. Second group was with more neutral attitude towards advertising, and more carefree internet using. They did not have blocking software in use and they like to participate in activities more often. The third group had positive attitude towards advertising. The result of this study can be used to help companies better plan their Facebook advertising according to groups. It also reminds about the complexity of people and their attitudes; not everything suits everybody.",
"title": ""
},
{
"docid": "bf5f1cdcc71a76f33ad516aa165ffc41",
"text": "The content protection of digital medical images is getting more importance, especially with the advance of computerized systems and communication networks which allows providing high quality images, sending and receiving such data in a realtime manner. Medical concernslead healthcare organizations to encrypt every patient data, such as images before transferring the data over computer networks. Therefore, designing and developing qualified encryption algorithms is quite important for contemporary medicine.Medical image encryption algorithmstry to convert a digital image to another image data format which would bedefinitely hard to recognize. In this paper, we technically review image encryption methods for medical images. We do hope the present work can highlight the most recent contributions in the research area, and provide wellorganized information to the medical image community.",
"title": ""
},
{
"docid": "45be2fbf427a3ea954a61cfd5150db90",
"text": "Linguistic style conveys the social context in which communication occurs and defines particular ways of using language to engage with the audiences to which the text is accessible. In this work, we are interested in the task of stylistic transfer in natural language generation (NLG) systems, which could have applications in the dissemination of knowledge across styles, automatic summarization and author obfuscation. The main challenges in this task involve the lack of parallel training data and the difficulty in using stylistic features to control generation. To address these challenges, we plan to investigate neural network approaches to NLG to automatically learn and incorporate stylistic features in the process of language generation. We identify several evaluation criteria, and propose manual and automatic evaluation approaches.",
"title": ""
},
{
"docid": "6c8a864355c06fa42bad9f81100f627b",
"text": "There is rich knowledge encoded in online web data. For example, punctuation and entity tags in Wikipedia data define some word boundaries in a sentence. In this paper we adopt partial-label learning with conditional random fields to make use of this valuable knowledge for semi-supervised Chinese word segmentation. The basic idea of partial-label learning is to optimize a cost function that marginalizes the probability mass in the constrained space that encodes this knowledge. By integrating some domain adaptation techniques, such as EasyAdapt, our result reaches an F-measure of 95.98% on the CTB-6 corpus, a significant improvement from both the supervised baseline and a previous proposed approach, namely constrained decode.",
"title": ""
},
{
"docid": "24c62c2660ece8c0c724f745cb050964",
"text": "Face detection is a classical problem in computer vision. It is still a difficult task due to many nuisances that naturally occur in the wild. In this paper, we propose a multi-scale fully convolutional network for face detection. To reduce computation, the intermediate convolutional feature maps (conv) are shared by every scale model. We up-sample and down-sample the final conv map to approximate K levels of a feature pyramid, leading to a wide range of face scales that can be detected. At each feature pyramid level, a FCN is trained end-to-end to deal with faces in a small range of scale change. Because of the up-sampling, our method can detect very small faces (10×10 pixels). We test our MS-FCN detector on four public face detection datasets, including FDDB, WIDER FACE, AFW and PASCAL FACE. Extensive experiments show that it outperforms state-of-the-art methods. Also, MS-FCN runs at 23 FPS on a GPU for images of size 640×480 with no assumption on the minimum detectable face size.",
"title": ""
},
{
"docid": "d56ff4b194c123b19a335e00b38ea761",
"text": "As the automobile industry evolves, a number of in-vehicle communication protocols are developed for different in-vehicle applications. With the emerging new applications towards Internet of Things (IoT), a more integral solution is needed to enable the pervasiveness of intra- and inter-vehicle communications. In this survey, we first introduce different classifications of automobile applications with focus on their bandwidth and latency. Then we survey different in-vehicle communication bus protocols including both legacy protocols and emerging Ethernet. In addition, we highlight our contribution in the field to employ power line as the in-vehicle communication medium. We believe power line communication will play an important part in future automobile which can potentially reduce the amount of wiring, simplify design and reduce cost. Based on these technologies, we also introduce some promising applications in future automobile enabled by the development of in-vehicle network. Finally, We will share our view on how the in-vehicle network can be merged into the future IoT.",
"title": ""
},
{
"docid": "ab0b8cea87678dd7b5ea5057fbdb0ac1",
"text": "Data collection is a crucial operation in wireless sensor networks. The design of data collection schemes is challenging due to the limited energy supply and the hot spot problem. Leveraging empirical observations that sensory data possess strong spatiotemporal compressibility, this paper proposes a novel compressive data collection scheme for wireless sensor networks. We adopt a power-law decaying data model verified by real data sets and then propose a random projection-based estimation algorithm for this data model. Our scheme requires fewer compressed measurements, thus greatly reduces the energy consumption. It allows simple routing strategy without much computation and control overheads, which leads to strong robustness in practical applications. Analytically, we prove that it achieves the optimal estimation error bound. Evaluations on real data sets (from the GreenOrbs, IntelLab and NBDC-CTD projects) show that compared with existing approaches, this new scheme prolongs the network lifetime by 1.5X to 2X for estimation error 5-20 percent.",
"title": ""
},
{
"docid": "80ce6c8c9fc4bf0382c5f01d1dace337",
"text": "Customer loyalty is viewed as the strength of the relationship between an individual's relative attitude and repeat patronage. The relationship is seen as mediated by social norms and situational factors. Cognitive, affective, and conative antecedents of relative attitude are identified as contributing to loyalty, along with motivational, perceptual, and behavioral consequences. Implications for research and for the management of loyalty are derived.",
"title": ""
},
{
"docid": "b475ddb8c3ff32dfea5f51d054680bc3",
"text": "An increasing price and demand for natural gas has made it possible to explore remote gas fields. Traditional offshore production platforms for natural gas have been exporting the partially processed natural gas to shore, where it is further processed to permit consumption by end-users. Such an approach is possible where the gas field is located within a reasonable distance from shore or from an existing gas pipeline network. However, much of the world’s gas reserves are found in remote offshore fields where transport via a pipeline is not feasible or is uneconomic to install and therefore, to date, has not been possible to explore. The development of floating production platforms and, on the receiving end, regasification platforms, have increased the possibilities to explore these fields and transport the liquefied gas in a more efficient form, i.e. liquefied natural gas (LNG), to the end user who in turn can readily import the gas. Floating production platforms and regasification platforms, collectively referred to as FLNG, imply a blend of technology from land-based LNG industry, offshore oil and gas industry and marine transport technology. Regulations and rules based on experience from these applications could become too conservative or not conservative enough when applied to a FLNG unit. Alignment with rules for conventional LNG carriers would be an advantage since this would increase the transparency and possibility for standardization in the building of floating LNG production vessels. The objective of this study is to identify the risks relevant to FLNG. The risks are compared to conventional LNG carriers and whether or not regulatory alignment possibilities exist. To identify the risks, a risk analysis was performed based on the principles of formal safety assessment methodology. To propose regulatory alignment possibilities, the risks found were also evaluated against the existing rules and regulations of Det Norske Veritas. The conclusion of the study is that the largest risk-contributing factor on an FLNG is the presence of processing, liquefaction or regasification equipment and for an LNG carrier it is collision, grounding and contact accidents. Experience from oil FPSOs could be used in the design of LNG FPSOs, and attention needs to be drawn to the additional requirements due to processing and storage of cryogenic liquid on board. FSRUs may follow either an approach for offshore rules or, if intended to follow a regular docking scheme, follow an approach for ship rules with additional issues addressed in classification notes.",
"title": ""
},
{
"docid": "be3bf1e95312cc0ce115e3aaac2ecc96",
"text": "This paper contributes a first study into how different human users deliver simultaneous control and feedback signals during human-robot interaction. As part of this work, we formalize and present a general interactive learning framework for online cooperation between humans and reinforcement learning agents. In many humanmachine interaction settings, there is a growing gap between the degrees-of-freedom of complex semi-autonomous systems and the number of human control channels. Simple human control and feedback mechanisms are required to close this gap and allow for better collaboration between humans and machines on complex tasks. To better inform the design of concurrent control and feedback interfaces, we present experimental results from a human-robot collaborative domain wherein the human must simultaneously deliver both control and feedback signals to interactively train an actor-critic reinforcement learning robot. We compare three experimental conditions: 1) human delivered control signals, 2) reward-shaping feedback signals, and 3) simultaneous control and feedback. Our results suggest that subjects provide less feedback when simultaneously delivering feedback and control signals and that control signal quality is not significantly diminished. Our data suggest that subjects may also modify when and how they provide feedback. Through algorithmic development and tuning informed by this study, we expect semi-autonomous actions of robotic agents can be better shaped by human feedback, allowing for seamless collaboration and improved performance in difficult interactive domains. University of Alberta, Dep. of Computing Science, Edmonton, Canada University of Alberta, Deps. of Medicine and Computing Science, Edmonton, Alberta, Canada. Correspondence to: Kory Mathewson <korym@ualberta.ca>. Under review for the 34 th International Conference on Machine Learning, Sydney, Australia, 2017. JMLR: W&CP. Copyright 2017 by the authors. Figure 1. Experimental configuration. One of the study participants with the Myo band on their right arm providing a control signal, while simultaneously providing feedback signals with their left hand. The Aldebaran Nao robot simulation is visible on the screen alongside experimental logging.",
"title": ""
},
{
"docid": "79b8588f7c9b6dc87d90ddbd2e75a7d5",
"text": "BACKGROUND\nDespite the progress in reducing malaria infections and related deaths, the disease remains a major global public health problem. The problem is among the top five leading causes of outpatient visits in Dembia district of the northwest Ethiopia. Therefore, this study aimed to assess the determinants of malaria infections in the district.\n\n\nMETHODS\nAn institution-based case-control study was conducted in Dembia district from October to November 2016. Out of the ten health centers in the district, four were randomly selected for the study in which 370 participants (185 cases and 185 controls) were enrolled. Data were collected using a pretested structured questionnaire. Factors associated with malaria infections were determined using logistic regression analysis. Odds ratio with 95% CI was used as a measure of association, and variables with a p-value of ≤0.05 were considered as statistically significant.\n\n\nRESULTS\nThe median age of all participants was 26 years, while that of cases and controls was 22 and 30 with a range of 1 to 80 and 2 to 71, respectively. In the multivariable logistic regression, over 15 years of age adjusted odds ratio(AOR) and confidence interval (CI) of (AOR = 18; 95% CI: 2.1, 161.5), being male (AOR = 2.2; 95% CI: 1.2, 3.9), outdoor activities at night (AOR = 5.7; 95% CI: 2.5, 12.7), bed net sharing (AOR = 3.9; 95% CI: 2.0, 7.7), and proximity to stagnant water sources (AOR = 2.7; 95% CI: 1.3, 5.4) were independent predictors.\n\n\nCONCLUSION\nBeing in over 15 years of age group, male gender, night time activity, bed net sharing and proximity to stagnant water sources were determinant factors of malaria infection in Dembia district. Additional interventions and strategies which focus on men, outdoor work at night, household net utilization, and nearby stagnant water sources are essential to reduce malaria infections in the area.",
"title": ""
},
{
"docid": "ea308cdcedd9261fb9871cf84899b63f",
"text": "Purpose To identify and discuss the issues and success factors surrounding biometrics, especially in the context of user authentication and controls in the banking sector, using a case study. Design/methodology/approach The literature survey and analysis of the security models of the present information systems and biometric technologies in the banking sector provide the theoretical and practical background for this work. The impact of adopting biometric solutions in banks was analysed by considering the various issues and challenges from technological, managerial, social and ethical angles. These explorations led to identifying the success factors that serve as possible guidelines for a viable implementation of a biometric enabled authentication system in banking organisations, in particular for a major bank in New Zealand. Findings As the level of security breaches and transaction frauds increase day by day, the need for highly secure identification and personal verification information systems is becoming extremely important especially in the banking and finance sector. Biometric technology appeals to many banking organisations as a near perfect solution to such security threats. Though biometric technology has gained traction in areas like healthcare and criminology, its application in banking security is still in its infancy. Due to the close association of biometrics to human, physical and behavioural aspects, such technologies pose a multitude of social, ethical and managerial challenges. The key success factors proposed through the case study served as a guideline for a biometric enabled security project called Bio Sec, which is envisaged in a large banking organisation in New Zealand. This pilot study reveals that more than coping with the technology issues of gelling biometrics into the existing information systems, formulating a viable security plan that addresses user privacy fears, human tolerance levels, organisational change and legal issues is of prime importance. Originality/value Though biometric systems are successfully adopted in areas such as immigration control and criminology, there is a paucity of their implementation and research pertaining to banking environments. Not all banks venture into biometric solutions to enhance their security systems due to their socio technological issues. This paper fulfils the need for a guideline to identify the various issues and success factors for a viable biometric implementation in a bank’s access control system. This work is only a starting point for academics to conduct more research in the application of biometrics in the various facets of banking businesses.",
"title": ""
},
{
"docid": "3cd9aeb83ba379763c42f0c20a53851c",
"text": "One of the main problems in many big and crowded cities is finding parking spaces for vehicles. With IoT technology and mobile applications, in this paper, we propose a design and development of a real smart parking system that can provide more than just information about vacant spaces but also help user to locate the space where the vehicle can be parked in order to reduce traffics in the parking area. Moreover, we use computer vision to detect vehicle plate number in order to monitor the vehicles in the parking area for enhancing security and also to help user find his/her car when he/she forgets where the car is parked. In our system, we also design the payment process using mobile payment in order to reduce time and remove bottleneck of the payment process at the entry/exit gate of the parking area.",
"title": ""
},
{
"docid": "b02ebfa85f0948295b401152c0190d74",
"text": "SAGE has had a remarkable impact at Microsoft.",
"title": ""
}
] |
scidocsrr
|
adc94cd673f25c2caf8376617399ffe4
|
HyperQA: Hyperbolic Embeddings for Fast and Efficient Ranking of Question Answer Pairs
|
[
{
"docid": "a52d0679863b148b4fd6e112cd8b5596",
"text": "Representation learning has become an invaluable approach for learning from symbolic data such as text and graphs. However, while complex symbolic datasets often exhibit a latent hierarchical structure, state-of-the-art methods typically learn embeddings in Euclidean vector spaces, which do not account for this property. For this purpose, we introduce a new approach for learning hierarchical representations of symbolic data by embedding them into hyperbolic space – or more precisely into an n-dimensional Poincaré ball. Due to the underlying hyperbolic geometry, this allows us to learn parsimonious representations of symbolic data by simultaneously capturing hierarchy and similarity. We introduce an efficient algorithm to learn the embeddings based on Riemannian optimization and show experimentally that Poincaré embeddings outperform Euclidean embeddings significantly on data with latent hierarchies, both in terms of representation capacity and in terms of generalization ability.",
"title": ""
},
{
"docid": "1a6ece40fa87e787f218902eba9b89f7",
"text": "Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3\\% absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers.",
"title": ""
},
{
"docid": "b4ab51818d868b2f9796540c71a7bd17",
"text": "We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.",
"title": ""
},
{
"docid": "87284302ea96b36c769a4d2a05295a32",
"text": "Retrieving similar questions is very important in community-based question answering. A major challenge is the lexical gap in sentence matching. In this paper, we propose a convolutional neural tensor network architecture to encode the sentences in semantic space and model their interactions with a tensor layer. Our model integrates sentence modeling and semantic matching into a single model, which can not only capture the useful information with convolutional and pooling layers, but also learn the matching metrics between the question and its answer. Besides, our model is a general architecture, with no need for the other knowledge such as lexical or syntactic analysis. The experimental results shows that our method outperforms the other methods on two matching tasks.",
"title": ""
},
{
"docid": "340aa5616ef01e8d8a965f2efb510fe9",
"text": "The paper presents a new unsupervised dimensionality reduction technique, called parametric t-SNE, that learns a parametric mapping between the high-dimensional data space and the low-dimensional latent space. Parametric t-SNE learns the parametric mapping in such a way that the local structure of the data is preserved as well as possible in the latent space. We evaluate the performance of parametric t-SNE in experiments on three datasets, in which we compare it to the performance of two other unsupervised parametric dimensionality reduction techniques. The results of experiments illustrate the strong performance of parametric t-SNE, in particular, in learning settings in which the dimensionality of the latent space is relatively low.",
"title": ""
}
] |
[
{
"docid": "00108ade18d287efa5a06ffe8a3fda59",
"text": "Multi- and many-core processors are becoming increasingly popular in embedded systems. Many of these processors now feature hardware virtualization capabilities, as found on the ARM Cortex A15 and x86 architectures with Intel VT-x or AMD-V support. Hardware virtualization provides a way to partition physical resources, including processor cores, memory, and I/O devices, among guest virtual machines (VMs). Each VM is then able to host tasks of a specific criticality level, as part of a mixed-criticality system with different timing and safety requirements. However, traditional virtual machine systems are inappropriate for mixed-criticality computing. They use hypervisors to schedule separate VMs on physical processor cores. The costs of trapping into hypervisors to multiplex and manage machine physical resources on behalf of separate guests are too expensive for many time-critical tasks. Additionally, traditional hypervisors have memory footprints that are often too large for many embedded computing systems. In this article, we discuss the design of the Quest-V separation kernel, which partitions services of different criticality levels across separate VMs, or sandboxes. Each sandbox encapsulates a subset of machine physical resources that it manages without requiring intervention from a hypervisor. In Quest-V, a hypervisor is only needed to bootstrap the system, recover from certain faults, and establish communication channels between sandboxes. This not only reduces the memory footprint of the most privileged protection domain but also removes it from the control path during normal system operation, thereby heightening security.",
"title": ""
},
{
"docid": "b062222917050f13c3a17e8de53a6abe",
"text": "Exposed to traditional language learning strategies, students will gradually lose interest in and motivation to not only learn English, but also any language or culture. Hence, researchers are seeking technology-based learning strategies, such as digital game-mediated language learning, to motivate students and improve learning performance. This paper synthesizes the findings of empirical studies focused on the effectiveness of digital games in language education published within the last five years. Nine qualitative, quantitative, and mixed-method studies are collected and analyzed in this paper. The review found that recent empirical research was conducted primarily to examine the effectiveness by measuring language learning outcomes, motivation, and interactions. Weak proficiency was found in vocabulary retention, but strong proficiency was present in communicative skills such as speaking. Furthermore, in general, students reported that they are motivated to engage in language learning when digital games are involved; however, the motivation is also observed to be weak due to the design of the game and/or individual differences. The most effective method used to stimulate interaction language learning process seems to be digital games, as empirical studies demonstrate that it effectively promotes language education. However, significant work is still required to provide clear answers with respect to innovative and effective learning practice.",
"title": ""
},
{
"docid": "dd911eff60469b32330c5627c288f19f",
"text": "Routing Algorithms are driving the growth of the data transmission in wireless sensor networks. Contextually, many algorithms considered the data gathering and data aggregation. This paper uses the scenario of clustering and its impact over the SPIN protocol and also finds out the effect over the energy consumption in SPIN after uses of clustering. The proposed scheme is implemented using TCL/C++ programming language and evaluated using Ns2.34 simulator and compare with LEACH. Simulation shows proposed protocol exhibits significant performance gains over the LEACH for lifetime of network and guaranteed data transmission.",
"title": ""
},
{
"docid": "1a5b63ae29de488a64518abcde04fb2f",
"text": "A thorough review of available literature was conducted to inform of advancements in mobile LIDAR technology, techniques, and current and emerging applications in transportation. The literature review touches briefly on the basics of LIDAR technology followed by a more in depth description of current mobile LIDAR trends, including system components and software. An overview of existing quality control procedures used to verify the accuracy of the collected data is presented. A collection of case studies provides a clear description of the advantages of mobile LIDAR, including an increase in safety and efficiency. The final sections of the review identify current challenges the industry is facing, the guidelines that currently exist, and what else is needed to streamline the adoption of mobile LIDAR by transportation agencies. Unfortunately, many of these guidelines do not cover the specific challenges and concerns of mobile LIDAR use as many have been developed for airborne LIDAR acquisition and processing. From this review, there is a lot of discussion on “what” is being done in practice, but not a lot on “how” and “how well” it is being done. A willingness to share information going forward will be important for the successful use of mobile LIDAR.",
"title": ""
},
{
"docid": "574c07709b65749bc49dd35d1393be80",
"text": "Optical coherence tomography (OCT) is used for non-invasive diagnosis of diabetic macular edema assessing the retinal layers. In this paper, we propose a new fully convolutional deep architecture, termed ReLayNet, for end-to-end segmentation of retinal layers and fluid masses in eye OCT scans. ReLayNet uses a contracting path of convolutional blocks (encoders) to learn a hierarchy of contextual features, followed by an expansive path of convolutional blocks (decoders) for semantic segmentation. ReLayNet is trained to optimize a joint loss function comprising of weighted logistic regression and Dice overlap loss. The framework is validated on a publicly available benchmark dataset with comparisons against five state-of-the-art segmentation methods including two deep learning based approaches to substantiate its effectiveness.",
"title": ""
},
{
"docid": "eeafcab155da5229bf26ddc350e37951",
"text": "Interferons (IFNs) are the hallmark of the vertebrate antiviral system. Two of the three IFN families identified in higher vertebrates are now known to be important for antiviral defence in teleost fish. Based on the cysteine patterns, the fish type I IFN family can be divided into two subfamilies, which possibly interact with distinct receptors for signalling. The fish type II IFN family consists of two members, IFN-γ with similar functions to mammalian IFN-γ and a teleost specific IFN-γ related (IFN-γrel) molecule whose functions are not fully elucidated. These two type II IFNs also appear to bind to distinct receptors to exert their functions. It has become clear that fish IFN responses are mediated by the host pattern recognition receptors and an array of transcription factors including the IFN regulatory factors, the Jak/Stat proteins and the suppressor of cytokine signalling (SOCS) molecules.",
"title": ""
},
{
"docid": "48623054af5217d48b05aed57a67ae66",
"text": "This paper proposes an ontology-based approach to analyzing and assessing the security posture for software products. It provides measurements of trust for a software product based on its security requirements and evidence of assurance, which are retrieved from an ontology built for vulnerability management. Our approach differentiates with the previous work in the following aspects: (1) It is a holistic approach emphasizing that the system assurance cannot be determined or explained by its component assurance alone. Instead, the software system as a whole determines its assurance level. (2) Our approach is based on widely accepted standards such as CVSS, CVE, CWE, CPE, and CAPEC. Our ontology integrated these standards seamlessly thus provides a solid foundation for security assessment. (3) Automated tools have been built to support our approach, delivering the environmental scores for software products.",
"title": ""
},
{
"docid": "3bbbdf4d6572e548106fc1d24b50cbc6",
"text": "Predicting the a↵ective valence of unknown multiword expressions is key for concept-level sentiment analysis. A↵ectiveSpace 2 is a vector space model, built by means of random projection, that allows for reasoning by analogy on natural language concepts. By reducing the dimensionality of a↵ective common-sense knowledge, the model allows semantic features associated with concepts to be generalized and, hence, allows concepts to be intuitively clustered according to their semantic and a↵ective relatedness. Such an a↵ective intuition (so called because it does not rely on explicit features, but rather on implicit analogies) enables the inference of emotions and polarity conveyed by multi-word expressions, thus achieving e cient concept-level sentiment analysis.",
"title": ""
},
{
"docid": "e96fddd8058e3dc98eb9f73aa387c9f9",
"text": "There is often the need to perform sentiment classification in a particular domain where no labeled document is available. Although we could make use of a general-purpose off-the-shelf sentiment classifier or a pre-built one for a different domain, the effectiveness would be inferior. In this paper, we explore the possibility of building domain-specific sentiment classifiers with unlabeled documents only. Our investigation indicates that in the word embeddings learned from the unlabeled corpus of a given domain, the distributed word representations (vectors) for opposite sentiments form distinct clusters, though those clusters are not transferable across domains. Exploiting such a clustering structure, we are able to utilize machine learning algorithms to induce a quality domain-specific sentiment lexicon from just a few typical sentiment words (“seeds”). An important finding is that simple linear model based supervised learning algorithms (such as linear SVM) can actually work better than more sophisticated semi-supervised/transductive learning algorithms which represent the state-of-the-art technique for sentiment lexicon induction. The induced lexicon could be applied directly in a lexicon-based method for sentiment classification, but a higher performance could be achieved through a two-phase bootstrapping method which uses the induced lexicon to assign positive/negative sentiment scores to unlabeled documents first, a nd t hen u ses those documents found to have clear sentiment signals as pseudo-labeled examples to train a document sentiment classifier v ia supervised learning algorithms (such as LSTM). On several benchmark datasets for document sentiment classification, our end-to-end pipelined approach which is overall unsupervised (except for a tiny set of seed words) outperforms existing unsupervised approaches and achieves an accuracy comparable to that of fully supervised approaches.",
"title": ""
},
{
"docid": "ad7b715f434f3a500be8d52a047b9be1",
"text": "This paper presents a quantitative analysis of data collected by an online testing system for SQL \"select\" queries. The data was collected from almost one thousand students, over eight years. We examine which types of queries our students found harder to write. The seven types of SQL queries studied are: simple queries on one table; grouping, both with and without \"having\"; natural joins; simple and correlated sub-queries; and self-joins. The order of queries in the preceding sentence reflects the order of student difficulty we see in our data.",
"title": ""
},
{
"docid": "ab3e279524995fbd2d362fa726c69065",
"text": "In this work, we present an application of domain randomization and generative adversarial networks (GAN) to train a near real-time object detector for industrial electric parts, entirely in a simulated environment. Large scale availability of labelled real world data is typically rare and difficult to obtain in many industrial settings. As such here, only a few hundred of unlabelled real images are used to train a Cyclic-GAN network, in combination with various degree of domain randomization procedures. We demonstrate that this enables robust translation of synthetic images to the real world domain. We show that a combination of the original synthetic (simulation) and GAN translated images, when used for training a Mask-RCNN object detection network achieves greater than 0.95 mean average precision in detecting and classifying a collection of industrial electric parts. We evaluate the performance across different combinations of training data.",
"title": ""
},
{
"docid": "8bf63451cf6b83f3da4d4378de7bfd7f",
"text": "This paper presents a high-efficiency and smoothtransition buck-boost (BB) converter to extend the battery life of portable devices. Owing to the usage of four switches, the BB control topology needs to minimize the switching and conduction losses at the same time. Therefore, over a wide input voltage range, the proposed BB converter consumes minimum switching loss like the basic operation of buck or boost converter. Besides, the conduction loss is reduced by means of the reduction of the inductor current level. Especially, the proposed BB converter offers good line/load regulation and thus provides a smooth and stable output voltage when the battery voltage decreases. Simulation results show that the output voltage drops is very small during the whole battery life time and the output transition is very smooth during the mode transition by the proposed BB control scheme.",
"title": ""
},
{
"docid": "ea200dc100d77d8c156743bede4a965b",
"text": "We present a contextual spoken language understanding (contextual SLU) method using Recurrent Neural Networks (RNNs). Previous work has shown that context information, specifically the previously estimated domain assignment, is helpful for domain identification. We further show that other context information such as the previously estimated intent and slot labels are useful for both intent classification and slot filling tasks in SLU. We propose a step-n-gram model to extract sentence-level features from RNNs, which extract sequential features. The step-n-gram model is used together with a stack of Convolution Networks for training domain/intent classification. Our method therefore exploits possible correlations among domain/intent classification and slot filling and incorporates context information from the past predictions of domain/intent and slots. The proposed method obtains new state-of-the-art results on ATIS and improved performances over baseline techniques such as conditional random fields (CRFs) on a large context-sensitive SLU dataset.",
"title": ""
},
{
"docid": "6059b4bbf5d269d0a5f1f596b48c1acb",
"text": "The mathematical concept of document resemblance captures well the informal notion of syntactic similarity. The resemblance can be estimated using a fixed size “sketch” for each document. For a large collection of documents (say hundreds of millions) the size of this sketch is of the order of a few hundred bytes per document. However, for efficient large scale web indexing it is not necessary to determine the actual resemblance value: it suffices to determine whether newly encountered documents are duplicates or near-duplicates of documents already indexed. In other words, it suffices to determine whether the resemblance is above a certain threshold. In this talk we show how this determination can be made using a ”sample” of less than 50 bytes per document. The basic approach for computing resemblance has two aspects: first, resemblance is expressed as a set (of strings) intersection problem, and second, the relative size of intersections is evaluated by a process of random sampling that can be done independently for each document. The process of estimating the relative size of intersection of sets and the threshold test discussed above can be applied to arbitrary sets, and thus might be of independent interest. The algorithm for filtering near-duplicate documents discussed here has been successfully implemented and has been used for the last three years in the context of the AltaVista search engine.",
"title": ""
},
{
"docid": "abdf1edfb2b93b3991d04d5f6d3d63d3",
"text": "With the rapid growing of internet and networks applications, data security becomes more important than ever before. Encryption algorithms play a crucial role in information security systems. In this paper, we have a study of the two popular encryption algorithms: DES and Blowfish. We overviewed the base functions and analyzed the security for both algorithms. We also evaluated performance in execution speed based on different memory sizes and compared them. The experimental results show the relationship between function run speed and memory size.",
"title": ""
},
{
"docid": "3bda091d69af44f28cb3bd5893a5b8ef",
"text": "The method described assumes that a word which cannot be found in a dictionary has at most one error, which might be a wrong, missing or extra letter or a single transposition. The unidentified input word is compared to the dictionary again, testing each time to see if the words match—assuming one of these errors occurred. During a test run on garbled text, correct identifications were made for over 95 percent of these error types.",
"title": ""
},
{
"docid": "c240da3cde126606771de3e6b3432962",
"text": "Oscillations in the alpha and beta bands can display either an event-related blocking response or an event-related amplitude enhancement. The former is named event-related desynchronization (ERD) and the latter event-related synchronization (ERS). Examples of ERS are localized alpha enhancements in the awake state as well as sigma spindles in sleep and alpha or beta bursts in the comatose state. It was found that alpha band activity can be enhanced over the visual region during a motor task, or during a visual task over the sensorimotor region. This means ERD and ERS can be observed at nearly the same time; both form a spatiotemporal pattern, in which the localization of ERD characterizes cortical areas involved in task-relevant processing, and ERS marks cortical areas at rest or in an idling state.",
"title": ""
},
{
"docid": "34c41c33ce2cd7642cf29d8bfcab8a3f",
"text": "I2Head database has been created with the aim to become an optimal reference for low cost gaze estimation. It exhibits the following outstanding characteristics: it takes into account key aspects of low resolution eye tracking technology; it combines images of users gazing at different grids of points from alternative positions with registers of user’s head position and it provides calibration information of the camera and a simple 3D head model for each user. Hardware used to build the database includes a 6D magnetic sensor and a webcam. A careful calibration method between the sensor and the camera has been developed to guarantee the accuracy of the data. Different sessions have been recorded for each user including not only static head scenarios but also controlled displacements and even free head movements. The database is an outstanding framework to test both gaze estimation algorithms and head pose estimation methods.",
"title": ""
},
{
"docid": "6cf4315ecce8a06d9354ca2f2684113c",
"text": "BACKGROUND\nNutritional supplementation may be used to treat muscle loss with aging (sarcopenia). However, if physical activity does not increase, the elderly tend to compensate for the increased energy delivered by the supplements with reduced food intake, which results in a calorie substitution rather than supplementation. Thus, an effective supplement should stimulate muscle anabolism more efficiently than food or common protein supplements. We have shown that balanced amino acids stimulate muscle protein anabolism in the elderly, but it is unknown whether all amino acids are necessary to achieve this effect.\n\n\nOBJECTIVE\nWe assessed whether nonessential amino acids are required in a nutritional supplement to stimulate muscle protein anabolism in the elderly.\n\n\nDESIGN\nWe compared the response of muscle protein metabolism to either 18 g essential amino acids (EAA group: n = 6, age 69 +/- 2 y; +/- SD) or 40 g balanced amino acids (18 g essential amino acids + 22 g nonessential amino acids, BAA group; n = 8, age 71 +/- 2 y) given orally in small boluses every 10 min for 3 h to healthy elderly volunteers. Muscle protein metabolism was measured in the basal state and during amino acid administration via L-[ring-(2)H(5)]phenylalanine infusion, femoral arterial and venous catheterization, and muscle biopsies.\n\n\nRESULTS\nPhenylalanine net balance (in nmol x min(-1). 100 mL leg volume(-1)) increased from the basal state (P < 0.01), with no differences between groups (BAA: from -16 +/- 5 to 16 +/- 4; EAA: from -18 +/- 5 to 14 +/- 13) because of an increase (P < 0.01) in muscle protein synthesis and no change in breakdown.\n\n\nCONCLUSION\nEssential amino acids are primarily responsible for the amino acid-induced stimulation of muscle protein anabolism in the elderly.",
"title": ""
},
{
"docid": "d51ddec1ea405d9bde56f3b3b6baefc7",
"text": "Background. Inconsistent data exist about the role of probiotics in the treatment of constipated children. The aim of this study was to investigate the effectiveness of probiotics in childhood constipation. Materials and Methods. In this placebo controlled trial, fifty-six children aged 4-12 years with constipation received randomly lactulose plus Protexin or lactulose plus placebo daily for four weeks. Stool frequency and consistency, abdominal pain, fecal incontinence, and weight gain were studied at the beginning, after the first week, and at the end of the 4th week in both groups. Results. Forty-eight patients completed the study. At the end of the fourth week, the frequency and consistency of defecation improved significantly (P = 0.042 and P = 0.049, resp.). At the end of the first week, fecal incontinence and abdominal pain improved significantly in intervention group (P = 0.030 and P = 0.017, resp.) but, at the end of the fourth week, this difference was not significant (P = 0.125 and P = 0.161, resp.). A significant weight gain was observed at the end of the 1st week in the treatment group. Conclusion. This study showed that probiotics had a positive role in increasing the frequency and improving the consistency at the end of 4th week.",
"title": ""
}
] |
scidocsrr
|
5c6e50513d395d2ed39b345149d45fbf
|
Annotating Characters in Literary Corpora: A Scheme, the CHARLES Tool, and an Annotated Novel
|
[
{
"docid": "67992d0c0b5f32726127855870988b01",
"text": "We present a method for extracting social networks from literature, namely, nineteenth-century British novels and serials. We derive the networks from dialogue interactions, and thus our method depends on the ability to determine when two characters are in conversation. Our approach involves character name chunking, quoted speech attribution and conversation detection given the set of quotes. We extract features from the social networks and examine their correlation with one another, as well as with metadata such as the novel’s setting. Our results provide evidence that the majority of novels in this time period do not fit two characterizations provided by literacy scholars. Instead, our results suggest an alternative explanation for differences in social networks.",
"title": ""
},
{
"docid": "75f895ff76e7a55d589ff30637524756",
"text": "This paper details the coreference resolution system submitted by Stanford at the CoNLL2011 shared task. Our system is a collection of deterministic coreference resolution models that incorporate lexical, syntactic, semantic, and discourse information. All these models use global document-level information by sharing mention attributes, such as gender and number, across mentions in the same cluster. We participated in both the open and closed tracks and submitted results using both predicted and gold mentions. Our system was ranked first in both tracks, with a score of 57.8 in the closed track and 58.3 in the open track.",
"title": ""
}
] |
[
{
"docid": "30b1b4df0901ab61ab7e4cfb094589d1",
"text": "Direct modulation at 56 and 50 Gb/s of 1.3-μm InGaAlAs ridge-shaped-buried heterostructure (RS-BH) asymmetric corrugation-pitch-modulation (ACPM) distributed feedback lasers is experimentally demonstrated. The fabricated lasers have a low threshold current (5.6 mA at 85°C), high temperature characteristics (71 K), high slope relaxation frequency (3.2 GHz/mA1/2 at 85°C), and wide bandwidth (22.1 GHz at 85°C). These superior properties enable the lasers to run at 56 Gb/s and 55°C and 50 Gb/s at up to 80°C for backto-back operation with clear eye openings. This is achieved by the combination of a low-leakage RS-BH and an ACPM grating. Moreover, successful transmission of 56and 50-Gb/s modulated signals over a 10-km standard single-mode fiber is achieved. These results confirm the suitability of this type of laser for use as a cost-effective light source in 400 GbE and OTU5 applications.",
"title": ""
},
{
"docid": "8e654ace264f8062caee76b0a306738c",
"text": "We present a fully fledged practical working application for a rule-based NLG system that is able to create non-trivial, human sounding narrative from structured data, in any language (e.g., English, German, Arabic and Finnish) and for any topic.",
"title": ""
},
{
"docid": "46950519803aba56a0cce475964b99d7",
"text": "The coverage problem in the field of robotics is the problem of moving a sensor or actuator over all points in a given region. Example applications of this problem are lawn mowing, spray painting, and aerial or underwater mapping. In this paper, I consider the single-robot offline version of this problem, i.e. given a map of the region to be covered, plan an efficient path for a single robot that sweeps the sensor or actuator over all points. One basic approach to this problem is to decompose the region into subregions, select a sequence of those subregions, and then generate a path that covers each subregion in turn. This paper addresses the problem of creating a good decomposition. Under certain assumptions, the cost to cover a polygonal subregion is proportional to its minimum altitude. An optimal decomposition then minimizes the sum of subregion altitudes. This paper describes an algorithm to find the minimal sum of altitudes (MSA) decomposition of a region with a polygonal boundary and polygonal holes. This algorithm creates an initial decomposition based upon multiple line sweeps and then applies dynamic programming to find the optimal decomposition. This paper describes the algorithm and reports results from an implementation. Several appendices give details and proofs regarding line sweep algorithms.",
"title": ""
},
{
"docid": "7ca66f5741b5ebe9a9f2cd15547f58dc",
"text": "A vehicle management system based on UHF band RFID technology is proposed. This system is applied for vehicle entering/leaving at road gates. The system consists of tag-on-car, reader antenna, reader controller, and the monitoring and commanding software. It could effective control the vehicles passing through road gate and record the vehicles' data. The entering time, leaving time, and tag number of each vehicle are all recorded and saved for further processing. By the benefit of UHF band long distance sensing ability, within nine meter the distance between vehicle and reader antenna, the signal can be accurately detected even the vehicle's speed at nearly 30 km/hr. The monitoring and commanding software can not only identify car owners' identities but also determine the gate to open or not. The accessories: video recording and pressure sensing components are flexible to add for enhancing the system's performance. This system has been tested in many field tests and the results shown that it is suitable for vehicle management and the related applications.",
"title": ""
},
{
"docid": "6e82e635682cf87a84463f01c01a1d33",
"text": "Finger veins have been proved to be an effective biometric for personal identification in the recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition, and so degrade the performance of finger vein identification system. To improve this problem, in this paper, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correct calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database, MMCBNU_6000, to verify the robustness of the proposed method. The proposed method shows the segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system.",
"title": ""
},
{
"docid": "756b25456494b3ece9b240ba3957f91c",
"text": "In this paper we introduce the task of fact checking, i.e. the assessment of the truthfulness of a claim. The task is commonly performed manually by journalists verifying the claims made by public figures. Furthermore, ordinary citizens need to assess the truthfulness of the increasing volume of statements they consume. Thus, developing fact checking systems is likely to be of use to various members of society. We first define the task and detail the construction of a publicly available dataset using statements fact-checked by journalists available online. Then, we discuss baseline approaches for the task and the challenges that need to be addressed. Finally, we discuss how fact checking relates to mainstream natural language processing tasks and can stimulate further research.",
"title": ""
},
{
"docid": "6650966d57965a626fd6f50afe6cd7a4",
"text": "This paper presents a generalized version of the linear threshold model for simulating multiple cascades on a network while allowing nodes to switch between them. The proposed model is shown to be a rapidly mixing Markov chain and the corresponding steady state distribution is used to estimate highly likely states of the cascades' spread in the network. Results on a variety of real world networks demonstrate the high quality of the estimated solution.",
"title": ""
},
{
"docid": "fbb71a8a7630350a7f33f8fb90b57965",
"text": "As the Web of Things (WoT) broadens real world interaction via the internet, there is an increasing need for a user centric model for managing and interacting with real world objects. We believe that online social networks can provide that capability and can enhance existing and future WoT platforms leading to a Social WoT. As both social overlays and user interface containers, online social networks (OSNs) will play a significant role in the evolution of the web of things. As user interface containers and social overlays, they can be used by end users and applications as an on-line entry point for interacting with things, both receiving updates from sensors and controlling things. Conversely, access to user identity and profile information, content and social graphs can be useful in physical social settings like cafés. In this paper we describe some of the key features of social networks used by existing social WoT systems. We follow this with a discussion of open research questions related to integration of OSNs and how OSNs may evolve to be more suitable for integration with places and things. Several ongoing projects in our lab leverage OSNs to connect places and things to online communities.",
"title": ""
},
{
"docid": "a2ab7befeec6dbe3d8334ccf7f39fe1d",
"text": "We present a method for finding the boundaries between adjacent regions in an image, where “seed” areas have already been identified in the individual regions to be segmented. This method was motivated by the problem of finding the borders of cells in microscopy images, given a labelling of the nuclei in the images. The method finds the Voronoi region of each seed on a manifold with a metric controlled by local image properties. We discuss similarities to other methods based on image-controlled metrics, such as Geodesic Active Contours, and give a fast algorithm for computing the Voronoi regions. We validate our method against hand-traced boundaries for cell images.",
"title": ""
},
{
"docid": "3ca2933b896b6ab80ba91e00869b4f50",
"text": "In recent years, the spectacular development of web technologies, lead to an enormous quantity of user generated information in online systems. This large amount of information on web platforms make them viable for use as data sources, in applications based on opinion mining and sentiment analysis. The paper proposes an algorithm for detecting sentiments on movie user reviews, based on naive Bayes classifier. We make an analysis of the opinion mining domain, techniques used in sentiment analysis and its applicability. We implemented the proposed algorithm and we tested its performance, and suggested directions of development.",
"title": ""
},
{
"docid": "80b5030cbb923f32dc791409eb184a80",
"text": "Bayesian Optimisation (BO) refers to a class of methods for global optimisation of a function f which is only accessible via point evaluations. It is typically used in settings where f is expensive to evaluate. A common use case for BO in machine learning is model selection, where it is not possible to analytically model the generalisation performance of a statistical model, and we resort to noisy and expensive training and validation procedures to choose the best model. Conventional BO methods have focused on Euclidean and categorical domains, which, in the context of model selection, only permits tuning scalar hyper-parameters of machine learning algorithms. However, with the surge of interest in deep learning, there is an increasing demand to tune neural network architectures. In this work, we develop NASBOT, a Gaussian process based BO framework for neural architecture search. To accomplish this, we develop a distance metric in the space of neural network architectures which can be computed efficiently via an optimal transport program. This distance might be of independent interest to the deep learning community as it may find applications outside of BO. We demonstrate that NASBOT outperforms other alternatives for architecture search in several cross validation based model selection tasks on multi-layer perceptrons and convolutional neural networks.",
"title": ""
},
{
"docid": "fe03dc323c15d5ac390e67f9aa0415b8",
"text": "Objects make distinctive sounds when they are hit or scratched. These sounds reveal aspects of an object's material properties, as well as the actions that produced them. In this paper, we propose the task of predicting what sound an object makes when struck as a way of studying physical interactions within a visual scene. We present an algorithm that synthesizes sound from silent videos of people hitting and scratching objects with a drumstick. This algorithm uses a recurrent neural network to predict sound features from videos and then produces a waveform from these features with an example-based synthesis procedure. We show that the sounds predicted by our model are realistic enough to fool participants in a \"real or fake\" psychophysical experiment, and that they convey significant information about material properties and physical interactions.",
"title": ""
},
{
"docid": "da48aae7960f0871c91d4c6c9f5f44bf",
"text": "It is often difficult to ground text to precise time intervals due to the inherent uncertainty arising from either missing or multiple expressions at year, month, and day time granularities. We address the problem of estimating an excerpt-time model capturing the temporal scope of a given news article excerpt as a probability distribution over chronons. For this, we propose a semi-supervised distribution propagation framework that leverages redundancy in the data to improve the quality of estimated time models. Our method generates an event graph with excerpts as nodes and models various inter-excerpt relations as edges. It then propagates empirical excerpt-time models estimated for temporally annotated excerpts, to those that are strongly related but miss annotations. In our experiments, we first generate a test query set by randomly sampling 100 Wikipedia events as queries. For each query, making use of a standard text retrieval model, we then obtain top-10 documents with an average of 150 excerpts. From these, each temporally annotated excerpt is considered as gold standard. The evaluation measures are first computed for each gold standard excerpt for a single query, by comparing the estimated model with our method to the empirical model from the original expressions. Final scores are reported by averaging over all the test queries. Experiments on the English Gigaword corpus show that our method estimates significantly better time models than several baselines taken from the literature.",
"title": ""
},
{
"docid": "fc9a1db9842daa789b10aaff8fdbc996",
"text": "Time series clustering has become an important topic, particularly for similarity search amongst long time series such as those arising in bioinformatics. Unfortunately, existing methods for time series clustering that rely on the actual time series point values can become impractical since the methods do not scale well for longer time series, and many clustering algorithms do not easily handle high dimensional data. In this paper we propose a scalable method for time series clustering that replaces the time series point values with some global measures of the characteristics of the time series. These global measures are then clustered using a selforganising map, which performs additional dimension reduction. The proposed approach has been tested using some benchmark time series previously reported for time series clustering, and is shown to yield useful and robust clustering. The resulting clusters are similar to those produced by other methods, with some interesting variations that can be intuitively explained with knowledge of the global characteristics of the time series.",
"title": ""
},
{
"docid": "2d7ff73a3fb435bd11633f650b23172e",
"text": "This study determined the effect of Tetracarpidium conophorum (black walnut) leaf extract on the male reproductive organs of albino rats. The effects of the leaf extracts were determined on the Epididymal sperm concentration, Testicular histology, and on testosterone concentration in the rat serum by a micro plate enzyme immunoassay (Testosterone assay). A total of sixteen (16) male albino wistar rats were divided into four (1, 2, 3 and 4) groups of four rats each. Group 1 served as the control and was fed with normal diet only, while groups 2, 3 and 4 were fed with 200, 400 and 600 mg/kg body weight (BW) of the extract for a period of two weeks. The Epididymal sperm concentration were not significantly affected (p>0.05) across the groups. The level of testosterone for the treatment groups 2 and 4 showed no significant difference (p>0.05) compared to the control while group 4 showed significant increase compared to that of the control (p<0.05). Pathologic changes were observed in testicular histology across the treatment groups. Robust seminiferous tubular lumen containing sperm cells and increased production of Leydig cells and Sertoli cells were observed across different treatment groups compared to that of the control.",
"title": ""
},
{
"docid": "d29485bc844995b639bb497fb05fcb6a",
"text": "Vol. LII (June 2015), 375–393 375 © 2015, American Marketing Association ISSN: 0022-2437 (print), 1547-7193 (electronic) *Paul R. Hoban is Assistant Professor of Marketing, Wisconsin School of Business, University of Wisconsin–Madison (e-mail: phoban@ bus. wisc. edu). Randolph E. Bucklin is Professor of Marketing, Peter W. Mullin Chair in Management, UCLA Anderson School of Management, University of California, Los Angeles (e-mail: randy.bucklin@anderson. ucla. edu). Avi Goldfarb served as associate editor for this article. PAUL R. HOBAN and RANDOLPH E. BUCKLIN*",
"title": ""
},
{
"docid": "b33b2abdc858b25d3aae1e789bca535c",
"text": "Rapid urbanization creates new challenges and issues, and the smart city concept offers opportunities to rise to these challenges, solve urban problems and provide citizens with a better living environment. This paper presents an exhaustive literature survey of smart cities. First, it introduces the origin and main issues facing the smart city concept, and then presents the fundamentals of a smart city by analyzing its definition and application domains. Second, a data-centric view of smart city architectures and key enabling technologies is provided. Finally, a survey of recent smart city research is presented. This paper provides a reference to researchers who intend to contribute to smart city research and implementation. 世界范围内的快速城镇化给城市发展带来了很多新的问题和挑战, 智慧城市概念的出现, 为解决当前城市难题、提供更好的城市环境提供了有效的解决途径。论文介绍了智慧城市的起源, 总结了智慧城市领域的三个主要问题, 通过详细的综述性文献研究展开对这些问题的探讨。论文首先对智慧城市的定义和应用领域进行了归纳和分析, 然后研究了智慧城市的体系架构, 提出了智慧城市以数据为中心、多领域融合的相关特征, 并定义了以数据活化技术为核心的层次化体系架构, 并介绍了其中的关键技术, 最后选取了城市交通、城市群体行为、城市规划三个具有代表性的应用领域介绍了城市数据分析与处理的最新研究进展和存在问题。",
"title": ""
},
{
"docid": "91ef2853e45d9b82f92689e0b01e6d63",
"text": "BACKGROUND\nThis study sought to evaluate the efficacy of nonoperative compression in correcting pectus carinatum in children.\n\n\nMATERIALS AND METHODS\nChildren presenting with pectus carinatum between August 1999 and January 2004 were prospectively enrolled in this study. The management protocol included custom compressive bracing, strengthening exercises, and frequent clinical follow-up.\n\n\nRESULTS\nThere were 30 children seen for evaluation. Their mean age was 13 years (range, 3-16 years) and there were 26 boys and 4 girls. Of the 30 original patients, 6 never returned to obtain the brace, leaving 24 patients in the study. Another 4 subjects were lost to follow-up. For the remaining 20 patients who have either completed treatment or continue in the study, the mean duration of bracing was 16 months, involving an average of 3 follow-up visits and 2 brace adjustments. Five of these patients had little or no improvement due to either too short a follow-up or noncompliance with the bracing. The other 15 patients (75%) had a significant to complete correction. There were no complications encountered during the study period.\n\n\nCONCLUSION\nCompressive orthotic bracing is a safe and effective alternative to both invasive surgical correction and no treatment for pectus carinatum in children. Compliance is critical to the success of this management strategy.",
"title": ""
},
{
"docid": "fd18b3d4799d23735c48bff3da8fd5ff",
"text": "There is need for an Integrated Event Focused Crawling system to collect Web data about key events. When a disaster or other significant event occurs, many users try to locate the most up-to-date information about that event. Yet, there is little systematic collecting and archiving anywhere of event information. We propose intelligent event focused crawling for automatic event tracking and archiving, ultimately leading to effective access. We developed an event model that can capture key event information, and incorporated that model into a focused crawling algorithm. For the focused crawler to leverage the event model in predicting webpage relevance, we developed a function that measures the similarity between two event representations. We then conducted two series of experiments to evaluate our system about two recent events: California shooting and Brussels attack. The first experiment series evaluated the effectiveness of our proposed event model representation when assessing the relevance of webpages. Our event model-based representation outperformed the baseline method (topic-only); it showed better results in precision, recall, and F1-score with an improvement of 20% in F1-score. The second experiment series evaluated the effectiveness of the event model-based focused crawler for collecting relevant webpages from the WWW. Our event model-based focused crawler outperformed the state-of-the-art baseline focused crawler (best-first); it showed better results in harvest ratio with an average improvement of 40%.",
"title": ""
},
{
"docid": "417fe20322c4458c58553c6d0984cabe",
"text": "Neural Turing Machines (NTMs) are an instance of Memory Augmented Neural Networks, a new class of recurrent neural networks which decouple computation from memory by introducing an external memory unit. NTMs have demonstrated superior performance over Long Short-Term Memory Cells in several sequence learning tasks. A number of open source implementations of NTMs exist but are unstable during training and/or fail to replicate the reported performance of NTMs. This paper presents the details of our successful implementation of a NTM. Our implementation learns to solve three sequential learning tasks from the original NTM paper. We find that the choice of memory contents initialization scheme is crucial in successfully implementing a NTM. Networks with memory contents initialized to small constant values converge on average 2 times faster than the next best memory contents initialization scheme.",
"title": ""
}
] |
scidocsrr
|
beca077eb153f4fef0e3419a7517832a
|
Spatiotemporal Multi-Task Network for Human Activity Understanding
|
[
{
"docid": "43e3d3639d30d9e75da7e3c5a82db60a",
"text": "This paper studies deep network architectures to address the problem of video classification. A multi-stream framework is proposed to fully utilize the rich multimodal information in videos. Specifically, we first train three Convolutional Neural Networks to model spatial, short-term motion and audio clues respectively. Long Short Term Memory networks are then adopted to explore long-term temporal dynamics. With the outputs of the individual streams, we propose a simple and effective fusion method to generate the final predictions, where the optimal fusion weights are learned adaptively for each class, and the learning process is regularized by automatically estimated class relationships. Our contributions are two-fold. First, the proposed multi-stream framework is able to exploit multimodal features that are more comprehensive than those previously attempted. Second, we demonstrate that the adaptive fusion method using the class relationship as a regularizer outperforms traditional alternatives that estimate the weights in a “free” fashion. Our framework produces significantly better results than the state of the arts on two popular benchmarks, 92.2% on UCF-101 (without using audio) and 84.9% on Columbia Consumer Videos.",
"title": ""
},
{
"docid": "47b4b22cee9d5693c16be296afe61982",
"text": "In this work we introduce a fully end-to-end approach for action detection in videos that learns to directly predict the temporal bounds of actions. Our intuition is that the process of detecting actions is naturally one of observation and refinement: observing moments in video, and refining hypotheses about when an action is occurring. Based on this insight, we formulate our model as a recurrent neural network-based agent that interacts with a video over time. The agent observes video frames and decides both where to look next and when to emit a prediction. Since backpropagation is not adequate in this non-differentiable setting, we use REINFORCE to learn the agent's decision policy. Our model achieves state-of-the-art results on the THUMOS'14 and ActivityNet datasets while observing only a fraction (2% or less) of the video frames.",
"title": ""
}
] |
[
{
"docid": "19bb054fb4c6398df99a84a382354d59",
"text": "Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms. We take the principled view of distributionally robust optimization, which guarantees performance under adversarial input perturbations. By considering a Lagrangian penalty formulation of perturbation of the underlying data distribution in a Wasserstein ball, we provide a training procedure that augments model parameter updates with worst-case perturbations of training data. For smooth losses, our procedure provably achieves moderate levels of robustness with little computational or statistical cost relative to empirical risk minimization. Furthermore, our statistical guarantees allow us to efficiently certify robustness for the population loss. We match or outperform heuristic approaches on supervised and reinforcement learning tasks.",
"title": ""
},
{
"docid": "67e599e65a963f54356b78ce436096c2",
"text": "This paper establishes the existence of observable footprints that reveal the causal dispositions of the object categories appearing in collections of images. We achieve this goal in two steps. First, we take a learning approach to observational causal discovery, and build a classifier that achieves state-of-the-art performance on finding the causal direction between pairs of random variables, given samples from their joint distribution. Second, we use our causal direction classifier to effectively distinguish between features of objects and features of their contexts in collections of static images. Our experiments demonstrate the existence of a relation between the direction of causality and the difference between objects and their contexts, and by the same token, the existence of observable signals that reveal the causal dispositions of objects.",
"title": ""
},
{
"docid": "a0f11651bb4674fd3b425a65fcbe1d58",
"text": "Two studies examined whether forgiveness in married couples is associated with better conflict resolution. Study 1 examined couples in their 3rd year of marriage and identified 2 forgiveness dimensions (retaliation and benevolence). Husbands' retaliatory motivation was a significant predictor of poorer wife-reported conflict resolution, whereas wives' benevolence motivation predicted husbands' reports of better conflict resolution. Examining longer term marriages, Study 2 identified three forgiveness dimensions (retaliation, avoidance and benevolence). Whereas wives' benevolence again predicted better conflict resolution, husbands' avoidance predicted wives' reports of poorer conflict resolution. All findings were independent of both spouses' marital satisfaction. The findings are discussed in terms of the importance of forgiveness for marital conflict and its implications for spouse goals. Future research directions on forgiveness are outlined.",
"title": ""
},
{
"docid": "f5ce55253aa69ca09fde79d6fd1c830d",
"text": "We present an approach for high-resolution video frame prediction by conditioning on both past frames and past optical flows. Previous approaches rely on resampling past frames, guided by a learned future optical flow, or on direct generation of pixels. Resampling based on flow is insufficient because it cannot deal with disocclusions. Generative models currently lead to blurry results. Recent approaches synthesis a pixel by convolving input patches with a predicted kernel. However, their memory requirement increases with kernel size. Here, we present spatially-displaced convolution (SDC) module for video frame prediction. We learn a motion vector and a kernel for each pixel and synthesize a pixel by applying the kernel at a displaced location in the source image, defined by the predicted motion vector. Our approach inherits the merits of both vector-based and kernel-based approaches, while ameliorating their respective disadvantages. We train our model on 428K unlabelled 1080p video game frames. Our approach produces state-of-the-art results, achieving an SSIM score of 0.904 on high-definition YouTube-8M videos, 0.918 on Caltech Pedestrian videos. Our model handles large motion effectively and synthesizes crisp frames with consistent motion.",
"title": ""
},
{
"docid": "0eea594d14beea7be624d9cffc543f12",
"text": "BACKGROUND\nLoss of the interproximal dental papilla may cause functional and, especially in the maxillary anterior region, phonetic and severe esthetic problems. The purpose of this study was to investigate whether the distance from the contact point to the bone crest on standardized periapical radiographs of the maxillary anterior teeth could be correlated with the presence of the interproximal papilla in Taiwanese patients.\n\n\nMETHODS\nIn total, 200 interproximal sites of maxillary anterior teeth in 45 randomly selected patients were examined. Selected subjects were adult Taiwanese with fully erupted permanent dentition. The presence of the interproximal papilla was determined visually. If there was no visible space apical to the contact area, the papilla was recorded as being present. The distance from the contact point to the crest of bone was measured on standardized periapical radiographs using a paralleling technique with a RinnXCP holder.\n\n\nRESULTS\nData revealed that when the distance from the contact point to the bone crest on standardized periapical radiographs was 5 mm or less, the papillae were almost 100% present. When the distance was 6 mm, 51% of the papillae were present, and when the distance was 7 mm or greater, only 23% of the papillae were present.\n\n\nCONCLUSION\nThe distance from the contact point to the bone crest on standardized periapical radiographs of the maxillary anterior teeth is highly associated with the presence or absence of the interproximal papilla in Taiwanese patients, and is a useful guide for clinical evaluation.",
"title": ""
},
{
"docid": "b2cc05224008233e6a9b807b76a1fbf5",
"text": "This paper presents a non-isolated, high boost ratio hybrid transformer dc-dc converter with applications for low voltage renewable energy sources. The proposed converter utilizes a hybrid transformer to transfer the inductive and capacitive energy simultaneously, achieving a high boost ratio with a smaller size magnetic component. As a result of incorporating the resonant operation mode into the traditional high boost ratio PWM converter, the turn off loss of the switch is reduced, increasing the efficiency of the converter under all load conditions. The input current ripple is also reduced because of the linear-sinusoidal hybrid waveforms. The voltage stresses on the active switch and diodes are maintained at a low level and are independent of the changing input voltage over a wide range as a result of the resonant capacitor transferring energy to the output. The effectiveness of the proposed converter was experimentally verified using a 220 W prototype circuit. Utilizing an input voltage ranging from 20V to 45V and a load range of 30W to 220W, the experimental results show system of efficiencies greater than 96% with a peak efficiency of 97.4% at 35V input, 160W output. Because of high efficiency over wide output power range and the ability to operate with a wide variable input voltage, the proposed converter is an attractive design for alternative low dc voltage energy sources, such as solar photovoltaic (PV) modules.",
"title": ""
},
{
"docid": "423228556cb473e0fab48a2dc57cbf6f",
"text": "This paper focus on the dynamic modeling and the LQR and PID controllers for the self balancing unicycle robot. The mechanism of the unicycle robot is designed. The pitching and rolling balance could be achieved by the driving of the motor on the wheel and the balance weight on the body of robot. The dynamic equations of the robot are presented based on the Routh equation. On this basis, the LQR and PID controllers of the unicycle robot are proposed. The experimentations of balance control are showed through the Simulink toolbox of Matlab. The simulation results show that the robot could achieve self balancing after a short period of time by the designed controllers. According to comparing the results, the errors of PID controller are relatively smaller than LQR. The response speed of LQR controller is faster than PID. At last a kind of LQR&PID controller is proposed. This controller has the advantages of both LQR and PID controllers.",
"title": ""
},
{
"docid": "9901be4dddeb825f6443d75a6566f2d0",
"text": "In this paper a new approach to gas leakage detection in high pressure natural gas transportation networks is proposed. The pipeline is modelled as a Linear Parameter Varying (LPV) System driven by the source node massflow with the gas inventory variation in the pipe (linepack variation, proportional to the pressure variation) as the scheduling parameter. The massflow at the offtake node is taken as the system output. The system is identified by the Successive Approximations LPV System Subspace Identification Algorithm which is also described in this paper. The leakage is detected using a Kalman filter where the fault is treated as an augmented state. Given that the gas linepack can be estimated from the massflow balance equation, a differential method is proposed to improve the leakage detector effectiveness. A small section of a gas pipeline crossing Portugal in the direction South to North is used as a case study. LPV models are identified from normal operational data and their accuracy is analyzed. The proposed LPV Kalman filter based methods are compared with a standard mass balance method in a simulated 10% leakage detection scenario. The Differential Kalman Filter method proved to be highly efficient.",
"title": ""
},
{
"docid": "39673b789ee8d8c898c93b7627b31f0a",
"text": "In this position paper, we initiate a systematic treatment of reaching consensus in a permissionless network. We prove several simple but hopefully insightful lower bounds that demonstrate exactly why reaching consensus in a permission-less setting is fundamentally more difficult than the classical, permissioned setting. We then present a simplified proof of Nakamoto's blockchain which we recommend for pedagogical purposes. Finally, we survey recent results including how to avoid well-known painpoints in permissionless consensus, and how to apply core ideas behind blockchains to solve consensus in the classical, permissioned setting and meanwhile achieve new properties that are not attained by classical approaches.",
"title": ""
},
{
"docid": "5ce4f8227c5eebfb8b7b1dffc5557712",
"text": "In this paper, we propose a novel approach for face spoofing detection using the high-order Local Derivative Pattern from Three Orthogonal Planes (LDP-TOP). The proposed method is not only simple to derive and implement, but also highly efficient, since it takes into account both spatial and temporal information in different directions of subtle face movements. According to experimental results, the proposed approach outperforms state-of-the-art methods on three reference datasets, namely Idiap REPLAY-ATTACK, CASIA-FASD, and MSU MFSD. Moreover, it requires only 25 video frames from each video, i.e., only one second, and thus potentially can be performed in real time even on low-cost devices.",
"title": ""
},
{
"docid": "b02d9621ee919bccde66418e0681d1e6",
"text": "A great deal of work has been done on the evaluation of information retrieval systems for alphanumeric data. The same thing can not be said about the newly emerging multimedia and image database systems. One of the central concerns in these systems is the automatic characterization of image content and retrieval of images based on similarity of image content. In this paper, we discuss effectiveness of several shape measures for content based similarity retrieval of images. The different shape measures we have implemented include outline based features (chain code based string features, Fourier descriptors, UNL Fourier features), region based features (invariant moments, Zemike moments, pseudoZemike moments), and combined features (invariant moments & Fourier descriptors, invariant moments & UNL Fourier features). Given an image, all these shape feature measures (vectors) are computed automatically, and the feature vector can either be used for the retrieval purpose or can be stored in the database for future queries. We have tested all of the above shape features for image retrieval on a database of 500 trademark images. The average retrieval efficiency values computed over a set of fifteen representative queries for all the methods is presented. The output of a sample shape similarity query using all the features is also shown.",
"title": ""
},
{
"docid": "ccac025250d397a5bcc6a5f847d2cc81",
"text": "With the widespread clinical use of comparative genomic hybridization chromosomal microarray technology, several previously unidentified clinically significant submicroscopic chromosome abnormalities have been discovered. Specifically, there have been reports of clinically significant microduplications found in regions of known microdeletion syndromes. In general, these microduplications have distinct features from those described in the corresponding microdeletion syndromes. We present a 5½-year-old patient with normal growth, borderline normal IQ, borderline hypertelorism, and speech and language delay who was found to have a submicroscopic 2.3 Mb terminal duplication involving the two proposed Wolf-Hirschhorn syndrome (WHS) critical regions at chromosome 4p16.3. This duplication was the result of a maternally inherited reciprocal translocation involving the breakpoints 4p16.3 and 17q25.3. Our patient's features are distinct from those described in WHS and are not as severe as those described in partial trisomy 4p. There are two other patients in the medical literature with 4p16.3 microduplications of similar size also involving the WHS critical regions. Our patient shows clinical overlap with these two patients, although overall her features are milder than what has been previously described. Our patient's features expand the knowledge of the clinical phenotype of a 4p16.3 microduplication and highlight the need for further information about it.",
"title": ""
},
{
"docid": "c9e3521029a45be5e32d79700a096083",
"text": "In this paper, we propose Dynamics Transfer GAN; a new method for generating video sequences based on generative adversarial learning. The spatial constructs of a generated video sequence are acquired from the target image. The dynamics of the generated video sequence are imported from a source video sequence, with arbitrary motion, and imposed onto the target image. To preserve the spatial construct of the target image, the appearance of the source video sequence is suppressed and only the dynamics are obtained before being imposed onto the target image. That is achieved using the proposed appearance suppressed dynamics feature. Moreover, the spatial and temporal consistencies of the generated video sequence are verified via two discriminator networks. One discriminator validates the fidelity of the generated frames appearance, while the other validates the dynamic consistency of the generated video sequence. Experiments have been conducted to verify the quality of the video sequences generated by the proposed method. The results verified that Dynamics Transfer GAN successfully transferred arbitrary dynamics of the source video sequence onto a target image when generating the output video sequence. The experimental results also showed that Dynamics Transfer GAN maintained the spatial constructs (appearance) of the target image while generating spatially and temporally consistent video sequences.",
"title": ""
},
{
"docid": "e00295dc86476d1d350d11068439fe87",
"text": "A 10-bit LCD column driver, consisting of piecewise linear digital to analog converters (DACs), is proposed. Piecewise linear compensation is utilized to reduce the die area and to increase the effective color depth. The data conversion is carried out by a resistor string type DAC (R-DAC) and a charge sharing DAC, which are used for the most significant bit and least significant bit data conversions, respectively. Gamma correction voltages are applied to the R-DAC to lit the inverse of the liquid crystal trans-mittance-voltage characteristic. The gamma correction can also be digitally fine-tuned in the timing controller or column drivers. A prototype 10-bit LCD column driver implemented in a 0.35-mum CMOS technology demonstrates that the settling time is within 3 mus and the average die size per channel is 0.063 mm2, smaller than those of column drivers based exclusively on R-DACs.",
"title": ""
},
{
"docid": "12266d895ea552965d9bc06b676b2cab",
"text": "A new concept development and practical implementation of an OFDM based secondary cognitive link are presented in this paper. Coexistence of a secondary user employing Orthogonal Frequency Division Multiplexing (OFDM) and a primary user employing Frequency Hopping (FH) is achieved. Secondary and primary links are realized using Universal Software Radio Peripheral (USRP) N210 platforms. Cognitive features of spectrum sensing and changing transmission parameters are implemented. Some experimental results are presented.",
"title": ""
},
{
"docid": "1a620e17048fa25cfc54f5c9fb821f39",
"text": "The performance of a detector depends much on its training dataset and drops significantly when the detector is applied to a new scene due to the large variations between the source training dataset and the target scene. In order to bridge this appearance gap, we propose a deep model to automatically learn scene-specific features and visual patterns in static video surveillance without any manual labels from the target scene. It jointly learns a scene-specific classifier and the distribution of the target samples. Both tasks share multi-scale feature representations with both discriminative and representative power. We also propose a cluster layer in the deep model that utilizes the scenespecific visual patterns for pedestrian detection. Our specifically designed objective function not only incorporates the confidence scores of target training samples but also automatically weights the importance of source training samples by fitting the marginal distributions of target samples. It significantly improves the detection rates at 1 FPPI by 10% compared with the state-of-the-art domain adaptation methods on MIT Traffic Dataset and CUHK Square Dataset.",
"title": ""
},
{
"docid": "f518ee9b64721866d69f8d1982200c72",
"text": "Bradyrhizobium japonicum is one of the soil bacteria that form nodules on soybean roots. The cell has two sets of flagellar systems, one thick flagellum and a few thin flagella, uniquely growing at subpolar positions. The thick flagellum appears to be semicoiled in morphology, and the thin flagella were in a tight-curly form as observed by dark-field microscopy. Flagellin genes were identified from the amino acid sequence of each flagellin. Flagellar genes for the thick flagellum are scattered into several clusters on the genome, while those genes for the thin flagellum are compactly organized in one cluster. Both types of flagella are powered by proton-driven motors. The swimming propulsion is supplied mainly by the thick flagellum. B. japonicum flagellar systems resemble the polar-lateral flagellar systems of Vibrio species but differ in several aspects.",
"title": ""
},
{
"docid": "d51f0b51f03e310dd183e3a7cb199288",
"text": "Traditional vision-based localization methods such as visual SLAM suffer from practical problems in outdoor environments such as unstable feature detection and inability to perform location recognition under lighting, perspective, weather and appearance change. Additionally map construction on a large scale in these systems presents its own challenges. In this work, we present a novel method for precisely localizing vehicles on the road using signs marked on the road (road markings), which have the advantage of being distinct and easy to detect, their detection being robust under changes in lighting and weather. Our method uses corners detected on road markings to perform localization in global coordinates. The method consists of two phases - a mapping phase when a high-quality GPS device is used to automatically survey road marks and add them to a light-weight “map” or database, and a localization phase where road mark detection and look-up in the map, combined with visual odometry, produces precise localization. We present experiments using a real-time implementation operating in a car that demonstrates the improved localization robustness and accuracy of our system even when using road marks alone. However, in this case the trajectory between road marks has to be filled-in by visual odometry, which contributes drift. Hence, we also present a mechanism for combining road-mark-based maps with sparse feature-based maps that results in greater accuracy still. We see our use of road marks as a significant step in the general trend of using higher-level features for improved localization performance irrespective of environment conditions.",
"title": ""
},
{
"docid": "2a9c8e0b6c08905fc04415d36432afe0",
"text": "Technological advancements have led to the development of numerous wearable robotic devices for the physical assistance and restoration of human locomotion. While many challenges remain with respect to the mechanical design of such devices, it is at least equally challenging and important to develop strategies to control them in concert with the intentions of the user. This work reviews the state-of-the-art techniques for controlling portable active lower limb prosthetic and orthotic (P/O) devices in the context of locomotive activities of daily living (ADL), and considers how these can be interfaced with the user’s sensory-motor control system. This review underscores the practical challenges and opportunities associated with P/O control, which can be used to accelerate future developments in this field. Furthermore, this work provides a classification scheme for the comparison of the various control strategies. As a novel contribution, a general framework for the control of portable gait-assistance devices is proposed. This framework accounts for the physical and informatic interactions between the controller, the user, the environment, and the mechanical device itself. Such a treatment of P/Os – not as independent devices, but as actors within an ecosystem – is suggested to be necessary to structure the next generation of intelligent and multifunctional controllers. Each element of the proposed framework is discussed with respect to the role that it plays in the assistance of locomotion, along with how its states can be sensed as inputs to the controller. The reviewed controllers are shown to fit within different levels of a hierarchical scheme, which loosely resembles the structure and functionality of the nominal human central nervous system (CNS). Active and passive safety mechanisms are considered to be central aspects underlying all of P/O design and control, and are shown to be critical for regulatory approval of such devices for real-world use. The works discussed herein provide evidence that, while we are getting ever closer, significant challenges still exist for the development of controllers for portable powered P/O devices that can seamlessly integrate with the user’s neuromusculoskeletal system and are practical for use in locomotive ADL.",
"title": ""
},
{
"docid": "8c2d6aac36ea2c10463ad05fc5f9b854",
"text": "Motion planning plays a key role in autonomous driving. In this work, we introduce the combinatorial aspect of motion planning which tackles the fact that there are usually many possible and locally optimal solutions to accomplish a given task. Those options we call maneuver variants. We argue that by partitioning the trajectory space into discrete solution classes, such that local optimization methods yield an optimum within each discrete class, we can improve the chance of finding the global optimum as the optimum trajectory among the manuever variants. This work provides methods to enumerate the maneuver variants as well as constraints to enforce them. The return of the effort put into the problem modification as suggested is gaining assuredness in the convergency behaviour of the optimization algorithm. We show an experiment where we identify three local optima that would not have been found with local optimization methods.",
"title": ""
}
] |
scidocsrr
|
02945455bace14295528dd3daf6f847d
|
Magnetic induction for MWD telemetry system
|
[
{
"docid": "dba3434c600ed7ddbb944f0a3adb1ba0",
"text": "Although acoustic waves are the most versatile and widely used physical layer technology for underwater wireless communication networks (UWCNs), they are adversely affected by ambient noise, multipath propagation, and fading. The large propagation delays, low bandwidth, and high bit error rates of the underwater acoustic channel hinder communication as well. These operational limits call for complementary technologies or communication alternatives when the acoustic channel is severely degraded. Magnetic induction (MI) is a promising technique for UWCNs that is not affected by large propagation delays, multipath propagation, and fading. In this paper, the MI communication channel has been modeled. Its propagation characteristics have been compared to the electromagnetic and acoustic communication systems through theoretical analysis and numerical evaluations. The results prove the feasibility of MI communication in underwater environments. The MI waveguide technique is developed to reduce path loss. The communication range between source and destination is considerably extended to hundreds of meters in fresh water due to its superior bit error rate performance.",
"title": ""
}
] |
[
{
"docid": "bb570de5244b6d4bd066244722060830",
"text": "Impact happens when two or more bodies collide, generating very large impulsive forces in a very short period of time during which kinetic energy is first absorbed and then released after some loss. This paper introduces a state transition diagram to model a frictionless multibody collision. Each state describes a different topology of the collision characterized by the set of instantaneously active contacts. A change of state happens when a contact disappears at the end of restitution, or when a disappeared contact reappears as the relative motion of two bodies goes from separation into penetration. Within a state, (normal) impulses are coupled differentially subject to relative stiffnesses at the active contact points and the strain energies stored there. Such coupling may cause restart of compression from restitution during a single impact. Impulses grow along a bounded curve with first-order continuity, and converge during the state transitions. To solve a multibody collision problem with friction and tangential compliance, the above impact model is integrated with a compliant impact model. The paper compares model predictions to a physical experiment for the massé shot, which is a difficult trick in billiards, with a good result.",
"title": ""
},
{
"docid": "54d293423026d84bce69e8e073ebd6ac",
"text": "AIMS\nPredictors of Response to Cardiac Resynchronization Therapy (CRT) (PROSPECT) was the first large-scale, multicentre clinical trial that evaluated the ability of several echocardiographic measures of mechanical dyssynchrony to predict response to CRT. Since response to CRT may be defined as a spectrum and likely influenced by many factors, this sub-analysis aimed to investigate the relationship between baseline characteristics and measures of response to CRT.\n\n\nMETHODS AND RESULTS\nA total of 286 patients were grouped according to relative reduction in left ventricular end-systolic volume (LVESV) after 6 months of CRT: super-responders (reduction in LVESV > or =30%), responders (reduction in LVESV 15-29%), non-responders (reduction in LVESV 0-14%), and negative responders (increase in LVESV). In addition, three subgroups were formed according to clinical and/or echocardiographic response: +/+ responders (clinical improvement and a reduction in LVESV > or =15%), +/- responders (clinical improvement or a reduction in LVESV > or =15%), and -/- responders (no clinical improvement and no reduction in LVESV > or =15%). Differences in clinical and echocardiographic baseline characteristics between these subgroups were analysed. Super-responders were more frequently females, had non-ischaemic heart failure (HF), and had a wider QRS complex and more extensive mechanical dyssynchrony at baseline. Conversely, negative responders were more frequently in New York Heart Association class IV and had a history of ventricular tachycardia (VT). Combined positive responders after CRT (+/+ responders) had more non-ischaemic aetiology, more extensive mechanical dyssynchrony at baseline, and no history of VT.\n\n\nCONCLUSION\nSub-analysis of data from PROSPECT showed that gender, aetiology of HF, QRS duration, severity of HF, a history of VT, and the presence of baseline mechanical dyssynchrony influence clinical and/or LV reverse remodelling after CRT. Although integration of information about these characteristics would improve patient selection and counselling for CRT, further randomized controlled trials are necessary prior to changing the current guidelines regarding patient selection for CRT.",
"title": ""
},
{
"docid": "2df316f30952ffdb4da1e9797b9658bb",
"text": "Breast cancer is a leading disease worldwide, and the success of medical therapies is heavily related to the availability of breast cancer imaging techniques. While current methods, mainly ultrasound, x-ray mammography, and magnetic resonance imaging, all exhibit some disadvantages, a possible alternative investigated in recent years is based on microwave and mm-wave imaging system. A key point for these systems is their reliability in terms of safety, in particular exposure limits. This paper presents a feasibility study for a mm-wave breast cancer imaging system, with the aim of ensuring safety and compliance with the widely adopted European ICNIRP recommendations. The study is based on finite element method models of human tissues, experimentally characterized by measures obtained at one of the most important European clinical center for cancer treatments. Results prove the feasibility of the system, which can meet the exposure limits while providing the required dynamic range to let the receiver detect the cancer anomaly. In addition, the dosimetric quantities used at the present and their maximum limits at mm-waves are taking into discussion and the possibility of needing moderns quantities and limitations is discussed.",
"title": ""
},
{
"docid": "50e9cf4ff8265ce1567a9cc82d1dc937",
"text": "Thu, 06 Dec 2018 02:11:00 GMT bayesian reasoning and machine learning pdf Bayesian Reasoning and Machine Learning [David Barber] on Amazon.com. *FREE* shipping on qualifying offers. Machine learning methods extract value from vast data sets ... Thu, 06 Dec 2018 14:35:00 GMT Bayesian Reasoning and Machine Learning: David Barber ... A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of ... Sat, 08 Dec 2018 04:53:00 GMT Bayesian network Wikipedia Bayesian Reasoning and Machine Learning. The book is available in hardcopy from Cambridge University Press. The publishers have kindly agreed to allow the online ... Sun, 09 Dec 2018 20:51:00 GMT Bayesian Reasoning and Machine Learning, David Barber Machine learning (ML) is the study of algorithms and mathematical models that computer systems use to progressively improve their performance on a specific task. Mon, 10 Dec 2018 14:02:00 GMT Machine learning Wikipedia Your friends and colleagues are talking about something called \"Bayes' Theorem\" or \"Bayes' Rule\", or something called Bayesian reasoning. They sound really ... Mon, 10 Dec 2018 14:24:00 GMT Yudkowsky Bayes' Theorem NIPS 2016 Tutorial on ML Methods for Personalization with Application to Medicine. More here. UAI 2017 Tutorial on Machine Learning and Counterfactual Reasoning for ... Thu, 06 Dec 2018 15:33:00 GMT Suchi Saria – Machine Learning, Computational Health ... Gaussian Processes and Kernel Methods Gaussian processes are non-parametric distributions useful for doing Bayesian inference and learning on unknown functions. Mon, 10 Dec 2018 05:12:00 GMT Machine Learning Group Publications University of This practical introduction is geared towards scientists who wish to employ Bayesian networks for applied research using the BayesiaLab software platform. Sun, 09 Dec 2018 17:17:00 GMT Bayesian Networks & BayesiaLab: A Practical Introduction ... Automated Bitcoin Trading via Machine Learning Algorithms Isaac Madan Department of Computer Science Stanford University Stanford, CA 94305 imadan@stanford.edu Tue, 27 Nov 2018 20:01:00 GMT Automated Bitcoin Trading via Machine Learning Algorithms 2.3. Naà ̄ve Bayesian classifier. A Naà ̄ve Bayesian classifier generally seems very simple; however, it is a pioneer in most information and computational applications ... Sun, 09 Dec 2018 03:48:00 GMT Proposed efficient algorithm to filter spam using machine ... Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning) [Kevin P. Murphy, Francis Bach] on Amazon.com. *FREE* shipping on qualifying ... Sun, 01 Jul 2018 19:30:00 GMT Machine Learning: A Probabilistic Perspective (Adaptive ... So itâ€TMs pretty clear by now that statistics and machine learning arenâ€TMt very different fields. I was recently pointed to a very amusing comparison by the ... Fri, 07 Dec 2018 19:56:00 GMT Statistics vs. Machine Learning, fight! | AI and Social ... Need help with Statistics for Machine Learning? Take my free 7-day email crash course now (with sample code). Click to sign-up and also get a free PDF Ebook version ... Thu, 06 Dec 2018 23:39:00 GMT Statistics for Evaluating Machine Learning Models",
"title": ""
},
{
"docid": "deb2e0c23d3d9ad4d37a8f23bb2280f5",
"text": "The purpose of this study was to test if replacement of trans fatty acids by palmitic acid in an experimental margarine results in unfavourable effects on serum lipids and haemostatic factors. We have compared the effects of three different margarines, one based on palm oil (PALM-margarine), one based on partially hydrogenated soybean oil (TRANS- margarine) and one with a high content of polyunsaturated fatty acids (PUFA-margarine), on serum lipids in 27 young women. In nine of the participants fasting levels and diurnal postprandial levels of haemostatic variables on the 3 diets were compared. The sum of 12:0, 14:0, 16:0 provided 11% of energy (E%) in the PALM diet, the same as the sum of 12:0, 14:0, 16:0 and trans fatty acids in the TRANS-diet. Oleic acid provided 10-11E% in all three diets, while PUFA provided 5.7, 5.5 and 10.2 E%, respectively. Total fat provided 30-31% and the test margarines 26% of total energy in all three diets. Each of the diets was consumed for 17 days in a crossover design. There were no significant differences in total cholesterol, LDL-cholesterol and apoB between the TRANS- and the PALM-diet. HDL-cholesterol and apoA-I were significantly higher on the PALM-diet compared to the TRANS-diet while the ratio of LDL- to HDL-cholesterol was lower, although not significantly (P = 0.077) on the PALM-diet. Total cholesterol, LDL-cholesterol and apoB were significantly lower on the PUFA-diet compared to the two other diets. HDL-cholesterol was not different on the PALM- and the PUFA-diet while it was significantly lower on the TRANS-diet compared to the PUFA-diet. Triglycerides and Lp(a) were not different among the three diets. The diurnal postprandial state level of tissue plasminogen activator (t-PA) activity was significantly decreased on the TRANS-diet compared to the PALM-diet. t-PA activity was also decreased on the PUFA-diet compared to PALM-diet although not significantly (P=0.07). There were no significant differences in neither fasting levels or in circadian variation of t-PA antigen, PAI-1 activity, PAI-1 antigen, factor VII coagulant activity or fibrinogen between the three diets. Our results suggest that dietary palm oil may have a more favourable effect on the fibrinolytic system compared to partially hydrogenated soybean oil. We conclude that from a nutritional point of view, palmitic acid from palm oil may be a reasonable alternative to trans fatty acids from partially hydrogenated soybean oil in margarine if the aim is to avoid trans fatty acids. A palm oil based margarine is, however, less favourable than one based on a more polyunsaturated vegetable oil.",
"title": ""
},
{
"docid": "b8f50ba62325ffddcefda7030515fd22",
"text": "The following statement is intended to provide an understanding of the governance and legal structure of the University of Sheffield. The University is an independent corporation whose legal status derives from a Royal Charter granted in 1905. It is an educational charity, with exempt status, regulated by the Office for Students in its capacity as Principal Regulator. The University has charitable purposes and applies them for the public benefit. It must comply with the general law of charity. The University’s objectives, powers and governance framework are set out in its Charter and supporting Statutes and Regulations.",
"title": ""
},
{
"docid": "fb8e6eac761229fc8c12339fb68002ed",
"text": "Cerebrovascular disease results from any pathological process of the blood vessels supplying the brain. Stroke, characterised by its abrupt onset, is the third leading cause of death in humans. This rare condition in dogs is increasingly being recognised with the advent of advanced diagnostic imaging. Magnetic resonance imaging (MRI) is the first choice diagnostic tool for stroke, particularly using diffusion-weighted images and magnetic resonance angiography for ischaemic stroke and gradient echo sequences for haemorrhagic stroke. An underlying cause is not always identified in either humans or dogs. Underlying conditions that may be associated with canine stroke include hypothyroidism, neoplasia, sepsis, hypertension, parasites, vascular malformation and coagulopathy. Treatment is mainly supportive and recovery often occurs within a few weeks. The prognosis is usually good if no underlying disease is found.",
"title": ""
},
{
"docid": "cf88fe250c9dd50caf4f462acdd71238",
"text": "We present Code Phage (CP), a system for automatically transferring correct code from donor applications into recipient applications that process the same inputs to successfully eliminate errors in the recipient. Experimental results using seven donor applications to eliminate ten errors in seven recipient applications highlight the ability of CP to transfer code across applications to eliminate out of bounds access, integer overflow, and divide by zero errors. Because CP works with binary donors with no need for source code or symbolic information, it supports a wide range of use cases. To the best of our knowledge, CP is the first system to automatically transfer code across multiple applications.",
"title": ""
},
{
"docid": "4c175d69ae46f58dc217984192b1a0f0",
"text": "Haptic interaction is an increasingly common form of interaction in virtual environment (VE) simulations. This medium introduces some new challenges. In this paper we study the problem arising from the difference between the sampling rate requirements of haptic interfaces and the significantly lower update rates of the physical models being manipulated. We propose a multirate simulation approach which uses a local linear approximation. The treatment includes a detailed analysis and experimental verification of the approach. The proposed method is also shown to improve the stability of the haptic interaction.",
"title": ""
},
{
"docid": "8d3c1e649e40bf72f847a9f8ac6edf38",
"text": "Many organizations are forming “virtual teams” of geographically distributed knowledge workers to collaborate on a variety of workplace tasks. But how effective are these virtual teams compared to traditional face-to-face groups? Do they create similar teamwork and is information exchanged as effectively? An exploratory study of a World Wide Web-based asynchronous computer conference system known as MeetingWebTM is presented and discussed. It was found that teams using this computer-mediated communication system (CMCS) could not outperform traditional (face-to-face) teams under otherwise comparable circumstances. Further, relational links among team members were found to be a significant contributor to the effectiveness of information exchange. Though virtual and face-to-face teams exhibit similar levels of communication effectiveness, face-to-face team members report higher levels of satisfaction. Therefore, the paper presents steps that can be taken to improve the interaction experience of virtual teams. Finally, guidelines for creating and managing virtual teams are suggested, based on the findings of this research and other authoritative sources. Subject Areas: Collaboration, Computer Conference, Computer-mediated Communication Systems (CMCS), Internet, Virtual Teams, and World Wide Web. *The authors wish to thank the Special Focus Editor and the reviewers for their thoughtful critique of the earlier versions of this paper. We also wish to acknowledge the contributions of the Northeastern University College of Business Administration and its staff, which provided the web server and the MeetingWebTM software used in these experiments.",
"title": ""
},
{
"docid": "1042329bbc635f1b39a5d15d795be8a3",
"text": "In this work we present a method to estimate a 3D face shape from a single image. Our method is based on a cascade regression framework that directly estimates face landmarks locations in 3D. We include the knowledge that a face is a 3D object into the learning pipeline and show how this information decreases localization errors while keeping the computational time low. We predict the actual positions of the landmarks even if they are occluded due to face rotation. To support the ability of our method to reliably reconstruct 3D shapes, we introduce a simple method for head pose estimation using a single image that reaches higher accuracy than the state of the art. Comparison of 3D face landmarks localization with the available state of the art further supports the feasibility of a single-step face shape estimation. The code, trained models and our 3D annotations will be made available to the research community.",
"title": ""
},
{
"docid": "aabef3695f38fdf565700e5e374098fd",
"text": "T are two broad categories of risk affecting supply chain design and management: (1) risks arising from the problems of coordinating supply and demand, and (2) risks arising from disruptions to normal activities. This paper is concerned with the second category of risks, which may arise from natural disasters, from strikes and economic disruptions, and from acts of purposeful agents, including terrorists. The paper provides a conceptual framework that reflects the joint activities of risk assessment and risk mitigation that are fundamental to disruption risk management in supply chains. We then consider empirical results from a rich data set covering the period 1995–2000 on accidents in the U.S. Chemical Industry. Based on these results and other literature, we discuss the implications for the design of management systems intended to cope with supply chain disruption risks.",
"title": ""
},
{
"docid": "ee4288bcddc046ae5e9bcc330264dc4f",
"text": "Emerging recognition of two fundamental errors underpinning past polices for natural resource issues heralds awareness of the need for a worldwide fundamental change in thinking and in practice of environmental management. The first error has been an implicit assumption that ecosystem responses to human use are linear, predictable and controllable. The second has been an assumption that human and natural systems can be treated independently. However, evidence that has been accumulating in diverse regions all over the world suggests that natural and social systems behave in nonlinear ways, exhibit marked thresholds in their dynamics, and that social-ecological systems act as strongly coupled, complex and evolving integrated systems. This article is a summary of a report prepared on behalf of the Environmental Advisory Council to the Swedish Government, as input to the process of the World Summit on Sustainable Development (WSSD) in Johannesburg, South Africa in 26 August 4 September 2002. We use the concept of resilience--the capacity to buffer change, learn and develop--as a framework for understanding how to sustain and enhance adaptive capacity in a complex world of rapid transformations. Two useful tools for resilience-building in social-ecological systems are structured scenarios and active adaptive management. These tools require and facilitate a social context with flexible and open institutions and multi-level governance systems that allow for learning and increase adaptive capacity without foreclosing future development options.",
"title": ""
},
{
"docid": "6ffcdaafcda083517bbfe4fa06f5df87",
"text": "This paper reports a qualitative study designed to investigate the issues of cybersafety and cyberbullying and report how students are coping with them. Through discussion with 74 students, aged from 10 to 17, in focus groups divided into three age levels, data were gathered in three schools in Victoria, Australia, where few such studies had been set. Social networking sites and synchronous chat sites were found to be the places where cyberbullying most commonly occurred, with email and texting on mobile phones also used for bullying. Grades 8 and 9 most often reported cyberbullying and also reported behaviours and internet contacts that were cybersafety risks. Most groups preferred to handle these issues themselves or with their friends rather then alert parents and teachers who may limit their technology access. They supported education about these issues for both adults and school students and favoured a structured mediation group of their peers to counsel and advise victims.",
"title": ""
},
{
"docid": "9a00d5d6585cb766be0459bbdb76a612",
"text": "Nature within cities will have a central role in helping address key global public health challenges associated with urbanization. However, there is almost no guidance on how much or how frequently people need to engage with nature, and what types or characteristics of nature need to be incorporated in cities for the best health outcomes. Here we use a nature dose framework to examine the associations between the duration, frequency and intensity of exposure to nature and health in an urban population. We show that people who made long visits to green spaces had lower rates of depression and high blood pressure, and those who visited more frequently had greater social cohesion. Higher levels of physical activity were linked to both duration and frequency of green space visits. A dose-response analysis for depression and high blood pressure suggest that visits to outdoor green spaces of 30 minutes or more during the course of a week could reduce the population prevalence of these illnesses by up to 7% and 9% respectively. Given that the societal costs of depression alone in Australia are estimated at AUD$12.6 billion per annum, savings to public health budgets across all health outcomes could be immense.",
"title": ""
},
{
"docid": "0bfdad99e0762951f5cc57026cd364c9",
"text": "Causal effects are defined as comparisons of potential outcomes under different treatments on a common set of units. Observed values of the potential outcomes are revealed by the assignment mechanism—a probabilistic model for the treatment each unit receives as a function of covariates and potential outcomes. Fisher made tremendous contributions to causal inference through his work on the design of randomized experiments, but the potential outcomes perspective applies to other complex experiments and nonrandomized studies as well. As noted by Kempthorne in his 1976 discussion of Savage’s Fisher lecture, Fisher never bridged his work on experimental design and his work on parametric modeling, a bridge that appears nearly automatic with an appropriate view of the potential outcomes framework, where the potential outcomes and covariates are given a Bayesian distribution to complete the model specification. Also, this framework crisply separates scientific inference for causal effects and decisions based on such inference, a distinction evident in Fisher’s discussion of tests of significance versus tests in an accept/reject framework. But Fisher never used the potential outcomes framework, originally proposed by Neyman in the context of randomized experiments, and as a result he provided generally flawed advice concerning the use of the analysis of covariance to adjust for posttreatment concomitants in randomized trials.",
"title": ""
},
{
"docid": "6d23bd2813ea3785b8b20d24e31279d8",
"text": "General-purpose GPUs have been widely utilized to accelerate parallel applications. Given a relatively complex programming model and fast architecture evolution, producing efficient GPU code is nontrivial. A variety of simulation and profiling tools have been developed to aid GPU application optimization and architecture design. However, existing tools are either limited by insufficient insights or lacking in support across different GPU architectures, runtime and driver versions. This paper presents CUDAAdvisor, a profiling framework to guide code optimization in modern NVIDIA GPUs. CUDAAdvisor performs various fine-grained analyses based on the profiling results from GPU kernels, such as memory-level analysis (e.g., reuse distance and memory divergence), control flow analysis (e.g., branch divergence) and code-/data-centric debugging. Unlike prior tools, CUDAAdvisor supports GPU profiling across different CUDA versions and architectures, including CUDA 8.0 and Pascal architecture. We demonstrate several case studies that derive significant insights to guide GPU code optimization for performance improvement.",
"title": ""
},
{
"docid": "2504c87326f94f26a1209e197d351ecb",
"text": "This paper summarizes recent contributions in the broad area of energy harvesting wireless communications. In particular, we provide the current state of the art for wireless networks composed of energy harvesting nodes, starting from the information-theoretic performance limits to transmission scheduling policies and resource allocation, medium access, and networking issues. The emerging related area of energy transfer for self-sustaining energy harvesting wireless networks is considered in detail covering both energy cooperation aspects and simultaneous energy and information transfer. Various potential models with energy harvesting nodes at different network scales are reviewed, as well as models for energy consumption at the nodes.",
"title": ""
},
{
"docid": "09bb06388c9018c205c09406b360692b",
"text": "Detecting anomalies in large-scale, streaming datasets has wide applicability in a myriad of domains like network intrusion detection for cyber-security, fraud detection for credit cards, system health monitoring, and fault detection in safety critical systems. Due to its wide applicability, the problem of anomaly detection has been well-studied by industry and academia alike, and many algorithms have been proposed for detecting anomalies in different problem settings. But until recently, there was no openly available, systematic dataset and/or framework using which the proposed anomaly detection algorithms could be compared and evaluated on a common ground. Numenta Anomaly Benchmark (NAB), made available by Numenta1 in 2015, addressed this gap by providing a set of openly-available, labeled data files and a common scoring system, using which different anomaly detection algorithms could be fairly evaluated and compared. In this paper, we provide an in-depth analysis of the key aspects of the NAB framework, and highlight inherent challenges therein, with the objective to provide insights about the gaps in the current framework that must be addressed so as to make it more robust and easy-to-use. Furthermore, we also provide additional evaluation of five state-of-the-art anomaly detection algorithms (including the ones proposed by Numenta) using the NAB datasets, and based on the evaluation results, we argue that the performance of these algorithms is not sufficient for practical, industry-scale applications, and must be improved upon so as to make them suitable for large-scale anomaly detection problems.",
"title": ""
},
{
"docid": "b216a38960c537d52d94adc8d50a43df",
"text": "BACKGROUND\nAutologous platelet-rich plasma has attracted attention in various medical fields recently, including orthopedic, plastic, and dental surgeries and dermatology for its wound healing ability. Further, it has been used clinically in mesotherapy for skin rejuvenation.\n\n\nOBJECTIVE\nIn this study, the effects of activated platelet-rich plasma (aPRP) and activated platelet-poor plasma (aPPP) have been investigated on the remodelling of the extracellular matrix, a process that requires activation of dermal fibroblasts, which is essential for rejuvenation of aged skin.\n\n\nMETHODS\nPlatelet-rich plasma (PRP) and platelet-poor plasma (PPP) were prepared using a double-spin method and then activated with thrombin and calcium chloride. The proliferative effects of aPRP and aPPP were measured by [(3)H]thymidine incorporation assay, and their effects on matrix protein synthesis were assessed by quantifying levels of procollagen type I carboxy-terminal peptide (PIP) by enzyme-linked immunosorbent assay (ELISA). The production of collagen and matrix metalloproteinases (MMP) was studied by Western blotting and reverse transcriptase-polymerase chain reaction.\n\n\nRESULTS\nPlatelet numbers in PRP increased to 9.4-fold over baseline values. aPRP and aPPP both stimulated cell proliferation, with peak proliferation occurring in cells grown in 5% aPRP. Levels of PIP were highest in cells grown in the presence of 5% aPRP. Additionally, aPRP and aPPP increased the expression of type I collagen, MMP-1 protein, and mRNA in human dermal fibroblasts.\n\n\nCONCLUSION\naPRP and aPPP promote tissue remodelling in aged skin and may be used as adjuvant treatment to lasers for skin rejuvenation in cosmetic dermatology.",
"title": ""
}
] |
scidocsrr
|
15b6fd9c2de98c7ccab4ec576e555f04
|
Rules and Ontology Based Data Access
|
[
{
"docid": "7ef20dc3eb5ec7aee75f41174c9fae12",
"text": "As the data and ontology layers of the Semantic Web stack have achieved a certain level of maturity in standard recommendations such as RDF and OWL, the current focus lies on two related aspects. On the one hand, the definition of a suitable query language for RDF, SPARQL, is close to recommendation status within the W3C. The establishment of the rules layer on top of the existing stack on the other hand marks the next step to be taken, where languages with their roots in Logic Programming and Deductive Databases are receiving considerable attention. The purpose of this paper is threefold. First, we discuss the formal semantics of SPARQLextending recent results in several ways. Second, weprovide translations from SPARQL to Datalog with negation as failure. Third, we propose some useful and easy to implement extensions of SPARQL, based on this translation. As it turns out, the combination serves for direct implementations of SPARQL on top of existing rules engines as well as a basis for more general rules and query languages on top of RDF.",
"title": ""
},
{
"docid": "dd7d17c7f36f74ea79832f9426dc936d",
"text": "In the context of the emerging Semantic Web and the quest for a common logical framework underpinning its architecture, the relation of rule-based languages such as Answer Set Programming (ASP) and ontology languages such as OWL has attracted a lot of attention in the literature over the past years. With its roots in Deductive Databases and Datalog though, ASP shares much more commonality with another Semantic Web standard, namely the query language SPARQL. In this paper, we take the recent approval of the SPARQL1.1 standard by the World Wide Web consortium (W3C) as an opportunity to introduce this standard to the Logic Programming community by providing a translation of SPARQL1.1 into ASP. In this translation, we explain and highlight peculiarities of the new W3C standard. Along the way, we survey existing literature on foundations of SPARQL and SPARQL1.1, and also combinations of SPARQL with ontology and rules languages. Thereby, apart from providing means to implement and support SPARQL natively within Logic Programming engines and particularly ASP engines, we hope to pave the way for further research on a common logical framework for Semantic Web languages, including query languages, from an ASP point of view. 1Vienna University of Economics and Business (WU Wien), Welthandelsplatz 1, 1020 Vienna, Austria E-mail: axel.polleres@wu.ac.at 2Institute for Information Systems 184/2, Technische Universität Wien, Favoritenstrasse 9-11, 1040 Vienna, Austria. E-mail: wallner@dbai.tuwien.ac.at A journal version of this article has been published in JANCL. Please cite as: A. Polleres and J.P. Wallner. On the relation between SPARQL1.1 and Answer Set Programming. Journal of Applied Non-Classical Logics (JANCL), 23(1-2):159-212, 2013. Special issue on Equilibrium Logic and Answer Set Programming. Copyright c © 2014 by the authors TECHNICAL REPORT DBAI-TR-2013-84 2",
"title": ""
}
] |
[
{
"docid": "ad7852de8e1f80c68417c459d8a12e15",
"text": "Quantum machine learning is expected to be one of the first potential general-purpose applications of near-term quantum devices. A major recent breakthrough in classical machine learning is the notion of generative adversarial training, where the gradients of a discriminator model are used to train a separate generative model. In this work and a companion paper, we extend adversarial training to the quantum domain and show how to construct generative adversarial networks using quantum circuits. Furthermore, we also show how to compute gradients – a key element in generative adversarial network training – using another quantum circuit. We give an example of a simple practical circuit ansatz to parametrize quantum machine learning models and perform a simple numerical experiment to demonstrate that quantum generative adversarial networks can be trained successfully.",
"title": ""
},
{
"docid": "6c713b3d6c68830915f15bc5b327b301",
"text": "Journal of Cutaneous and Aesthetic Surgery ¦ Volume 10 ¦ Issue 2 ¦ April‐June 2017 118 2003;9:CS1‐4. 4. Abrahamson TG, Davis DA. Angiolymphoid hyperplasia with eosinophilia responsive to pulsed dye laser. J Am Acad Dermatol 2003;49:S195‐6. 5. Kaur T, Sandhu K, Gupta S, Kanwar AJ, Kumar B. Treatment of angiolymphoid hyperplasia with eosinophilia with the carbon dioxide laser. J Dermatolog Treat 2004;15:328‐30. 6. Akdeniz N, Kösem M, Calka O, Bilgili SG, Metin A, Gelincik I. Intralesional bleomycin for angiolymphoid hyperplasia. Arch Dermatol 2007;143:841‐4.",
"title": ""
},
{
"docid": "b18ecc94c1f42567b181c49090b03d8a",
"text": "We propose a novel approach for inferring the individualized causal effects of a treatment (intervention) from observational data. Our approach conceptualizes causal inference as a multitask learning problem; we model a subject’s potential outcomes using a deep multitask network with a set of shared layers among the factual and counterfactual outcomes, and a set of outcome-specific layers. The impact of selection bias in the observational data is alleviated via a propensity-dropout regularization scheme, in which the network is thinned for every training example via a dropout probability that depends on the associated propensity score. The network is trained in alternating phases, where in each phase we use the training examples of one of the two potential outcomes (treated and control populations) to update the weights of the shared layers and the respective outcome-specific layers. Experiments conducted on data based on a real-world observational study show that our algorithm outperforms the state-of-the-art.",
"title": ""
},
{
"docid": "06abf2a7c6d0c25cfe54422268300e58",
"text": "The purpose of the present study is to provide useful data that could be applied to various types of periodontal plastic surgery by detailing the topography of the greater palatine artery (GPA), looking in particular at its depth from the palatal masticatory mucosa (PMM) and conducting a morphometric analysis of the palatal vault. Forty-three hemisectioned hard palates from embalmed Korean adult cadavers were used in this study. The morphometry of the palatal vault was analyzed, and then the specimens were decalcified and sectioned. Six parameters were measured using an image-analysis system after performing a standard calibration. In one specimen, the PMM was separated from the hard palate and subjected to a partial Sihler's staining technique, allowing the branching pattern of the GPA to be observed in a new method. The distances between the GPA and the gingival margin, and between the GPA and the cementoenamel junction were greatest at the maxillary second premolar. The shortest vertical distance between the GPA and the PMM decreased gradually as it proceeded anteriorly. The GPA was located deeper in the high-vault group than in the low-vault group. The premolar region should be recommended as the optimal donor site for tissue grafting, and in particular the second premolar region. The maximum size and thickness of tissue that can be harvested from the region were 9.3 mm and 4.0 mm, respectively.",
"title": ""
},
{
"docid": "b3bcf4d5962cd2995d21cfbbe9767b9d",
"text": "In computer, Cloud of Things (CoT) it is a Technique came by integrated two concepts Internet of Things(IoT) and Cloud Computing. Therefore, Cloud of Things is a currently a wide area of research and development. This paper discussed the concept of Cloud of Things (CoT) in detail and explores the challenges, open research issues, and various tools that can be used with Cloud of Things (CoT). As a result, this paper gives a knowledge and platform to explore Cloud of Things (CoT), and it gives new ideas for researchers to find the open research issues and solution to challenges.",
"title": ""
},
{
"docid": "ae3770d75796453f83329b676fa884ba",
"text": "This paper presents a real-time face detector, named Single Shot Scale-invariant Face Detector (S3FD), which performs superiorly on various scales of faces with a single deep neural network, especially for small faces. Specifically, we try to solve the common problem that anchorbased detectors deteriorate dramatically as the objects become smaller. We make contributions in the following three aspects: 1) proposing a scale-equitable face detection framework to handle different scales of faces well. We tile anchors on a wide range of layers to ensure that all scales of faces have enough features for detection. Besides, we design anchor scales based on the effective receptive field and a proposed equal proportion interval principle; 2) improving the recall rate of small faces by a scale compensation anchor matching strategy; 3) reducing the false positive rate of small faces via a max-out background label. As a consequence, our method achieves state-of-theart detection performance on all the common face detection benchmarks, including the AFW, PASCAL face, FDDB and WIDER FACE datasets, and can run at 36 FPS on a Nvidia Titan X (Pascal) for VGA-resolution images.",
"title": ""
},
{
"docid": "a212c06f01d746779da52c6ead7e185c",
"text": "Existing visual tracking methods usually localize the object with a bounding box, in which the foreground object trackers/detectors are often disturbed by the introduced background information. To handle this problem, we aim to learn a more robust object representation for visual tracking. In particular, the tracked object is represented with a graph structure (i.e., a set of non-overlapping image patches), in which the weight of each node (patch) indicates how likely it belongs to the foreground and edges are also weighed for indicating the appearance compatibility of two neighboring nodes. This graph is dynamically learnt (i.e., the nodes and edges received weights) and applied in object tracking and model updating. We constrain the graph learning from two aspects: i) the global low-rank structure over all nodes and ii) the local sparseness of node neighbors. During the tracking process, our method performs the following steps at each frame. First, the graph is initialized by assigning either 1 or 0 to the weights of some image patches according to the predicted bounding box. Second, the graph is optimized through designing a new ALM (Augmented Lagrange Multiplier) based algorithm. Third, the object feature representation is updated by imposing the weights of patches on the extracted image features. The object location is finally predicted by adopting the Struck tracker (Hare, Saffari, and Torr 2011). Extensive experiments show that our approach outperforms the state-of-the-art tracking methods on two standard benchmarks, i.e., OTB100 and NUS-PRO.",
"title": ""
},
{
"docid": "1611448ce90278a329b1afe8fe598ba9",
"text": "This paper is devoted to some mathematical considerations on the geometrical ideas contained in PNK, CN and, successively, in PR. Mainly, we will emphasize that these ideas give very promising suggestions for a modern point-free foundation of geometry. 1. Introduction Recently the researches in point-free geometry received an increasing interest in different areas. As an example, we can quote computability theory, lattice theory, computer science. Now, the basic ideas of point-free geometry were firstly formulated by A. N. Whitehead in PNK and CN where the extension relation between events is proposed as a primitive. The points, the lines and all the \" abstract \" geometrical entities are defined by suitable abstraction processes. As a matter of fact, as observed in Casati and Varzi 1997, the approach proposed in these books is a basis for a \"mereology\" (i.e. an investigation about the part-whole relation) rather than for a point-free geometry. Indeed , the inclusion relation is set-theoretical and not topological in nature and this generates several difficulties. As an example, the definition of point is unsatisfactory (see Section 6). So, it is not surprising that some years later the publication of PNK and CN, Whitehead in PR proposed a different approach in which the primitive notion is the one of connection relation. This idea was suggested in de Laguna 1922. The aim of this paper is not to give a precise account of geometrical ideas contained in these books but only to emphasize their mathematical potentialities. So, we translate the analysis of Whitehead into suitable first order theories and we examine these theories from a logical point of view. Also, we argue that multi-valued logic is a promising tool to reformulate the approach in PNK and CN.",
"title": ""
},
{
"docid": "fdc01b87195272f8dec8ed32dfe8e664",
"text": "Future search engines are expected to deliver pro and con arguments in response to queries on controversial topics. While argument mining is now in the focus of research, the question of how to retrieve the relevant arguments remains open. This paper proposes a radical model to assess relevance objectively at web scale: the relevance of an argument’s conclusion is decided by what other arguments reuse it as a premise. We build an argument graph for this model that we analyze with a recursive weighting scheme, adapting key ideas of PageRank. In experiments on a large ground-truth argument graph, the resulting relevance scores correlate with human average judgments. We outline what natural language challenges must be faced at web scale in order to stepwise bring argument relevance to web search engines.",
"title": ""
},
{
"docid": "53a67740e444b5951bc6ab257236996e",
"text": "Although human perception appears to be automatic and unconscious, complex sensory mechanisms exist that form the preattentive component of understanding and lead to awareness. Considerable research has been carried out into these preattentive mechanisms and computational models have been developed for similar problems in the fields of computer vision and speech analysis. The focus here is to explore aural and visual information in video streams for modeling attention and detecting salient events. The separate aural and visual modules may convey explicit, complementary or mutually exclusive information around the detected audiovisual events. Based on recent studies on perceptual and computational attention modeling, we formulate measures of attention using features of saliency for the audiovisual stream. Audio saliency is captured by signal modulations and related multifrequency band features, extracted through nonlinear operators and energy tracking. Visual saliency is measured by means of a spatiotemporal attention model driven by various feature cues (intensity, color, motion). Features from both modules mapped to one-dimensional, time-varying saliency curves, from which statistics of salient segments can be extracted and important audio or visual events can be detected through adaptive, threshold-based mechanisms. Audio and video curves are integrated in a single attention curve, where events may be enhanced, suppressed or vanished. Salient events from the audiovisual curve are detected through geometrical features such as local extrema, sharp transitions and level sets. The potential of inter-module fusion and audiovisual event detection is demonstrated in applications such as video key-frame selection, video skimming and video annotation.",
"title": ""
},
{
"docid": "f0813fe6b6324e1056dc19a5259d9538",
"text": "Plant disease detection is emerging field in India as agriculture is important sector in Economy and Social life. Earlier unscientific methods were in existence. Gradually with technical and scientific advancement, more reliable methods through lowest turnaround time are developed and proposed for early detection of plant disease. Such techniques are widely used and proved beneficial to farmers as detection of plant disease is possible with minimal time span and corrective actions are carried out at appropriate time. In this paper, we studied and evaluated existing techniques for detection of plant diseases to get clear outlook about the techniques and methodologies followed. The detection of plant disease is significantly based on type of family plants and same is carried out in two phases as segmentation and classification. Here, we have discussed existing segmentation method along with classifiers for detection of diseases in Monocot and Dicot family plant.",
"title": ""
},
{
"docid": "4c607b142149504c2edad475d5613b86",
"text": "This study uses a metatriangulation approach to explore the relationships between power and information technology impacts, development or deployment, and management or use in a sample Jasperson et al./Power & IT Research 398 MIS Quarterly Vol. 26 No. 4/December 2002 of 82 articles from 12 management and MIS journals published between 1980 and 1999. We explore the multiple paradigms underlying this research by applying two sets of lenses to examine the major findings from our sample. The technological imperative, organizational imperative , and emergent perspectives (Markus and Robey 1988) are used as one set of lenses to better understand researchers' views regarding the causal structure between IT and organizational power. A second set of lenses, which includes the rational, pluralist, interpretive, and radical perspectives (Bradshaw-Camball and Murray 1991), is used to focus on researchers' views of the role of power and different IT outcomes. We apply each lens separately to describe patterns emerging from the previous power and IT studies. In addition, we discuss the similarities and differences that occur when the two sets of lenses are simultaneously applied. We draw from this discussion to develop metaconjectures, (i.e., propositions that can be interpreted from multiple perspectives), and to suggest guidelines for studying power in future research.",
"title": ""
},
{
"docid": "497678769826087f81d2a7a00b0bbb79",
"text": "tRNAScan-SE is a tRNA detection program that is widely used for tRNA annotation; however, the false positive rate of tRNAScan-SE is unacceptable for large sequences. Here, we used a machine learning method to try to improve the tRNAScan-SE results. A new predictor, tRNA-Predict, was designed. We obtained real and pseudo-tRNA sequences as training data sets using tRNAScan-SE and constructed three different tRNA feature sets. We then set up an ensemble classifier, LibMutil, to predict tRNAs from the training data. The positive data set of 623 tRNA sequences was obtained from tRNAdb 2009 and the negative data set was the false positive tRNAs predicted by tRNAscan-SE. Our in silico experiments revealed a prediction accuracy rate of 95.1 % for tRNA-Predict using 10-fold cross-validation. tRNA-Predict was developed to distinguish functional tRNAs from pseudo-tRNAs rather than to predict tRNAs from a genome-wide scan. However, tRNA-Predict can work with the output of tRNAscan-SE, which is a genome-wide scanning method, to improve the tRNAscan-SE annotation results. The tRNA-Predict web server is accessible at http://datamining.xmu.edu.cn/∼gjs/tRNA-Predict.",
"title": ""
},
{
"docid": "cdf78bab8d93eda7ccbb41674d24b1a2",
"text": "OBJECTIVE\nThe U.S. Food and Drug Administration and Institute of Medicine are currently investigating front-of-package (FOP) food labelling systems to provide science-based guidance to the food industry. The present paper reviews the literature on FOP labelling and supermarket shelf-labelling systems published or under review by February 2011 to inform current investigations and identify areas of future research.\n\n\nDESIGN\nA structured search was undertaken of research studies on consumer use, understanding of, preference for, perception of and behaviours relating to FOP/shelf labelling published between January 2004 and February 2011.\n\n\nRESULTS\nTwenty-eight studies from a structured search met inclusion criteria. Reviewed studies examined consumer preferences, understanding and use of different labelling systems as well as label impact on purchasing patterns and industry product reformulation.\n\n\nCONCLUSIONS\nThe findings indicate that the Multiple Traffic Light system has most consistently helped consumers identify healthier products; however, additional research on different labelling systems' abilities to influence consumer behaviour is needed.",
"title": ""
},
{
"docid": "81c2fca06af30c27e74267dbccd84080",
"text": "Instability and variability of Deep Reinforcement Learning (DRL) algorithms tend to adversely affect their performance. Averaged-DQN is a simple extension to the DQN algorithm, based on averaging previously learned Q-values estimates, which leads to a more stable training procedure and improved performance by reducing approximation error variance in the target values. To understand the effect of the algorithm, we examine the source of value function estimation errors and provide an analytical comparison within a simplified model. We further present experiments on the Arcade Learning Environment benchmark that demonstrate significantly improved stability and performance due to the proposed extension.",
"title": ""
},
{
"docid": "800befb527094bc6169809c6765d5d15",
"text": "The problem of scheduling a weighted directed acyclic graph (DAG) to a set of homogeneous processors to minimize the completion time has been extensively studied. The NPcompleteness of the problem has instigated researchers to propose a myriad of heuristic algorithms. While these algorithms are individually reported to be efficient, it is not clear how effective they are and how well they compare against each other. A comprehensive performance evaluation and comparison of these algorithms entails addressing a number of difficult issues. One of the issues is that a large number of scheduling algorithms are based upon radically different assumptions, making their comparison on a unified basis a rather intricate task. Another issue is that there is no standard set of benchmarks that can be used to evaluate and compare these algorithms. Furthermore, most algorithms are evaluated using small problem sizes, and it is not clear how their performance scales with the problem size. In this paper, we first provide a taxonomy for classifying various algorithms into different categories according to their assumptions and functionalities. We then propose a set of benchmarks which are of diverse structures without being biased towards a particular scheduling technique and still allow variations in important parameters. We have evaluated 15 scheduling algorithms, and compared them using the proposed benchmarks. Based upon the design philosophies and principles behind these algorithms, we interpret the results and discuss why some algorithms perform better than the others.",
"title": ""
},
{
"docid": "354500ae7e1ad1c6fd09438b26e70cb0",
"text": "Dietary exposures can have consequences for health years or decades later and this raises questions about the mechanisms through which such exposures are 'remembered' and how they result in altered disease risk. There is growing evidence that epigenetic mechanisms may mediate the effects of nutrition and may be causal for the development of common complex (or chronic) diseases. Epigenetics encompasses changes to marks on the genome (and associated cellular machinery) that are copied from one cell generation to the next, which may alter gene expression, but which do not involve changes in the primary DNA sequence. These include three distinct, but closely inter-acting, mechanisms including DNA methylation, histone modifications and non-coding microRNAs (miRNA) which, together, are responsible for regulating gene expression not only during cellular differentiation in embryonic and foetal development but also throughout the life-course. This review summarizes the growing evidence that numerous dietary factors, including micronutrients and non-nutrient dietary components such as genistein and polyphenols, can modify epigenetic marks. In some cases, for example, effects of altered dietary supply of methyl donors on DNA methylation, there are plausible explanations for the observed epigenetic changes, but to a large extent, the mechanisms responsible for diet-epigenome-health relationships remain to be discovered. In addition, relatively little is known about which epigenomic marks are most labile in response to dietary exposures. Given the plasticity of epigenetic marks and their responsiveness to dietary factors, there is potential for the development of epigenetic marks as biomarkers of health for use in intervention studies.",
"title": ""
},
{
"docid": "f59a7b518f5941cd42086dc2fe58fcea",
"text": "This paper contributes a novel algorithm for effective and computationally efficient multilabel classification in domains with large label sets L. The HOMER algorithm constructs a Hierarchy Of Multilabel classifiERs, each one dealing with a much smaller set of labels compared to L and a more balanced example distribution. This leads to improved predictive performance along with linear training and logarithmic testing complexities with respect to |L|. Label distribution from parent to children nodes is achieved via a new balanced clustering algorithm, called balanced k means.",
"title": ""
},
{
"docid": "2894570c1e8770874361943e17b13def",
"text": "OBJECTIVES:Previous studies have suggested an association between cytomegalovirus (CMV) infection and steroid-refractory inflammatory bowel disease. In this study, the use of CMV DNA load during acute flare-ups of ulcerative colitis (UC) to predict resistance to immunosuppressive therapy was evaluated in intestinal tissue.METHODS:Forty-two consecutive patients (sex ratio M/F: 0.9, mean age: 43.6 years) hospitalized for moderate to severe UC and treated with IV steroids were included prospectively. A colonoscopy was performed for each patient at inclusion; colonic biopsy samples of the pathological tissue, and if possible, of the healthy mucosa, were tested for histological analysis and determination of CMV DNA load by real-time polymerase chain reaction assay. Patients were treated as recommended by the current guidelines.RESULTS:Sixteen patients were found positive for CMV DNA in inflamed intestinal tissue but negative in endoscopically healthy tissue; all of these patients were positive for anti-CMV IgG, three exhibited CMV DNA in blood, and none was positive for intestinal CMV antigen by immunohistochemistry detection. In the 26 remaining patients, no stigmata of recent CMV infection were recorded by any technique. By multivariate analysis, the only factor associated with CMV DNA in inflammatory tissue was the resistance to steroids or to three lines of treatment (risk ratio: 4.7; 95% confidence interval: 1.2–22.5). A CMV DNA load above 250 copies/mg in tissue was predictive of resistance to three successive regimens (likelihood ratio+=4.33; area under the receiver-operating characteristic curve=0.85). Eight UC patients with CMV DNA in inflamed tissue and therapeutic failure received ganciclovir; a clinical remission was observed in seven cases, with a sustained response in five of them.CONCLUSIONS:The CMV DNA load determined in inflamed intestinal tissue predicts resistance to steroid treatment and to three drug regimens in UC. Initiation of an early antiviral treatment in these patients might delay the occurrence of resistance to current treatments.",
"title": ""
},
{
"docid": "60d21d395c472eb36bdfd014c53d918a",
"text": "We introduce a fully differentiable approximation to higher-order inference for coreference resolution. Our approach uses the antecedent distribution from a span-ranking architecture as an attention mechanism to iteratively refine span representations. This enables the model to softly consider multiple hops in the predicted clusters. To alleviate the computational cost of this iterative process, we introduce a coarse-to-fine approach that incorporates a less accurate but more efficient bilinear factor, enabling more aggressive pruning without hurting accuracy. Compared to the existing state-of-the-art span-ranking approach, our model significantly improves accuracy on the English OntoNotes benchmark, while being far more computationally efficient.",
"title": ""
}
] |
scidocsrr
|
ded77790e98f59b9b4512517625e0edf
|
Evolving Intelligent Mario Controller by Reinforcement Learning
|
[
{
"docid": "47c004e7bc150685dafefcbb79f25657",
"text": "REALM is a rule-based evolutionary computation agent for playing a modified version of Super Mario Bros. according to the rules stipulated in the Mario AI Competition held in the 2010 IEEE Symposium on Computational Intelligence and Games. Two alternate representations for the REALM rule sets are reported here, in both hand-coded and learned versions. Results indicate that the second version, with an abstracted action set, tends to perform better overall, but the first version shows a steeper learning curve. In both cases, learning quickly surpasses the hand-coded rule sets.",
"title": ""
}
] |
[
{
"docid": "81f504c4e378d0952231565d3ba4c555",
"text": "The alignment problem—establishing links between corresponding phrases in two related sentences—is as important in natural language inference (NLI) as it is in machine translation (MT). But the tools and techniques of MT alignment do not readily transfer to NLI, where one cannot assume semantic equivalence, and for which large volumes of bitext are lacking. We present a new NLI aligner, the MANLI system, designed to address these challenges. It uses a phrase-based alignment representation, exploits external lexical resources, and capitalizes on a new set of supervised training data. We compare the performance of MANLI to existing NLI and MT aligners on an NLI alignment task over the well-known Recognizing Textual Entailment data. We show that MANLI significantly outperforms existing aligners, achieving gains of 6.2% in F1 over a representative NLI aligner and 10.5% over GIZA++.",
"title": ""
},
{
"docid": "f720554ba9cff8bec781f4ad2ec538aa",
"text": "English. Hate speech is prevalent in social media platforms. Systems that can automatically detect offensive content are of great value to assist human curators with removal of hateful language. In this paper, we present machine learning models developed at UW Tacoma for detection of misogyny, i.e. hate speech against women, in English tweets, and the results obtained with these models in the shared task for Automatic Misogyny Identification (AMI) at EVALITA2018. Italiano. Commenti offensivi nei confronti di persone con diversa orientazione sessuale o provenienza sociale sono oggigiorno prevalenti nelle piattaforme di social media. A tale fine, sistemi automatici in grado di rilevare contenuti offensivi nei confronti di alcuni gruppi sociali sono importanti per facilitare il lavoro dei moderatori di queste piattaforme a rimuovere ogni commento offensivo usato nei social media. In questo articolo, vi presentiamo sia dei modelli di apprendimento automatico sviluppati all’Università di Washington in Tacoma per il rilevamento della misoginia, ovvero discorsi offensivi usati nei tweet in lingua inglese contro le donne, sia i risultati ottenuti con questi modelli nel processo per l’identificazione automatica della misoginia in EVALITA2018.",
"title": ""
},
{
"docid": "70becc434885af8f59ad39a3cedc8b6d",
"text": "The trajectory of the heel and toe during the swing phase of human gait were analyzed on young adults. The magnitude and variability of minimum toe clearance and heel-contact velocity were documented on 10 repeat walking trials on 11 subjects. The energetics that controlled step length resulted from a separate study of 55 walking trials conducted on subjects walking at slow, natural, and fast cadences. A sensitivity analysis of the toe clearance and heel-contact velocity measures revealed the individual changes at each joint in the link-segment chain that could be responsible for changes in those measures. Toe clearance was very small (1.29 cm) and had low variability (about 4 mm). Heel-contact velocity was negligible vertically and small (0.87 m/s) horizontally. Six joints in the link-segment chain could, with very small changes (+/- 0.86 degrees - +/- 3.3 degrees), independently account for toe clearance variability. Only one muscle group in the chain (swing-phase hamstring muscles) could be responsible for altering the heel-contact velocity prior to heel contact. Four mechanical power phases in gait (ankle push-off, hip pull-off, knee extensor eccentric power at push-off, and knee flexor eccentric power prior to heel contact) could alter step length and cadence. These analyses demonstrate that the safe trajectory of the foot during swing is a precise endpoint control task that is under the multisegment motor control of both the stance and swing limbs.",
"title": ""
},
{
"docid": "2915218bc86d049d6b8e3a844a9768fd",
"text": "Power and energy systems are on the verge of a profound change where Smart Grid solutions will enhance their efficiency and flexibility. Advanced ICT and control systems are key elements of the Smart Grid to enable efficient integration of a high amount of renewable energy resources which in turn are seen as key elements of the future energy system. The corresponding distribution grids have to become more flexible and adaptable as the current ones in order to cope with the upcoming high share of energy from distributed renewable sources. The complexity of Smart Grids requires to consider and imply many components when a new application is designed. However, a holistic ICT-based approach for modelling, designing and validating Smart Grid developments is missing today. The goal of this paper therefore is to discuss an advanced design approach and the corresponding information model, covering system, application, control and communication aspects of Smart Grids.",
"title": ""
},
{
"docid": "717ea3390ffe3f3132d4e2230e645ee5",
"text": "Much of what is known about physiological systems has been learned using linear system theory. However, many biomedical signals are apparently random or aperiodic in time. Traditionally, the randomness in biological signals has been ascribed to noise or interactions between very large numbers of constituent components. One of the most important mathematical discoveries of the past few decades is that random behavior can arise in deterministic nonlinear systems with just a few degrees of freedom. This discovery gives new hope to providing simple mathematical models for analyzing, and ultimately controlling, physiological systems. The purpose of this chapter is to provide a brief pedagogic survey of the main techniques used in nonlinear time series analysis and to provide a MATLAB tool box for their implementation. Mathematical reviews of techniques in nonlinear modeling and forecasting can be found in Refs. 1-5. Biomedical signals that have been analyzed using these techniques include heart rate [6-8], nerve activity [9], renal flow [10], arterial pressure [11], electroencephalogram [12], and respiratory waveforms [13]. Section 2 provides a brief overview of dynamical systems theory including phase space portraits, Poincare surfaces of section, attractors, chaos, Lyapunov exponents, and fractal dimensions. The forced Duffing-Van der Pol oscillator (a ubiquitous model in engineering problems) is investigated as an illustrative example. Section 3 outlines the theoretical tools for time series analysis using dynamical systems theory. Reliability checks based on forecasting and surrogate data are also described. The time series methods are illustrated using data from the time evolution of one of the dynamical variables of the forced Duffing-Van der Pol oscillator. Section 4 concludes with a discussion of possible future directions for applications of nonlinear time series analysis in biomedical processes.",
"title": ""
},
{
"docid": "bff6e87727db20562091a6c8c08f3667",
"text": "Many trust-aware recommender systems have explored the value of explicit trust, which is specified by users with binary values and simply treated as a concept with a single aspect. However, in social science, trust is known as a complex term with multiple facets, which have not been well exploited in prior recommender systems. In this paper, we attempt to address this issue by proposing a (dis)trust framework with considerations of both interpersonal and impersonal aspects of trust and distrust. Specifically, four interpersonal aspects (benevolence, competence, integrity and predictability) are computationally modelled based on users’ historic ratings, while impersonal aspects are formulated from the perspective of user connections in trust networks. Two logistic regression models are developed and trained by accommodating these factors, and then applied to predict continuous values of users’ trust and distrust, respectively. Trust information is further refined by corresponding predicted distrust information. The experimental results on real-world data sets demonstrate the effectiveness of our proposed model in further improving the performance of existing state-of-the-art trust-aware recommendation approaches.",
"title": ""
},
{
"docid": "acecf40720fd293972555918878b805e",
"text": "This article outlines a number of important research issues in human-computer interaction in the e-commerce environment. It highlights some of the challenges faced by users in browsing Web sites and conducting searches for information, and suggests several areas of research for promoting ease of navigation and search. Also, it discusses the importance of trust in the online environment, describing some of the antecedents and consequences of trust, and provides guidelines for integrating trust into Web site design. The issues discussed in this article are presented under three broad categories of human-computer interaction – Web usability, interface design, and trust – and are intended to highlight what we believe are worthwhile areas for future research in e-commerce.",
"title": ""
},
{
"docid": "54c6038cf2cfe9856c15fd6514e6ad9d",
"text": "In this paper we examine an alternative interface for phonetic search, namely query-by-example, that avoids OOV issues associated with both standard word-based and phonetic search methods. We develop three methods that compare query lattices derived from example audio against a standard ngrambased phonetic index and we analyze factors affecting the performance of these systems. We show that the best systems under this paradigm are able to achieve 77% precision when retrieving utterances from conversational telephone speech and returning 10 results from a single query (performance that is better than a similar dictionary-based approach) suggesting significant utility for applications requiring high precision. We also show that these systems can be further improved using relevance feedback: By incorporating four additional queries the precision of the best system can be improved by 13.7% relative. Our systems perform well despite high phone recognition error rates (> 40%) and make use of no pronunciation or letter-to-sound resources.",
"title": ""
},
{
"docid": "6570f9b4f8db85f40a99fb1911aa4967",
"text": "Honey bees have played a major role in the history and development of humankind, in particular for nutrition and agriculture. The most important role of the western honey bee (Apis mellifera) is that of pollination. A large amount of crops consumed throughout the world today are pollinated by the activity of the honey bee. It is estimated that the total value of these crops stands at 155 billion euro annually. The goal of the work outlined in this paper was to use wireless sensor network technology to monitor a colony within the beehive with the aim of collecting image and audio data. These data allows the beekeeper to obtain a much more comprehensive view of the in-hive conditions, an indication of flight direction, as well as monitoring the hive outside of the traditional beekeeping times, i.e. during the night, poor weather, and winter months. This paper outlines the design of a fully autonomous beehive monitoring system which provided image and sound monitoring of the internal chambers of the hive, as well as a warning system for emergency events such as possible piping, dramatically increased hive activity, or physical damage to the hive. The final design included three wireless nodes: a digital infrared camera with processing capabilities for collecting imagery of the hive interior; an external thermal imaging camera node for monitoring the colony status and activity, and an accelerometer and a microphone connected to an off the shelf microcontroller node for processing. The system allows complex analysis and sensor fusion. Some scenarios based on sound processing, image collection, and accelerometers are presented. Power management was implemented which allowed the system to achieve energy neutrality in an outdoor deployment with a 525 × 345 mm solar panel.",
"title": ""
},
{
"docid": "2ff15076533d1065209e0e62776eaa69",
"text": "In less than a decade, Cubesats have evolved from purely educational tools to a standard platform for technology demonstration and scientific instrumentation. The use of COTS (Commercial-Off-The-Shelf) components and the ongoing miniaturization of several technologies have already led to scattered instances of missions with promising scientific value. Furthermore, advantages in terms of development cost and development time with respect to larger satellites, as well as the possibility of launching several dozens of Cubesats with a single rocket launch, have brought forth the potential for radically new mission architectures consisting of very large constellations or clusters of Cubesats. These architectures promise to combine the temporal resolution of GEO missions with the spatial resolution of LEO missions, thus breaking a traditional tradeoff in Earth observation mission design. This paper assesses the current capabilities of Cubesats with respect to potential employment in Earth observation missions. A thorough review of Cubesat bus technology capabilities is performed, identifying potential limitations and their implications on 17 different Earth observation payload technologies. These results are matched to an exhaustive review of scientific requirements in the field of Earth observation, assessing the possibilities of Cubesats to cope with the requirements set for each one of 21 measurement categories. Based on this review, several Earth observation measurements are identified that can potentially be compatible with the current state-of-the-art of Cubesat technology although some of them have actually never been addressed by any Cubesat mission. Simultaneously, other measurements are identified which are unlikely to be performed by Cubesats in the next few years due to insuperable constraints. Ultimately, this paper is intended to supply a box of ideas for universities to design future Cubesat missions with high",
"title": ""
},
{
"docid": "1fa056e87c10811b38277d161c81c2ac",
"text": "In this study, six kinds of the drivetrain systems of electric motor drives for EVs are discussed. Furthermore, the requirements of EVs on electric motor drives are presented. The comparative investigation on the efficiency, weight, cost, cooling, maximum speed, and fault-tolerance, safety, and reliability is carried out for switched reluctance motor, induction motor, permanent magnet blushless DC motor, and brushed DC motor drives, in order to find most appropriate electric motor drives for electric vehicle applications. The study shows that switched reluctance motor drives are the prior choice for electric vehicles.",
"title": ""
},
{
"docid": "b24e5a512306f24568f3e21af08a1faf",
"text": "We propose an object detection method that improves the accuracy of the conventional SSD (Single Shot Multibox Detector), which is one of the top object detection algorithms in both aspects of accuracy and speed. The performance of a deep network is known to be improved as the number of feature maps increases. However, it is difficult to improve the performance by simply raising the number of feature maps. In this paper, we propose and analyze how to use feature maps effectively to improve the performance of the conventional SSD. The enhanced performance was obtained by changing the structure close to the classifier network, rather than growing layers close to the input data, e.g., by replacing VGGNet with ResNet. The proposed network is suitable for sharing the weights in the classifier networks, by which property, the training can be faster with better generalization power. For the Pascal VOC 2007 test set trained with VOC 2007 and VOC 2012 training sets, the proposed network with the input size of 300×300 achieved 78.5% mAP (mean average precision) at the speed of 35.0 FPS (frame per second), while the network with a 512×512 sized input achieved 80.8% mAP at 16.6 FPS using Nvidia Titan X GPU. The proposed network shows state-of-the-art mAP, which is better than those of the conventional SSD, YOLO, Faster-RCNN and RFCN. Also, it is faster than Faster-RCNN and RFCN.",
"title": ""
},
{
"docid": "f80dedfb0d0f7e5ba068e582517ac6f8",
"text": "We present a physically-based approach to grasping and manipulation of virtual objects that produces visually realistic results, addresses the problem of visual interpenetration of hand and object models, and performs force rendering for force-feedback gloves in a single framework. Our approach couples tracked hand configuration to a simulation-controlled articulated hand model using a system of linear and torsional spring-dampers. We discuss an implementation of our approach that uses a widely-available simulation tool for collision detection and response. We illustrate the resulting behavior of the virtual hand model and of grasped objects, and we show that the simulation rate is sufficient for control of current force-feedback glove designs. We also present a prototype of a system we are developing to support natural whole-hand interactions in a desktop-sized workspace.",
"title": ""
},
{
"docid": "b55d5967005d3b59063ffc4fd7eeb59a",
"text": "In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss and not on finite-sum minimization, which is typically a much harder problem. While in the analysis we constrain ourselves to quadratic loss, the overall objective is not necessarily strongly convex.",
"title": ""
},
{
"docid": "805ff3489d9bc145a0a8b91ce58ce3f9",
"text": "The present experiment was designed to test the theory that psychological procedures achieve changes in behavior by altering the level and strength of self-efficacy. In this formulation, perceived self-efficacy. In this formulation, perceived self-efficacy influences level of performance by enhancing intensity and persistence of effort. Adult phobics were administered treatments based upon either performance mastery experiences, vicarious experiences., or they received no treatment. Their efficacy expectations and approach behavior toward threats differing on a similarity dimension were measured before and after treatment. In accord with our prediction, the mastery-based treatment produced higher, stronger, and more generalized expectations of personal efficacy than did the treatment relying solely upon vicarious experiences. Results of a microanalysis further confirm the hypothesized relationship between self-efficacy and behavioral change. Self-efficacy was a uniformly accurate predictor of performance on tasks of varying difficulty with different threats regardless of whether the changes in self-efficacy were produced through enactive mastery or by vicarious experience alone.",
"title": ""
},
{
"docid": "fd5a586adf75dfc33171e077ecd039bb",
"text": "An overview is presented of the medical image processing literature on mutual-information-based registration. The aim of the survey is threefold: an introduction for those new to the field, an overview for those working in the field, and a reference for those searching for literature on a specific application. Methods are classified according to the different aspects of mutual-information-based registration. The main division is in aspects of the methodology and of the application. The part on methodology describes choices made on facets such as preprocessing of images, gray value interpolation, optimization, adaptations to the mutual information measure, and different types of geometrical transformations. The part on applications is a reference of the literature available on different modalities, on interpatient registration and on different anatomical objects. Comparison studies including mutual information are also considered. The paper starts with a description of entropy and mutual information and it closes with a discussion on past achievements and some future challenges.",
"title": ""
},
{
"docid": "474c4531ff58348d001320b824d626d6",
"text": "As it becomes ever more pervasively engaged in data driven commerce, a modern enterprise becomes increasingly dependent upon reliable and high speed transaction services. At the same time it aspires to capitalize upon large inflows of information to draw timely business insights and improve business results. These two imperatives are frequently in conflict because of the widely divergent strategies that must be pursued: the need to bolster on-line transactional processing generally drives a business towards a small cluster of high-end servers running a mature, ACID compliant, SQL relational database, while high throughput analytics on massive and growing volumes of data favor the selection of very large clusters running non-traditional (NoSQL/NewSQL) databases that employ softer consistency protocols for performance and availability. This paper describes an approach in which the two imperatives are addressed by blending the two types (scale-up and scale-out) of data processing. It breaks down data growth that enterprises experience into three classes-Chronological, Horizontal, and Vertical, and picks out different approaches for blending SQL and NewSQL platforms for each class. To simplify application logic that must comprehend both types of data platforms, the paper describes two new capabilities: (a) a data integrator to quickly sift out updates that happen in an RDBMS and funnel them into a NewSQL database, and (b) extensions to the Hibernate-OGM framework that reduce the programming sophistication required for integrating HBase and Hive back ends with application logic designed for relational front ends. Finally the paper details several instances in which these approaches have been applied in real-world, at a number of software vendors with whom the authors have collaborated on design, implementation and deployment of blended solutions.",
"title": ""
},
{
"docid": "eb2663865d0d7312641e0748978b238c",
"text": "Recurrent Neural Networks (RNNs) are powerful models for sequential data that have the potential to learn long-term dependencies. However, they are computationally expensive to train and difficult to parallelize. Recent work has shown that normalizing intermediate representations of neural networks can significantly improve convergence rates in feed-forward neural networks [1]. In particular, batch normalization, which uses mini-batch statistics to standardize features, was shown to significantly reduce training time. In this paper, we investigate how batch normalization can be applied to RNNs. We show for both a speech recognition task and language modeling that the way we apply batch normalization leads to a faster convergence of the training criterion but doesn't seem to improve the generalization performance.",
"title": ""
},
{
"docid": "3171893b6863e777141160c65f1b9616",
"text": "This paper presents a method for face recognition across variations in pose, ranging from frontal to profile views, and across a wide range of illuminations, including cast shadows and specular reflections. To account for these variations, the algorithm simulates the process of image formation in 3D space, using computer graphics, and it estimates 3D shape and texture of faces from single images. The estimate is achieved by fitting a statistical, morphable model of 3D faces to images. The model is learned from a set of textured 3D scans of heads. We describe the construction of the morphable model, an algorithm to fit the model to images, and a framework for face identification. In this framework, faces are represented by model parameters for 3D shape and texture. We present results obtained with 4,488 images from the publicly available CMU-PIE database and 1,940 images from the FERET database.",
"title": ""
},
{
"docid": "a8c7f588e4eb45e4a9be13c09abbf3eb",
"text": "In this paper, a novel planar bandpass filter is proposed, designed, and implemented with a hybrid structure of substrate integrated waveguide (SIW) and coplanar waveguide (CPW), which has the advantages of good passband and stopband performance inherited from SIW and miniaturized size accompanying with the CPW. Additional design flexibility is introduced by the hybrid structure for efficiently controlling the mixed electric and magnetic coupling, and then planar bandpass filters with controllable transmission zeros and quasi-elliptic response can be achieved. Several prototypes with single and dual SIW cavities are fabricated. The measured results verified the performance of the proposed planar bandpass filters, such as low passband insertion loss, sharp roll-off characteristics at transition band, etc.",
"title": ""
}
] |
scidocsrr
|
a160aeded508c7c8df01bc8aa16d837d
|
Security analysis of the Internet of Things: A systematic literature review
|
[
{
"docid": "c381fdacde35fce7c8b869d512364a4f",
"text": "IoT (Internet of Things) diversifies the future Internet, and has drawn much attention. As more and more gadgets (i.e. Things) connected to the Internet, the huge amount of data exchanged has reached an unprecedented level. As sensitive and private information exchanged between things, privacy becomes a major concern. Among many important issues, scalability, transparency, and reliability are considered as new challenges that differentiate IoT from the conventional Internet. In this paper, we enumerate the IoT communication scenarios and investigate the threats to the large-scale, unreliable, pervasive computing environment. To cope with these new challenges, the conventional security architecture will be revisited. In particular, various authentication schemes will be evaluated to ensure the confidentiality and integrity of the exchanged data.",
"title": ""
},
{
"docid": "0d81a7af3c94e054841e12d4364b448c",
"text": "Internet of Things (IoT) is characterized by heterogeneous technologies, which concur to the provisioning of innovative services in various application domains. In this scenario, the satisfaction of security and privacy requirements plays a fundamental role. Such requirements include data confidentiality and authentication, access control within the IoT network, privacy and trust among users and things, and the enforcement of security and privacy policies. Traditional security countermeasures cannot be directly applied to IoT technologies due to the different standards and communication stacks involved. Moreover, the high number of interconnected devices arises scalability issues; therefore a flexible infrastructure is needed able to deal with security threats in such a dynamic environment. In this survey we present the main research challenges and the existing solutions in the field of IoT security, identifying open issues, and suggesting some hints for future research. During the last decade, Internet of Things (IoT) approached our lives silently and gradually, thanks to the availability of wireless communication systems (e.g., RFID, WiFi, 4G, IEEE 802.15.x), which have been increasingly employed as technology driver for crucial smart monitoring and control applications [1–3]. Nowadays, the concept of IoT is many-folded, it embraces many different technologies, services, and standards and it is widely perceived as the angular stone of the ICT market in the next ten years, at least [4–6]. From a logical viewpoint, an IoT system can be depicted as a collection of smart devices that interact on a collabo-rative basis to fulfill a common goal. At the technological floor, IoT deployments may adopt different processing and communication architectures, technologies, and design methodologies, based on their target. For instance, the same IoT system could leverage the capabilities of a wireless sensor network (WSN) that collects the environmental information in a given area and a set of smartphones on top of which monitoring applications run. In the middle, a standardized or proprietary middle-ware could be employed to ease the access to virtualized resources and services. The middleware, in turn, might be implemented using cloud technologies, centralized overlays , or peer to peer systems [7]. Of course, this high level of heterogeneity, coupled to the wide scale of IoT systems, is expected to magnify security threats of the current Internet, which is being increasingly used to let interact humans, machines, and robots, in any combination. More in details, traditional security countermeasures and privacy enforcement cannot be directly applied to IoT technologies due to …",
"title": ""
},
{
"docid": "62218093e4d3bf81b23512043fc7a013",
"text": "The Internet of things (IoT) refers to every object, which is connected over a network with the ability to transfer data. Users perceive this interaction and connection as useful in their daily life. However any improperly designed and configured technology will exposed to security threats. Therefore an ecosystem for IoT should be designed with security embedded in each layer of its ecosystem. This paper will discussed the security threats to IoT and then proposed an IoT Security Framework to mitigate it. Then IoT Security Framework will be used to develop a Secure IoT Sensor to Cloud Ecosystem.",
"title": ""
}
] |
[
{
"docid": "5475df204bca627e73b077594af29d47",
"text": "Multilayered artificial neural networks are becoming a pervasive tool in a host of application fields. At the heart of this deep learning revolution are familiar concepts from applied and computational mathematics; notably, in calculus, approximation theory, optimization and linear algebra. This article provides a very brief introduction to the basic ideas that underlie deep learning from an applied mathematics perspective. Our target audience includes postgraduate and final year undergraduate students in mathematics who are keen to learn about the area. The article may also be useful for instructors in mathematics who wish to enliven their classes with references to the application of deep learning techniques. We focus on three fundamental questions: what is a deep neural network? how is a network trained? what is the stochastic gradient method? We illustrate the ideas with a short MATLAB code that sets up and trains a network. We also show the use of state-of-the art software on a large scale image classification problem. We finish with references to the current literature.",
"title": ""
},
{
"docid": "96e56dcf3d38c8282b5fc5c8ae747a66",
"text": "The solid-state transformer (SST) was conceived as a replacement for the conventional power transformer, with both lower volume and weight. The smart transformer (ST) is an SST that provides ancillary services to the distribution and transmission grids to optimize their performance. Hence, the focus shifts from hardware advantages to functionalities. One of the most desired functionalities is the dc connectivity to enable a hybrid distribution system. For this reason, the ST architecture shall be composed of at least two power stages. The standard design procedure for this kind of system is to design each power stage for the maximum load. However, this design approach might limit additional services, like the reactive power compensation on the medium voltage (MV) side, and it does not consider the load regulation capability of the ST on the low voltage (LV) side. If the SST is tailored to the services that it shall provide, different stages will have different designs, so that the ST is no longer a mere application of the SST but an entirely new subject.",
"title": ""
},
{
"docid": "f90efcef80233888fb8c218d1e5365a6",
"text": "BACKGROUND\nMany low- and middle-income countries are undergoing a nutrition transition associated with rapid social and economic transitions. We explore the coexistence of over and under- nutrition at the neighborhood and household level, in an urban poor setting in Nairobi, Kenya.\n\n\nMETHODS\nData were collected in 2010 on a cohort of children aged under five years born between 2006 and 2010. Anthropometric measurements of the children and their mothers were taken. Additionally, dietary intake, physical activity, and anthropometric measurements were collected from a stratified random sample of adults aged 18 years and older through a separate cross-sectional study conducted between 2008 and 2009 in the same setting. Proportions of stunting, underweight, wasting and overweight/obesity were dettermined in children, while proportions of underweight and overweight/obesity were determined in adults.\n\n\nRESULTS\nOf the 3335 children included in the analyses with a total of 6750 visits, 46% (51% boys, 40% girls) were stunted, 11% (13% boys, 9% girls) were underweight, 2.5% (3% boys, 2% girls) were wasted, while 9% of boys and girls were overweight/obese respectively. Among their mothers, 7.5% were underweight while 32% were overweight/obese. A large proportion (43% and 37%%) of overweight and obese mothers respectively had stunted children. Among the 5190 adults included in the analyses, 9% (6% female, 11% male) were underweight, and 22% (35% female, 13% male) were overweight/obese.\n\n\nCONCLUSION\nThe findings confirm an existing double burden of malnutrition in this setting, characterized by a high prevalence of undernutrition particularly stunting early in life, with high levels of overweight/obesity in adulthood, particularly among women. In the context of a rapid increase in urban population, particularly in urban poor settings, this calls for urgent action. Multisectoral action may work best given the complex nature of prevailing circumstances in urban poor settings. Further research is needed to understand the pathways to this coexistence, and to test feasibility and effectiveness of context-specific interventions to curb associated health risks.",
"title": ""
},
{
"docid": "121d3572c5a60a66da6bb42d0f7bf1af",
"text": "The present study examined the relationships among grit, academic performance, perceived academic failure, and stress levels of Hong Kong associate degree students using path analysis. Three hundred and forty-five students from a community college in Hong Kong voluntarily participated in the study. They completed a questionnaire that measured their grit (operationalized as interest and perseverance) and stress levels. The students also provided their actual academic performance and evaluated their perception of their academic performance as a success or a failure. The results of the path analysis showed that interest and perseverance were negatively associated with stress, and only perceived academic failure was positively associated with stress. These findings suggest that psychological appraisal and resources are more important antecedents of stress than objective negative events. Therefore, fostering students' psychological resilience may alleviate the stress experienced by associate degree students or college students in general.",
"title": ""
},
{
"docid": "d9888d448df6329e9a9b4fb5c1385ee3",
"text": "Designing and developing a comfortable and convenient EEG system for daily usage that can provide reliable and robust EEG signal, encompasses a number of challenges. Among them, the most ambitious is the reduction of artifacts due to body movements. This paper studies the effect of head movement artifacts on the EEG signal and on the dry electrode-tissue impedance (ETI), monitored continuously using the imec's wireless EEG headset. We have shown that motion artifacts have huge impact on the EEG spectral content in the frequency range lower than 20Hz. Coherence and spectral analysis revealed that ETI is not capable of describing disturbances at very low frequencies (below 2Hz). Therefore, we devised a motion artifact reduction (MAR) method that uses a combination of a band-pass filtering and multi-channel adaptive filtering (AF), suitable for real-time MAR. This method was capable of substantially reducing artifacts produced by head movements.",
"title": ""
},
{
"docid": "f717225fa7518383e0db362e673b9af4",
"text": "The web has become the world's largest repository of knowledge. Web usage mining is the process of discovering knowledge from the interactions generated by the user in the form of access logs, cookies, and user sessions data. Web Mining consists of three different categories, namely Web Content Mining, Web Structure Mining, and Web Usage Mining (is the process of discovering knowledge from the interaction generated by the users in the form of access logs, browser logs, proxy-server logs, user session data, cookies). Accurate web log mining results and efficient online navigational pattern prediction are undeniably crucial for tuning up websites and consequently helping in visitors’ retention. Like any other data mining task, web log mining starts with data cleaning and preparation and it ends up discovering some hidden knowledge which cannot be extracted using conventional methods. After applying web mining on web sessions we will get navigation patterns which are important for web users such that appropriate actions can be adopted. Due to huge data in web, discovery of patterns and there analysis for further improvement in website becomes a real time necessity. The main focus of this paper is using of hybrid prediction engine to classify users on the basis of discovered patterns from web logs. Our proposed framework is to overcome the problem arise due to using of any single algorithm, we will give results based on comparison of two different algorithms like Longest Common Sequence (LCS) algorithm and Frequent Pattern (Growth) algorithm. Keywords— Web Usage Mining, Navigation Pattern, Frequent Pattern (Growth) Algorithm. ________________________________________________________________________________________________________",
"title": ""
},
{
"docid": "c15618df21bce45cbad6766326de3dbd",
"text": "The birth of intersexed infants, babies born with genitals that are neither clearly male nor clearly female, has been documented throughout recorded time.' In the late twentieth century, medical technology has advanced to allow scientists to determine chromosomal and hormonal gender, which is typically taken to be the real, natural, biological gender, usually referred to as \"sex.\"2 Nevertheless, physicians who handle the cases of intersexed infants consider several factors beside biological ones in determining, assigning, and announcing the gender of a particular infant. Indeed, biological factors are often preempted in their deliberations by such cultural factors as the \"correct\" length of the penis and capacity of the vagina.",
"title": ""
},
{
"docid": "0e883a8ff7ccf82f1849d801754a5363",
"text": "The purpose of this study was to investigate the structural relationships among students' expectation, perceived enjoyment, perceived usefulness, satisfaction, and continuance intention to use digital textbooks in middle school, based on Bhattacherjee's (2001) expectation-confirmation model. The subjects of this study were Korean middle school students taking an English class taught by a digital textbook in E middle school, Seoul. Data were collected via a paper-and-pencil-based questionnaire with 17 items; 137 responses were analyzed. The study found that (a) the more expectations of digital textbooks are satisfied, the more likely students are to perceive enjoyment and usefulness of digital textbooks, (b) satisfaction plays a mediating role in linking expectation, perceived enjoyment and usefulness, and continuance intention to use digital textbooks, (c) perceived usefulness and satisfaction have a direct and positive influence on continuance intention to use digital textbooks, and (d) perceived enjoyment has a non-significant influence on continuance intention to use digital textbooks with middle school students. Based on these findings, the implications and recommendations for future research are presented. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7f51bdc05c4a1bf610f77b629d8602f7",
"text": "Special Issue Anthony Vance Brigham Young University anthony@vance.name Bonnie Brinton Anderson Brigham Young University bonnie_anderson@byu.edu C. Brock Kirwan Brigham Young University kirwan@byu.edu Users’ perceptions of risks have important implications for information security because individual users’ actions can compromise entire systems. Therefore, there is a critical need to understand how users perceive and respond to information security risks. Previous research on perceptions of information security risk has chiefly relied on self-reported measures. Although these studies are valuable, risk perceptions are often associated with feelings—such as fear or doubt—that are difficult to measure accurately using survey instruments. Additionally, it is unclear how these self-reported measures map to actual security behavior. This paper contributes to this topic by demonstrating that risk-taking behavior is effectively predicted using electroencephalography (EEG) via event-related potentials (ERPs). Using the Iowa Gambling Task, a widely used technique shown to be correlated with real-world risky behaviors, we show that the differences in neural responses to positive and negative feedback strongly predict users’ information security behavior in a separate laboratory-based computing task. In addition, we compare the predictive validity of EEG measures to that of self-reported measures of information security risk perceptions. Our experiments show that self-reported measures are ineffective in predicting security behaviors under a condition in which information security is not salient. However, we show that, when security concerns become salient, self-reported measures do predict security behavior. Interestingly, EEG measures significantly predict behavior in both salient and non-salient conditions, which indicates that EEG measures are a robust predictor of security behavior.",
"title": ""
},
{
"docid": "5d377a17d3444d6137be582cbbc6c1db",
"text": "Next generation malware will by be characterized by the intense use of polymorphic and metamorphic techniques aimed at circumventing the current malware detectors, based on pattern matching. In order to deal with this new kind of threat novel techniques have to be devised for the realization of malware detectors. Recent papers started to address such issue and this paper represents a further contribution in such a field. More precisely in this paper we propose a strategy for the detection of malicious codes that adopt the most evolved self-mutation techniques; we also provide experimental data supporting the validity of",
"title": ""
},
{
"docid": "31f5c712760d1733acb0d7ffd3cec6ad",
"text": "Singular Spectrum Transform (SST) is a fundamental subspace analysis technique which has been widely adopted for solving change-point detection (CPD) problems in information security applications. However, the performance of a SST based CPD algorithm is limited to the lack of robustness to corrupted observations with large noises in practice. Based on the observation that large noises in practical time series are generally sparse, in this paper, we study a combination of Robust Principal Component Analysis (RPCA) and SST to obtain a robust CPD algorithm dealing with sparse large noises. The sparse large noises are to be eliminated from observation trajectory matrices by performing a low-rank matrix recovery procedure of RPCA. The noise-eliminated matrices are then used to extract SST subspaces for CPD. The effectiveness of the proposed method is demonstrated through experiments based on both synthetic and real-world datasets. Experimental results show that the proposed method outperforms the competing state-of-the-arts in terms of detection accuracy for time series with sparse large noises.",
"title": ""
},
{
"docid": "8dd2eaece835686b73683f263428ecfa",
"text": "Automating repetitive surgical subtasks such as suturing, cutting and debridement can reduce surgeon fatigue and procedure times and facilitate supervised tele-surgery. Programming is difficult because human tissue is deformable and highly specular. Using the da Vinci Research Kit (DVRK) robotic surgical assistant, we explore a “Learning By Observation” (LBO) approach where we identify, segment, and parameterize motion sequences and sensor conditions to build a finite state machine (FSM) for each subtask. The robot then executes the FSM repeatedly to tune parameters and if necessary update the FSM structure. We evaluate the approach on two surgical subtasks: debridement of 3D Viscoelastic Tissue Phantoms (3d-DVTP), in which small target fragments are removed from a 3D viscoelastic tissue phantom; and Pattern Cutting of 2D Orthotropic Tissue Phantoms (2d-PCOTP), a step in the standard Fundamentals of Laparoscopic Surgery training suite, in which a specified circular area must be cut from a sheet of orthotropic tissue phantom. We describe the approach and physical experiments with repeatability of 96% for 50 trials of the 3d-DVTP subtask and 70% for 20 trials of the 2d-PCOTP subtask. A video is available at: http://j.mp/Robot-Surgery-Video-Oct-2014.",
"title": ""
},
{
"docid": "fa9abc74d3126e0822e7e815e135e845",
"text": "Semantic interaction offers an intuitive communication mechanism between human users and complex statistical models. By shielding the users from manipulating model parameters, they focus instead on directly manipulating the spatialization, thus remaining in their cognitive zone. However, this technique is not inherently scalable past hundreds of text documents. To remedy this, we present the concept of multi-model semantic interaction, where semantic interactions can be used to steer multiple models at multiple levels of data scale, enabling users to tackle larger data problems. We also present an updated visualization pipeline model for generalized multi-model semantic interaction. To demonstrate multi-model semantic interaction, we introduce StarSPIRE, a visual text analytics prototype that transforms user interactions on documents into both small-scale display layout updates as well as large-scale relevancy-based document selection.",
"title": ""
},
{
"docid": "3d28f86795ddcd249657703cbedf87b1",
"text": "A 2.5V high precision BiCMOS bandgap reference with supply voltage range of 6V to 18V was proposed and realized. It could be applied to lots of Power Management ICs (Intergrated Circuits) due the high voltage. By introducing a preregulated current source, the PSRR (Power Supply Rejection Ratio) of 103dB at low frequency and the line regulation of 26.7μV/V was achieved under 15V supply voltage at ambient temperature of 27oC. Moreover, if the proper resistance trimming is implemented, the temperature coefficient could be reduced to less than 16.4ppm/oC. The start up time of the reference voltage could also be decreased with an additional bipolar and capacitor.",
"title": ""
},
{
"docid": "3ec63f1c1f74c5d11eaa9d360ceaac55",
"text": "High-level shape understanding and technique evaluation on large repositories of 3D shapes often benefit from additional information known about the shapes. One example of such information is the semantic segmentation of a shape into functional or meaningful parts. Generating accurate segmentations with meaningful segment boundaries is, however, a costly process, typically requiring large amounts of user time to achieve high quality results. In this paper we present an active learning framework for large dataset segmentation, which iteratively provides the user with new predictions by training new models based on already segmented shapes. Our proposed pipeline consists of three novel components. First, we a propose a fast and relatively accurate feature-based deep learning model to provide datasetwide segmentation predictions. Second, we propose an information theory measure to estimate the prediction quality and for ordering subsequent fast and meaningful shape selection. Our experiments show that such suggestive ordering helps reduce users time and effort, produce high quality predictions, and construct a model that generalizes well. Finally, we provide effective segmentation refinement features to help the user quickly correct any incorrect predictions. We show that our framework is more accurate and in general more efficient than state-of-the-art, for massive dataset segmentation with while also providing consistent segment boundaries.",
"title": ""
},
{
"docid": "bb240f2e536e5e5cd80fcca8c9d98171",
"text": "We propose a novel metaphor interpretation method, Meta4meaning. It provides interpretations for nominal metaphors by generating a list of properties that the metaphor expresses. Meta4meaning uses word associations extracted from a corpus to retrieve an approximation to properties of concepts. Interpretations are then obtained as an aggregation or difference of the saliences of the properties to the tenor and the vehicle. We evaluate Meta4meaning using a set of humanannotated interpretations of 84 metaphors and compare with two existing methods for metaphor interpretation. Meta4meaning significantly outperforms the previous methods on this task.",
"title": ""
},
{
"docid": "4cda02d9f5b5b16773b8cbffc54e91ca",
"text": "We present a novel global stereo model designed for view interpolation. Unlike existing stereo models which only output a disparity map, our model is able to output a 3D triangular mesh, which can be directly used for view interpolation. To this aim, we partition the input stereo images into 2D triangles with shared vertices. Lifting the 2D triangulation to 3D naturally generates a corresponding mesh. A technical difficulty is to properly split vertices to multiple copies when they appear at depth discontinuous boundaries. To deal with this problem, we formulate our objective as a two-layer MRF, with the upper layer modeling the splitting properties of the vertices and the lower layer optimizing a region-based stereo matching. Experiments on the Middlebury and the Herodion datasets demonstrate that our model is able to synthesize visually coherent new view angles with high PSNR, as well as outputting high quality disparity maps which rank at the first place on the new challenging high resolution Middlebury 3.0 benchmark.",
"title": ""
},
{
"docid": "7bef0f8e1df99d525f3d2356bd129e45",
"text": "The term 'participation' is traditionally used in HCI to describe the involvement of users and stakeholders in design processes, with a pretext of distributing control to participants to shape their technological future. In this paper we ask whether these values can hold up in practice, particularly as participation takes on new meanings and incorporates new perspectives. We argue that much HCI research leans towards configuring participation. In exploring this claim we explore three questions that we consider important for understanding how HCI configures participation; Who initiates, directs and benefits from user participation in design? In what forms does user participation occur? How is control shared with users in design? In answering these questions we consider the conceptual, ethical and pragmatic problems this raises for current participatory HCI research. Finally, we offer directions for future work explicitly dealing with the configuration of participation.",
"title": ""
},
{
"docid": "60e16b0c5bff9f7153c64a38193b8759",
"text": "The “Flash Crash” of May 6th, 2010 comprised an unprecedented 1,000 point, five-minute decline in the Dow Jones Industrial Average that was followed by a rapid, disorderly recovery of prices. We illuminate the causes of this singular event with the first analysis that tracks the full order book activity at millisecond granularity. We document previously overlooked market data anomalies and establish that these anomalies Granger-caused liquidity withdrawal. We offer a simulation model that formalizes the process by which large sell orders, combined with widespread liquidity withdrawal, can generate Flash Crash-like events in the absence of fundamental information arrival. ∗This work was supported by the Hellman Fellows Fund and the Rock Center for Corporate Governance at Stanford University. †Email: ealdrich@ucsc.edu. ‡Email: grundfest@stanford.edu §Email: gregory.laughlin@yale.edu",
"title": ""
},
{
"docid": "aab6a2166b9d39a67ec9ebb127f0956a",
"text": "A heuristic approximation algorithm that can optimise the order of firewall rules to minimise packet matching is presented. It has been noted that firewall operators tend to make use of the fact that some firewall rules match most of the traffic, and conversely that others match little of the traffic. Consequently, ordering the rules such that the highest matched rules are as high in the table as possible reduces the processing load in the firewall. Due to dependencies between rules in the rule set this problem, optimising the cost of the packet matching process, has been shown to be NP-hard. This paper proposes an algorithm that is designed to give good performance in terms of minimising the packet matching cost of the firewall. The performance of the algorithm is related to complexity of the firewall rule set and is compared to an alternative algorithm demonstrating that the algorithm here has improved the packet matching cost in all cases.",
"title": ""
}
] |
scidocsrr
|
d1b33ce49666fa755a6cd629a1faaf25
|
Simplified modeling and identification approach for model-based control of parallel mechanism robot leg
|
[
{
"docid": "69e381983f7af393ee4bbb62bb587a4e",
"text": "This paper presents the design principles for highly efficient legged robots, the implementation of the principles in the design of the MIT Cheetah, and the analysis of the high-speed trotting experimental results. The design principles were derived by analyzing three major energy-loss mechanisms in locomotion: heat losses from the actuators, friction losses in transmission, and the interaction losses caused by the interface between the system and the environment. Four design principles that minimize these losses are discussed: employment of high torque-density motors, energy regenerative electronic system, low loss transmission, and a low leg inertia. These principles were implemented in the design of the MIT Cheetah; the major design features are large gap diameter motors, regenerative electric motor drivers, single-stage low gear transmission, dual coaxial motors with composite legs, and the differential actuated spine. The experimental results of fast trotting are presented; the 33-kg robot runs at 22 km/h (6 m/s). The total power consumption from the battery pack was 973 W and resulted in a total cost of transport of 0.5, which rivals running animals' at the same scale. 76% of the total energy consumption is attributed to heat loss from the motor, and the remaining 24% is used in mechanical work, which is dissipated as interaction loss as well as friction losses at the joint and transmission.",
"title": ""
}
] |
[
{
"docid": "dd06c1c39e9b4a1ae9ee75c3251f27dc",
"text": "Magnetoencephalographic measurements (MEG) were used to examine the effect on the human auditory cortex of removing specific frequencies from the acoustic environment. Subjects listened for 3 h on three consecutive days to music \"notched\" by removal of a narrow frequency band centered on 1 kHz. Immediately after listening to the notched music, the neural representation for a 1-kHz test stimulus centered on the notch was found to be significantly diminished compared to the neural representation for a 0.5-kHz control stimulus centered one octave below the region of notching. The diminished neural representation for 1 kHz reversed to baseline between the successive listening sessions. These results suggest that rapid changes can occur in the tuning of neurons in the adult human auditory cortex following manipulation of the acoustic environment. A dynamic form of neural plasticity may underlie the phenomenon observed here.",
"title": ""
},
{
"docid": "c4256017c214eabda8e5b47c604e0e49",
"text": "In this paper, a multi-band antenna for 4G wireless systems is proposed. The proposed antenna consists of a modified planar inverted-F antenna with additional branch line for wide bandwidth and a folded monopole antenna. The antenna provides wide bandwidth for covering the hepta-band LTE/GSM/UMTS operation. The measured 6-dB return loss bandwidth was 169 MHz (793 MHz-962 MHz) at the low frequency band and 1030 MHz (1700 MHz-2730 MHz) at the high frequency band. The overall dimension of the proposed antenna is 55 mm × 110 mm × 5 mm.",
"title": ""
},
{
"docid": "386af0520255ebd048cff30961973624",
"text": "We present a linear optical receiver realized on 130 nm SiGe BiCMOS. Error-free operation assuming FEC is shown at bitrates up to 64 Gb/s (32 Gbaud) with 165mW power consumption, corresponding to 2.578 pJ/bit.",
"title": ""
},
{
"docid": "d52bfde050e6535645c324e7006a50e7",
"text": "Modern machine learning algorithms are increasingly computationally demanding, requiring specialized hardware and distributed computation to achieve high performance in a reasonable time frame. Many hyperparameter search algorithms have been proposed for improving the efficiency of model selection, however their adaptation to the distributed compute environment is often ad-hoc. We propose Tune, a unified framework for model selection and training that provides a narrow-waist interface between training scripts and search algorithms. We show that this interface meets the requirements for a broad range of hyperparameter search algorithms, allows straightforward scaling of search to large clusters, and simplifies algorithm implementation. We demonstrate the implementation of several state-of-the-art hyperparameter search algorithms in Tune. Tune is available at http://ray.readthedocs.io/en/latest/tune.html.",
"title": ""
},
{
"docid": "ba87ca7a07065e25593e6ae5c173669d",
"text": "The intelligence community (IC) is asked to predict outcomes that may often be inherently unpredictable-and is blamed for the inevitable forecasting failures, be they false positives or false negatives. To move beyond blame games of accountability ping-pong that incentivize bureaucratic symbolism over substantive reform, it is necessary to reach bipartisan agreements on performance indicators that are transparent enough to reassure clashing elites (to whom the IC must answer) that estimates have not been politicized. Establishing such transideological credibility requires (a) developing accuracy metrics for decoupling probability and value judgments; (b) using the resulting metrics as criterion variables in validity tests of the IC's selection, training, and incentive systems; and (c) institutionalizing adversarial collaborations that conduct level-playing-field tests of clashing perspectives.",
"title": ""
},
{
"docid": "51fec678a2e901fdf109d4836ef1bf34",
"text": "BACKGROUND\nFoot-and-mouth disease (FMD) is an acute, highly contagious disease that infects cloven-hoofed animals. Vaccination is an effective means of preventing and controlling FMD. Compared to conventional inactivated FMDV vaccines, the format of FMDV virus-like particles (VLPs) as a non-replicating particulate vaccine candidate is a promising alternative.\n\n\nRESULTS\nIn this study, we have developed a co-expression system in E. coli, which drove the expression of FMDV capsid proteins (VP0, VP1, and VP3) in tandem by a single plasmid. The co-expressed FMDV capsid proteins (VP0, VP1, and VP3) were produced in large scale by fermentation at 10 L scale and the chromatographic purified capsid proteins were auto-assembled as VLPs in vitro. Cattle vaccinated with a single dose of the subunit vaccine, comprising in vitro assembled FMDV VLP and adjuvant, developed FMDV-specific antibody response (ELISA antibodies and neutralizing antibodies) with the persistent period of 6 months. Moreover, cattle vaccinated with the subunit vaccine showed the high protection potency with the 50 % bovine protective dose (PD50) reaching 11.75 PD50 per dose.\n\n\nCONCLUSIONS\nOur data strongly suggest that in vitro assembled recombinant FMDV VLPs produced from E. coli could function as a potent FMDV vaccine candidate against FMDV Asia1 infection. Furthermore, the robust protein expression and purification approaches described here could lead to the development of industrial level large-scale production of E. coli-based VLPs against FMDV infections with different serotypes.",
"title": ""
},
{
"docid": "a774567d957ed0ea209b470b8eced563",
"text": "The vulnerability of the nervous system to advancing age is all too often manifest in neurodegenerative disorders such as Alzheimer's and Parkinson's diseases. In this review article we describe evidence suggesting that two dietary interventions, caloric restriction (CR) and intermittent fasting (IF), can prolong the health-span of the nervous system by impinging upon fundamental metabolic and cellular signaling pathways that regulate life-span. CR and IF affect energy and oxygen radical metabolism, and cellular stress response systems, in ways that protect neurons against genetic and environmental factors to which they would otherwise succumb during aging. There are multiple interactive pathways and molecular mechanisms by which CR and IF benefit neurons including those involving insulin-like signaling, FoxO transcription factors, sirtuins and peroxisome proliferator-activated receptors. These pathways stimulate the production of protein chaperones, neurotrophic factors and antioxidant enzymes, all of which help cells cope with stress and resist disease. A better understanding of the impact of CR and IF on the aging nervous system will likely lead to novel approaches for preventing and treating neurodegenerative disorders.",
"title": ""
},
{
"docid": "b5dc56272d4dea04b756a8614d6762c9",
"text": "Platforms have been considered as a paradigm for managing new product development and innovation. Since their introduction, studies on platforms have introduced multiple conceptualizations, leading to a fragmentation of research and different perspectives. By systematically reviewing the platform literature and combining bibliometric and content analyses, this paper examines the platform concept and its evolution, proposes a thematic classification, and highlights emerging trends in the literature. Based on this hybrid methodological approach (bibliometric and content analyses), the results show that platform research has primarily focused on issues that are mainly related to firms' internal aspects, such as innovation, modularity, commonality, and mass customization. Moreover, scholars have recently started to focus on new research themes, including managerial questions related to capability building, strategy, and ecosystem building based on platforms. As its main contributions, this paper improves the understanding of and clarifies the evolutionary trajectory of the platform concept, and identifies trends and emerging themes to be addressed in future studies.",
"title": ""
},
{
"docid": "9500dfc92149c5a808cec89b140fc0c3",
"text": "We present a new approach to the geometric alignment of a point cloud to a surface and to related registration problems. The standard algorithm is the familiar ICP algorithm. Here we provide an alternative concept which relies on instantaneous kinematics and on the geometry of the squared distance function of a surface. The proposed algorithm exhibits faster convergence than ICP; this is supported both by results of a local convergence analysis and by experiments.",
"title": ""
},
{
"docid": "a1bf728c54cec3f621a54ed23a623300",
"text": "Machine learning algorithms are now common in the state-ofthe-art spoken language understanding models. But to reach good performance they must be trained on a potentially large amount of data which are not available for a variety of tasks and languages of interest. In this work, we present a novel zero-shot learning method, based on word embeddings, allowing to derive a full semantic parser for spoken language understanding. No annotated in-context data are needed, the ontological description of the target domain and generic word embedding features (learned from freely available general domain data) suffice to derive the model. Two versions are studied with respect to how the model parameters and decoding step are handled, including an extension of the proposed approach in the context of conditional random fields. We show that this model, with very little supervision, can reach instantly performance comparable to those obtained by either state-of-the-art carefully handcrafted rule-based or trained statistical models for extraction of dialog acts on the Dialog State Tracking test datasets (DSTC2 and 3).",
"title": ""
},
{
"docid": "9941cd183e2c7b79d685e0e9cef3c43e",
"text": "We present a novel recursive Bayesian method in the DFT-domain to address the multichannel acoustic echo cancellation problem. We model the echo paths between the loudspeakers and the near-end microphone as a multichannel random variable with a first-order Markov property. The incorporation of the near-end observation noise, in conjunction with the multichannel Markov model, leads to a multichannel state-space model. We derive a recursive Bayesian solution to the multichannel state-space model, which turns out to be well suited for input signals that are not only auto-correlated but also cross-correlated. We show that the resulting multichannel state-space frequency-domain adaptive filter (MCSSFDAF) can be efficiently implemented due to the submatrix-diagonality of the state-error covariance. The filter offers optimal tracking and robust adaptation in the presence of near-end noise and echo path variability.",
"title": ""
},
{
"docid": "433e7a8c4d4a16f562f9ae112102526e",
"text": "Although both extrinsic and intrinsic factors have been identified that orchestrate the differentiation and maturation of oligodendrocytes, less is known about the intracellular signaling pathways that control the overall commitment to differentiate. Here, we provide evidence that activation of the mammalian target of rapamycin (mTOR) is essential for oligodendrocyte differentiation. Specifically, mTOR regulates oligodendrocyte differentiation at the late progenitor to immature oligodendrocyte transition as assessed by the expression of stage specific antigens and myelin proteins including MBP and PLP. Furthermore, phosphorylation of mTOR on Ser 2448 correlates with myelination in the subcortical white matter of the developing brain. We demonstrate that mTOR exerts its effects on oligodendrocyte differentiation through two distinct signaling complexes, mTORC1 and mTORC2, defined by the presence of the adaptor proteins raptor and rictor, respectively. Disrupting mTOR complex formation via siRNA mediated knockdown of raptor or rictor significantly reduced myelin protein expression in vitro. However, mTORC2 alone controlled myelin gene expression at the mRNA level, whereas mTORC1 influenced MBP expression via an alternative mechanism. In addition, investigation of mTORC1 and mTORC2 targets revealed differential phosphorylation during oligodendrocyte differentiation. In OPC-DRG cocultures, inhibiting mTOR potently abrogated oligodendrocyte differentiation and reduced numbers of myelin segments. These data support the hypothesis that mTOR regulates commitment to oligodendrocyte differentiation before myelination.",
"title": ""
},
{
"docid": "7c13132ef5b2d67c4a7e3039db252302",
"text": "Accurate estimation of the click-through rate (CTR) in sponsored ads significantly impacts the user search experience and businesses’ revenue, even 0.1% of accuracy improvement would yield greater earnings in the hundreds of millions of dollars. CTR prediction is generally formulated as a supervised classification problem. In this paper, we share our experience and learning on model ensemble design and our innovation. Specifically, we present 8 ensemble methods and evaluate them on our production data. Boosting neural networks with gradient boosting decision trees turns out to be the best. With larger training data, there is a nearly 0.9% AUC improvement in offline testing and significant click yield gains in online traffic. In addition, we share our experience and learning on improving the quality of training.",
"title": ""
},
{
"docid": "1d3007738c259cdf08f515849c7939b8",
"text": "Background: With an increase in the number of disciplines contributing to health literacy scholarship, we sought to explore the nature of interdisciplinary research in the field. Objective: This study sought to describe disciplines that contribute to health literacy research and to quantify how disciplines draw from and contribute to an interdisciplinary evidence base, as measured by citation networks. Methods: We conducted a literature search for health literacy articles published between 1991 and 2015 in four bibliographic databases, producing 6,229 unique bibliographic records. We employed a scientometric tool (CiteSpace [Version 4.4.R1]) to quantify patterns in published health literacy research, including a visual path from cited discipline domains to citing discipline domains. Key Results: The number of health literacy publications increased each year between 1991 and 2015. Two spikes, in 2008 and 2013, correspond to the introduction of additional subject categories, including information science and communication. Two journals have been cited more than 2,000 times—the Journal of General Internal Medicine (n = 2,432) and Patient Education and Counseling (n = 2,252). The most recently cited journal added to the top 10 list of cited journals is the Journal of Health Communication (n = 989). Three main citation paths exist in the health literacy data set. Articles from the domain “medicine, medical, clinical” heavily cite from one domain (health, nursing, medicine), whereas articles from the domain “psychology, education, health” cite from two separate domains (health, nursing, medicine and psychology, education, social). Conclusions: Recent spikes in the number of published health literacy articles have been spurred by a greater diversity of disciplines contributing to the evidence base. However, despite the diversity of disciplines, citation paths indicate the presence of a few, self-contained disciplines contributing to most of the literature, suggesting a lack of interdisciplinary research. To address complex and evolving challenges in the health literacy field, interdisciplinary team science, that is, integrating science from across multiple disciplines, should continue to grow. [Health Literacy Research and Practice. 2017;1(4):e182-e191.] Plain Language Summary: The addition of diverse disciplines conducting health literacy scholarship has spurred recent spikes in the number of publications. However, citation paths suggest that interdisciplinary research can be strengthened. Findings directly align with the increasing emphasis on team science, and support opportunities and resources that incentivize interdisciplinary health literacy research. The study of health literacy has significantly expanded over the past decade. It represents a dynamic area of inquiry that extends to multiple disciplines. Health literacy emerged as a derivative of literacy and early definitions focused on the ability to read and understand medical instructions and health care information (Parker, Baker, Williams, & Nurss, 1995; Williams et al., 1995). This early work led to a body of research demonstrating that people with low health literacy generally had poorer health outcomes, including lower levels of screening and medication adherence rates (Baker,",
"title": ""
},
{
"docid": "cdc276a3c4305d6c7ba763332ae933cc",
"text": "Synthetic aperture radar (SAR) image classification is a fundamental process for SAR image understanding and interpretation. With the advancement of imaging techniques, it permits to produce higher resolution SAR data and extend data amount. Therefore, intelligent algorithms for high-resolution SAR image classification are demanded. Inspired by deep learning technology, an end-to-end classification model from the original SAR image to final classification map is developed to automatically extract features and conduct classification, which is named deep recurrent encoding neural networks (DRENNs). In our proposed framework, a spatial feature learning network based on long–short-term memory (LSTM) is developed to extract contextual dependencies of SAR images, where 2-D image patches are transformed into 1-D sequences and imported into LSTM to learn the latent spatial correlations. After LSTM, nonnegative and Fisher constrained autoencoders (NFCAEs) are proposed to improve the discrimination of features and conduct final classification, where nonnegative constraint and Fisher constraint are developed in each autoencoder to restrict the training of the network. The whole DRENN not only combines the spatial feature learning power of LSTM but also utilizes the discriminative representation ability of our NFCAE to improve the classification performance. The experimental results tested on three SAR images demonstrate that the proposed DRENN is able to learn effective feature representations from SAR images and produce competitive classification accuracies to other related approaches.",
"title": ""
},
{
"docid": "b52cadf9e20eebfd388c09c51cff2d74",
"text": "Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations and even for MNIST, one of the most common toy datasets in computer vision, no neural network model exists for which adversarial perturbations are large and make semantic sense to humans. We show that even the widely recognized and by far most successful defense by Madry et al. (1) overfits on the L∞ metric (it’s highly susceptible to L2 and L0 perturbations), (2) classifies unrecognizable images with high certainty, (3) performs not much better than simple input binarization and (4) features adversarial perturbations that make little sense to humans. These results suggest that MNIST is far from being solved in terms of adversarial robustness. We present a novel robust classification model that performs analysis by synthesis using learned class-conditional data distributions. We derive bounds on the robustness and go to great length to empirically evaluate our model using maximally effective adversarial attacks by (a) applying decisionbased, score-based, gradient-based and transfer-based attacks for several different Lp norms, (b) by designing a new attack that exploits the structure of our defended model and (c) by devising a novel decision-based attack that seeks to minimize the number of perturbed pixels (L0). The results suggest that our approach yields state-of-the-art robustness on MNIST against L0, L2 and L∞ perturbations and we demonstrate that most adversarial examples are strongly perturbed towards the perceptual boundary between the original and the adversarial class.",
"title": ""
},
{
"docid": "6e0877f16e624bef547f76b80278f760",
"text": "The importance of storytelling as the foundation of human experiences cannot be overestimated. The oral traditions focus upon educating and transmitting knowledge and skills and also evolved into one of the earliest methods of communicating scientific discoveries and developments. A wide ranging search of the storytelling, education and health-related literature encompassing the years 1975-2007 was performed. Evidence from disparate elements of education and healthcare were used to inform an exploration of storytelling. This conceptual paper explores the principles of storytelling, evaluates the use of storytelling techniques in education in general, acknowledges the role of storytelling in healthcare delivery, identifies some of the skills learned and benefits derived from storytelling, and speculates upon the use of storytelling strategies in nurse education. Such stories have, until recently been harvested from the experiences of students and of educators, however, there is a growing realization that patients and service users are a rich source of healthcare-related stories that can affect, change and benefit clinical practice. The use of technology such as the Internet discussion boards or digitally-facilitated storytelling has an evolving role in ensuring that patient-generated and experiential stories have a future within nurse education.",
"title": ""
},
{
"docid": "64770c350dc1d260e24a43760d4e641b",
"text": "A first step in the task of automatically generating questions for testing reading comprehension is to identify questionworthy sentences, i.e. sentences in a text passage that humans find it worthwhile to ask questions about. We propose a hierarchical neural sentence-level sequence tagging model for this task, which existing approaches to question generation have ignored. The approach is fully data-driven — with no sophisticated NLP pipelines or any hand-crafted rules/features — and compares favorably to a number of baselines when evaluated on the SQuAD data set. When incorporated into an existing neural question generation system, the resulting end-to-end system achieves stateof-the-art performance for paragraph-level question generation for reading comprehension.",
"title": ""
},
{
"docid": "76eef8117ac0bc5dbb0529477d10108d",
"text": "Most existing switched-capacitor (SC) DC-DC converters only offer a few voltage conversion ratios (VCRs), leading to significant efficiency fluctuations under wide input/output dynamics (e.g. up to 30% in [1]). Consequently, systematic SC DC-DC converters with fine-grained VCRs (FVCRs) become attractive to achieve high efficiency over a wide operating range. Both the Recursive SC (RSC) [2,3] and Negator-based SC (NSC) [4] topologies offer systematic FVCR generations with high conductance, but their binary-switching nature fundamentally results in considerable parasitic loss. In bulk CMOS, the restriction of using low-parasitic MIM capacitors for high efficiency ultimately limits their achievable power density to <1mW/mm2. This work reports a fully integrated fine-grained buck-boost SC DC-DC converter with 24 VCRs. It features an algorithmic voltage-feed-in (AVFI) topology to systematically generate any arbitrary buck-boost rational ratio with optimal conduction loss while achieving the lowest parasitic loss compared with [2,4]. With 10 main SC cells (MCs) and 10 auxiliary SC cells (ACs) controlled by the proposed reference-selective bootstrapping driver (RSBD) for wide-range efficient buck-boost operations, the AVFI converter in 65nm bulk CMOS achieves a peak efficiency of 84.1% at a power density of 13.2mW/mm2 over a wide range of input (0.22 to 2.4V) and output (0.85 to 1.2V).",
"title": ""
},
{
"docid": "32b96d4d23a03b1828f71496e017193e",
"text": "Camera-based lane detection algorithms are one of the key enablers for many semi-autonomous and fullyautonomous systems, ranging from lane keep assist to level-5 automated vehicles. Positioning a vehicle between lane boundaries is the core navigational aspect of a self-driving car. Even though this should be trivial, given the clarity of lane markings on most standard roadway systems, the process is typically mired with tedious pre-processing and computational effort. We present an approach to estimate lane positions directly using a deep neural network that operates on images from laterally-mounted down-facing cameras. To create a diverse training set, we present a method to generate semi-artificial images. Besides the ability to distinguish whether there is a lane-marker present or not, the network is able to estimate the position of a lane marker with sub-centimeter accuracy at an average of 100 frames/s on an embedded automotive platform, requiring no pre-or post-processing. This system can be used not only to estimate lane position for navigation, but also provide an efficient way to validate the robustness of driver-assist features which depend on lane information.",
"title": ""
}
] |
scidocsrr
|
bf9b0467a5e1296d564a445a814627a9
|
Software-defined wireless network architectures for the Internet-of-Things
|
[
{
"docid": "5a83cb0ef928b6cae6ce1e0b21d47f60",
"text": "Software defined networking, characterized by a clear separation of the control and data planes, is being adopted as a novel paradigm for wired networking. With SDN, network operators can run their infrastructure more efficiently, supporting faster deployment of new services while enabling key features such as virtualization. In this article, we adopt an SDN-like approach applied to wireless mobile networks that will not only benefit from the same features as in the wired case, but will also leverage on the distinct features of mobile deployments to push improvements even further. We illustrate with a number of representative use cases the benefits of the adoption of the proposed architecture, which is detailed in terms of modules, interfaces, and high-level signaling. We also review the ongoing standardization efforts, and discuss the potential advantages and weaknesses, and the need for a coordinated approach.",
"title": ""
}
] |
[
{
"docid": "cf0b2ec813ac12c7cd3f3cbf7c133650",
"text": "Cloud computing offers utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of electrical energy, contributing to high operational costs and carbon footprints to the environment. Therefore, we need Green Cloud computing solutions that can not only save energy for the environment but also reduce operational costs. In this paper, we define an architectural framework and principles for energy-efficient Cloud computing. Based on this architecture, we present a vision, challenges, and resource provisioning and allocation algorithms for energy-efficient management of Cloud computing environments. The proposed energy-aware allocation heuristics provision data center resources to client applications in a way that improves energy efficiency of the data center, while delivering the negotiated Quality of Service (QoS). In particular, in this paper we propose: (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms considering QoS expectations, and devices power usage characteristics; and (c) a number of open research challenges, addressing which can bring substantial benefits to both resource providers and consumers. We have validated our approach by conducting a rigorous performance evaluation study using the CloudSim toolkit. The results demonstrate that Cloud computing model has immense potential as it offers significant cost savings and demonstrates high potential for the improvement of energy efficiency under dynamic workload scenarios.",
"title": ""
},
{
"docid": "e465b9a38e7649f541ab9e419103b362",
"text": "Spoken language based intelligent assistants (IAs) have been developed for a number of domains but their functionality has mostly been confined to the scope of a given app. One reason is that it’s is difficult for IAs to infer a user’s intent without access to relevant context and unless explicitly implemented, context is not available across app boundaries. We describe context-aware multi-app dialog systems that can learn to 1) identify meaningful user intents; 2) produce natural language representation for the semantics of such intents; and 3) predict user intent as they engage in multi-app tasks. As part of our work we collected data from the smartphones of 14 users engaged in real-life multi-app tasks. We found that it is reasonable to group tasks into high-level intentions. Based on the dialog content, IA can generate useful phrases to describe the intention. We also found that, with readily available contexts, IAs can effectively predict user’s intents during conversation, with accuracy at 58.9%.",
"title": ""
},
{
"docid": "f63da8e7659e711bcb7a148ea12a11f2",
"text": "We have presented two CCA-based approaches for data fusion and group analysis of biomedical imaging data and demonstrated their utility on fMRI, sMRI, and EEG data. The results show that CCA and M-CCA are powerful tools that naturally allow the analysis of multiple data sets. The data fusion and group analysis methods presented are completely data driven, and use simple linear mixing models to decompose the data into their latent components. Since CCA and M-CCA are based on second-order statistics they provide a relatively lessstrained solution as compared to methods based on higherorder statistics such as ICA. While this can be advantageous, the flexibility also tends to lead to solutions that are less sparse than those obtained using assumptions of non-Gaussianity-in particular superGaussianity-at times making the results more difficult to interpret. Thus, it is important to note that both approaches provide complementary perspectives, and hence it is beneficial to study the data using different analysis techniques.",
"title": ""
},
{
"docid": "51be236c79d1af7a2aff62a8049fba34",
"text": "BACKGROUND\nAs the number of children diagnosed with autism continues to rise, resources must be available to support parents of children with autism and their families. Parents need help as they assess their unique situations, reach out for help in their communities, and work to decrease their stress levels by using appropriate coping strategies that will benefit their entire family.\n\n\nMETHODS\nA descriptive, correlational, cross-sectional study was conducted with 75 parents/primary caregivers of children with autism. Using the McCubbin and Patterson model of family behavior, adaptive behaviors of children with autism, family support networks, parenting stress, and parent coping were measured.\n\n\nFINDINGS AND CONCLUSIONS\nAn association between low adaptive functioning in children with autism and increased parenting stress creates a need for additional family support as parents search for different coping strategies to assist the family with ongoing and new challenges. Professionals should have up-to-date knowledge of the supports available to families and refer families to appropriate resources to avoid overwhelming them with unnecessary and inappropriate referrals.",
"title": ""
},
{
"docid": "13a64221ff915439d846481050e52108",
"text": "This paper proposes a new maximum power point tracking (MPPT) method for photovoltaic (PV) systems by using Kalman filter. A Perturbation & Observation (P&O) method is widely used presently due to its easy implementation and simplicity. The P&O usually requires of dithering scheme to reduce noise effects, but it slows the tracking response. Tracking speed is the most important factor on improving efficiency in frequent environmental changes. The proposed method is based on the Kalman filter. It shows the fast tracking performance on noisy conditions, so that enables to generate more power in rapid weather changes than the P&O. Simulation results are provided the comparison between the proposed method and P&O on time responses for conditions of sudden system restart and sudden irradiance change.",
"title": ""
},
{
"docid": "013ec46500a6419c371924b98dac7730",
"text": "A four-quadrant CMOS analog multiplier is presented. The device is nominafly biased with +5-V supplies, has identicaf full-scafe single-ended x and y inputs of +4 V, and exhibits less than 0.5 percent Manuscript received March 1, 1987; revised July 18, 1987. The authors are with the Department of Electrical Engineering, Texas A&M University, College Station, TX 77843-3128. IEEE Log Number 8716852. + I–— ————_—",
"title": ""
},
{
"docid": "5c898e311680199f1f369d3c264b2b14",
"text": "Behaviour Driven Development (BDD) has gained increasing attention as an agile development approach in recent years. However, characteristics that constituite the BDD approach are not clearly defined. In this paper, we present a set of main BDD charactersitics identified through an analysis of relevant literature and current BDD toolkits. Our study can provide a basis for understanding BDD, as well as for extending the exisiting BDD toolkits or developing new ones.",
"title": ""
},
{
"docid": "b9d78a4f1fc6587557057125343675ab",
"text": "We propose a new computational approach for tracking and detecting statistically significant linguistic shifts in the meaning and usage of words. Such linguistic shifts are especially prevalent on the Internet, where the rapid exchange of ideas can quickly change a word's meaning. Our meta-analysis approach constructs property time series of word usage, and then uses statistically sound change point detection algorithms to identify significant linguistic shifts. We consider and analyze three approaches of increasing complexity to generate such linguistic property time series, the culmination of which uses distributional characteristics inferred from word co-occurrences. Using recently proposed deep neural language models, we first train vector representations of words for each time period. Second, we warp the vector spaces into one unified coordinate system. Finally, we construct a distance-based distributional time series for each word to track its linguistic displacement over time.\n We demonstrate that our approach is scalable by tracking linguistic change across years of micro-blogging using Twitter, a decade of product reviews using a corpus of movie reviews from Amazon, and a century of written books using the Google Book Ngrams. Our analysis reveals interesting patterns of language usage change commensurate with each medium.",
"title": ""
},
{
"docid": "8f53acbe65e2b98efe5b3018c27d28a7",
"text": "Oracle Materialized Views (MVs) are designed for data warehousing and replication. For data warehousing, MVs based on inner/outer equijoins with optional aggregation, can be refreshed on transaction boundaries, on demand, or periodically. Refreshes are optimized for bulk loads and can use a multi-MV scheduler. MVs based on subqueries on remote tables support bidirectional replication. Optimization with MVs includes transparent query rewrite based on costbased selection method. The ability to rewrite a large class of queries based on a small set of MVs is supported by using Dimensions (new Oracle object), losslessness of joins, functional dependency, column equivalence, join derivability, joinback and aggregate rollup.",
"title": ""
},
{
"docid": "867a6923a650bdb1d1ec4f04cda37713",
"text": "We examine Gärdenfors’ theory of conceptual spaces, a geometrical form of knowledge representation (Conceptual spaces: The geometry of thought, MIT Press, Cambridge, 2000), in the context of the general Creative Systems Framework introduced by Wiggins (J Knowl Based Syst 19(7):449–458, 2006a; New Generation Comput 24(3):209–222, 2006b). Gärdenfors’ theory offers a way of bridging the traditional divide between symbolic and sub-symbolic representations, as well as the gap between representational formalism and meaning as perceived by human minds. We discuss how both these qualities may be advantageous from the point of view of artificial creative systems. We take music as our example domain, and discuss how a range of musical qualities may be instantiated as conceptual spaces, and present a detailed conceptual space formalisation of musical metre.",
"title": ""
},
{
"docid": "ba57149e82718bad622df36852906531",
"text": "The classical psychedelic drugs, including psilocybin, lysergic acid diethylamide and mescaline, were used extensively in psychiatry before they were placed in Schedule I of the UN Convention on Drugs in 1967. Experimentation and clinical trials undertaken prior to legal sanction suggest that they are not helpful for those with established psychotic disorders and should be avoided in those liable to develop them. However, those with so-called 'psychoneurotic' disorders sometimes benefited considerably from their tendency to 'loosen' otherwise fixed, maladaptive patterns of cognition and behaviour, particularly when given in a supportive, therapeutic setting. Pre-prohibition studies in this area were sub-optimal, although a recent systematic review in unipolar mood disorder and a meta-analysis in alcoholism have both suggested efficacy. The incidence of serious adverse events appears to be low. Since 2006, there have been several pilot trials and randomised controlled trials using psychedelics (mostly psilocybin) in various non-psychotic psychiatric disorders. These have provided encouraging results that provide initial evidence of safety and efficacy, however the regulatory and legal hurdles to licensing psychedelics as medicines are formidable. This paper summarises clinical trials using psychedelics pre and post prohibition, discusses the methodological challenges of performing good quality trials in this area and considers a strategic approach to the legal and regulatory barriers to licensing psychedelics as a treatment in mainstream psychiatry. This article is part of the Special Issue entitled 'Psychedelics: New Doors, Altered Perceptions'.",
"title": ""
},
{
"docid": "6f77e74cd8667b270fae0ccc673b49a5",
"text": "GeneMANIA (http://www.genemania.org) is a flexible, user-friendly web interface for generating hypotheses about gene function, analyzing gene lists and prioritizing genes for functional assays. Given a query list, GeneMANIA extends the list with functionally similar genes that it identifies using available genomics and proteomics data. GeneMANIA also reports weights that indicate the predictive value of each selected data set for the query. Six organisms are currently supported (Arabidopsis thaliana, Caenorhabditis elegans, Drosophila melanogaster, Mus musculus, Homo sapiens and Saccharomyces cerevisiae) and hundreds of data sets have been collected from GEO, BioGRID, Pathway Commons and I2D, as well as organism-specific functional genomics data sets. Users can select arbitrary subsets of the data sets associated with an organism to perform their analyses and can upload their own data sets to analyze. The GeneMANIA algorithm performs as well or better than other gene function prediction methods on yeast and mouse benchmarks. The high accuracy of the GeneMANIA prediction algorithm, an intuitive user interface and large database make GeneMANIA a useful tool for any biologist.",
"title": ""
},
{
"docid": "c62fc94fc0fe403f3a416d897b6b9336",
"text": "Nutrigenomics is the application of high-throughput genomics tools in nutrition research. Applied wisely, it will promote an increased understanding of how nutrition influences metabolic pathways and homeostatic control, how this regulation is disturbed in the early phase of a diet-related disease and to what extent individual sensitizing genotypes contribute to such diseases. Ultimately, nutrigenomics will allow effective dietary-intervention strategies to recover normal homeostasis and to prevent diet-related diseases.",
"title": ""
},
{
"docid": "b6ceacf3ad3773acddc3452933b57a0f",
"text": "The growing interest in robots that interact safely with humans and surroundings have prompted the need for soft structural embodiments including soft actuators. This paper explores a class of soft actuators inspired in design and construction by Pneumatic Artificial Muscles (PAMs) or McKibben Actuators. These bio-inspired actuators consist of fluid-filled elastomeric enclosures that are reinforced with fibers along a specified orientation and are in general referred to as Fiber-Reinforced Elastomeric Enclosures (FREEs). Several recent efforts have mapped the fiber configurations to instantaneous deformation, forces, and moments generated by these actuators upon pressurization with fluid. However most of the actuators, when deployed undergo large deformations and large overall motions thus necessitating the study of their large-deformation kinematics. This paper analyzes the large deformation kinematics of FREEs. A concept called configuration memory effect is proposed to explain the smart nature of these actuators. This behavior is tested with experiments and finite element modeling for a small sample of actuators. The paper also describes different possibilities and design implications of the large deformation behavior of FREEs in successful creation of soft robots.",
"title": ""
},
{
"docid": "cb8ffb03187583308eb8409d75a54172",
"text": "Active Traffic Management (ATM) systems have been introduced by transportation agencies to manage recurrent and non-recurrent congestion. ATM systems rely on the interconnectivity of components made possible by wired and/or wireless networks. Unfortunately, this connectivity that supports ATM systems also provides potential system access points that results in vulnerability to cyberattacks. This is becoming more pronounced as ATM systems begin to integrate internet of things (IoT) devices. Hence, there is a need to rigorously evaluate ATM systems for cyberattack vulnerabilities, and explore design concepts that provide stability and graceful degradation in the face of cyberattacks. In this research, a prototype ATM system along with a real-time cyberattack monitoring system were developed for a 1.5-mile section of I-66 in Northern Virginia. The monitoring system detects deviation from expected operation of an ATM system by comparing lane control states generated by the ATM system with lane control states deemed most likely by the monitoring system. This comparison provides the functionality to continuously monitor the system for abnormalities that would result from a cyberattack. In case of any deviation between two sets of states, the monitoring system displays the lane control states generated by the back-up data source. In a simulation experiment, the prototype ATM system and cyberattack monitoring system were subject to emulated cyberattacks. The evaluation results showed that the ATM system, when operating properly in the absence of attacks, improved average vehicle speed in the system to 60mph (a 13% increase compared to the baseline case without ATM). However, when subject to cyberattack, the mean speed reduced by 15% compared to the case with the ATM system and was similar to the baseline case. This illustrates that the effectiveness of the ATM system was negated by cyberattacks. The monitoring system however, allowed the ATM system to revert to an expected state with a mean speed of 59mph and reduced the negative impact of cyberattacks. These results illustrate the need to revisit ATM system design concepts as a means to protect against cyberattacks in addition to traditional system intrusion prevention approaches.",
"title": ""
},
{
"docid": "4737fe7f718f79c74595de40f8778da2",
"text": "In this paper we describe a method of procedurally generating maps using Markov chains. This method learns statistical patterns from human-authored maps, which are assumed to be of high quality. Our method then uses those learned patterns to generate new maps. We present a collection of strategies both for training the Markov chains, and for generating maps from such Markov chains. We then validate our approach using the game Super Mario Bros., by evaluating the quality of the produced maps based on different configurations for training and generation.",
"title": ""
},
{
"docid": "eb8e210fe9704a23157baffd36f1bdbb",
"text": "This paper describes recent work on the DynDial project ∗ towards incremental semantic interpretation in dialogue. We outline our domain-general gramm r-based approach, using a variant of Dynamic Syntax integrated with Type Theory with Records and Davidsonian event-based semantics. We describe a Java-based implementation of the parser , u d within the Jindigo framework to produce an incremental dialogue system capable of handling inherently incremental phenomena such as split utterances, adjuncts, and mid-sentence clarificat ion requests or backchannels.",
"title": ""
},
{
"docid": "5b984d57ad0940838b703eadd7c733b3",
"text": "Neural sequence generation is commonly approached by using maximumlikelihood (ML) estimation or reinforcement learning (RL). However, it is known that they have their own shortcomings; ML presents training/testing discrepancy, whereas RL suffers from sample inefficiency. We point out that it is difficult to resolve all of the shortcomings simultaneously because of a tradeoff between ML and RL. In order to counteract these problems, we propose an objective function for sequence generation using α-divergence, which leads to an ML-RL integrated method that exploits better parts of ML and RL. We demonstrate that the proposed objective function generalizes ML and RL objective functions because it includes both as its special cases (ML corresponds to α→ 0 and RL to α→ 1). We provide a proposition stating that the difference between the RL objective function and the proposed one monotonically decreases with increasing α. Experimental results on machine translation tasks show that minimizing the proposed objective function achieves better sequence generation performance than ML-based methods.",
"title": ""
},
{
"docid": "b8087b15edb4be5771aef83b1b18f723",
"text": "The success of visual telecommunication systems depends on their ability to transmit and display users' natural nonverbal behavior. While video-mediated communication (VMC) is the most widely used form of interpersonal remote interaction, avatar-mediated communication (AMC) in shared virtual environments is increasingly common. This paper presents two experiments investigating eye tracking in AMC. The first experiment compares the degree of social presence experienced in AMC and VMC during truthful and deceptive discourse. Eye tracking data (gaze, blinking, and pupil size) demonstrates that oculesic behavior is similar in both mediation types, and uncovers systematic differences between truth telling and lying. Subjective measures show users' psychological arousal to be greater in VMC than AMC. The second experiment demonstrates that observers of AMC can more accurately detect truth and deception when viewing avatars with added oculesic behavior driven by eye tracking. We discuss implications for the design of future visual telecommunication media interfaces.",
"title": ""
},
{
"docid": "d4896aa12be18aea9a6639422ee12d92",
"text": "Recently, tag recommendation (TR) has become a very hot research topic in data mining and related areas. However, neither co-occurrence based methods which only use the item-tag matrix nor content based methods which only use the item content information can achieve satisfactory performance in real TR applications. Hence, how to effectively combine the item-tag matrix, item content information, and other auxiliary information into the same recommendation framework is the key challenge for TR. In this paper, we first adapt the collaborative topic regression (CTR) model, which has been successfully applied for article recommendation, to combine both item-tag matrix and item content information for TR. Furthermore, by extending CTR we propose a novel hierarchical Bayesian model, called CTR with social regularization (CTR-SR), to seamlessly integrate the item-tag matrix, item content information, and social networks between items into the same principled model. Experiments on real data demonstrate the effectiveness of our proposed models.",
"title": ""
}
] |
scidocsrr
|
a739180950471aa7d7261ab1a8b9800f
|
Example-dependent cost-sensitive decision trees
|
[
{
"docid": "b4bc5ccbe0929261856d18272c47a3de",
"text": "ROC analysis is increasingly being recognised as an important tool for evaluation and comparison of classifiers when the operating characteristics (i.e. class distribution and cost parameters) are not known at training time. Usually, each classifier is characterised by its estimated true and false positive rates and is represented by a single point in the ROC diagram. In this paper, we show how a single decision tree can represent a set of classifiers by choosing different labellings of its leaves, or equivalently, an ordering on the leaves. In this setting, rather than estimating the accuracy of a single tree, it makes more sense to use the area under the ROC curve (AUC) as a quality metric. We also propose a novel splitting criterion which chooses the split with the highest local AUC. To the best of our knowledge, this is the first probabilistic splitting criterion that is not based on weighted average impurity. We present experiments suggesting that the AUC splitting criterion leads to trees with equal or better AUC value, without sacrificing accuracy if a single labelling is chosen.",
"title": ""
},
{
"docid": "dbf5d0f6ce7161f55cf346e46150e8d7",
"text": "Loan fraud is a critical factor in the insolvency of financial institutions, so companies make an effort to reduce the loss from fraud by building a model for proactive fraud prediction. However, there are still two critical problems to be resolved for the fraud detection: (1) the lack of cost sensitivity between type I error and type II error in most prediction models, and (2) highly skewed distribution of class in the dataset used for fraud detection because of sparse fraud-related data. The objective of this paper is to examine whether classification cost is affected both by the cost-sensitive approach and by skewed distribution of class. To that end, we compare the classification cost incurred by a traditional cost-insensitive classification approach and two cost-sensitive classification approaches, Cost-Sensitive Classifier (CSC) and MetaCost. Experiments were conducted with a credit loan dataset from a major financial institution in Korea, while varying the distribution of class in the dataset and the number of input variables. The experiments showed that the lowest classification cost was incurred when the MetaCost approach was used and when non-fraud data and fraud data were balanced. In addition, the dataset that includes all delinquency variables was shown to be most effective on reducing the classification cost. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e9698e55abb8cee0f3a5663517bd0037",
"text": "0377-2217/$ see front matter 2008 Elsevier B.V. A doi:10.1016/j.ejor.2008.06.027 * Corresponding author. Tel.: +32 16326817. E-mail address: Nicolas.Glady@econ.kuleuven.ac.b The definition and modeling of customer loyalty have been central issues in customer relationship management since many years. Recent papers propose solutions to detect customers that are becoming less loyal, also called churners. The churner status is then defined as a function of the volume of commercial transactions. In the context of a Belgian retail financial service company, our first contribution is to redefine the notion of customer loyalty by considering it from a customer-centric viewpoint instead of a product-centric one. We hereby use the customer lifetime value (CLV) defined as the discounted value of future marginal earnings, based on the customer’s activity. Hence, a churner is defined as someone whose CLV, thus the related marginal profit, is decreasing. As a second contribution, the loss incurred by the CLV decrease is used to appraise the cost to misclassify a customer by introducing a new loss function. In the empirical study, we compare the accuracy of various classification techniques commonly used in the domain of churn prediction, including two cost-sensitive classifiers. Our final conclusion is that since profit is what really matters in a commercial environment, standard statistical accuracy measures for prediction need to be revised and a more profit oriented focus may be desirable. 2008 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "41d9e95f3a761064a57da051e809dc44",
"text": "The behaviour of a driven double well Duffing-van der Pol (DVP) oscillator for a specific parametric choice (| α |= β) is studied. The existence of different attractors in the system parameters (f − ω) domain is examined and a detailed account of various steady states for fixed damping is presented. Transition from quasiperiodic to periodic motion through chaotic oscillations is reported. The intervening chaotic regime is further shown to possess islands of phase-locked states and periodic windows (including period doubling regions), boundary crisis, all the three classes of intermittencies, and transient chaos. We also observe the existence of local-global bifurcation of intermittent catastrophe type and global bifurcation of blue-sky catastrophe type during transition from quasiperiodic to periodic solutions. Using a perturbative periodic solution, an investigation of the various forms of instablities allows one to predict Neimark instablity in the (f − ω) plane and eventually results in the approximate predictive criteria for the chaotic region.",
"title": ""
},
{
"docid": "8eb2a660107b304caf574bdf7fad3f23",
"text": "To enhance torque density by harmonic current injection, optimal slot/pole combinations for five-phase permanent magnet synchronous motors (PMSM) with fractional-slot concentrated windings (FSCW) are chosen. The synchronous and the third harmonic winding factors are calculated for a series of slot/pole combinations. Two five-phase PMSM, with general FSCW (GFSCW) and modular stator and FSCW (MFSCW), are analyzed and compared in detail, including the stator structures, star of slots diagrams, and MMF harmonic analysis based on the winding function theory. The analytical results are verified by finite element method, the torque characteristics and phase back-EMF are also taken into considerations. Results show that the MFSCW PMSM can produce higher average torque, while characterized by more MMF harmonic contents and larger ripple torque.",
"title": ""
},
{
"docid": "bf71f7f57def7633a5390b572e983bc9",
"text": "With the development of the Internet, cyber-attacks are changing rapidly and the cyber security situation is not optimistic. This survey report describes key literature surveys on machine learning (ML) and deep learning (DL) methods for network analysis of intrusion detection and provides a brief tutorial description of each ML/DL method. Papers representing each method were indexed, read, and summarized based on their temporal or thermal correlations. Because data are so important in ML/DL methods, we describe some of the commonly used network datasets used in ML/DL, discuss the challenges of using ML/DL for cybersecurity and provide suggestions for research directions.",
"title": ""
},
{
"docid": "65de85b6befbcb8cb66ceb4e4346d3a9",
"text": "BACKGROUND\nClinical observations have suggested that hippotherapy may be an effective strategy for habilitating balance deficits in children with movement disorders. However, there is limited research to support this notion.\n\n\nOBJECTIVE\nThe purposes of this study were to assess the effectiveness of hippotherapy for the management of postural instability in children with mild to moderate balance problems and to determine whether there is a correlation between balance and function.\n\n\nDESIGN\nA repeated-measures design for a cohort of children with documented balance deficits was used.\n\n\nMETHODS\nSixteen children (9 boys and 7 girls) who were 5 to 16 years of age and had documented balance problems participated in this study. Intervention consisted of 45-minute hippotherapy sessions twice per week for 6 weeks. Two baseline assessments and 1 postintervention assessment of balance, as measured with the Pediatric Balance Scale (PBS), and of function, as measured with the Activities Scale for Kids-Performance (ASKp), were performed.\n\n\nRESULTS\nWith the Friedman analysis of variance, the PBS and the ASKp were found to be statistically significant across all measurements (P<.0001 for both measures). Post hoc analysis revealed a statistical difference between baseline and postintervention measures (P≤.017). This degree of difference resulted in large effect sizes for PBS (d=1.59) and ASKp (d=1.51) scores after hippotherapy. A Spearman rho correlation of .700 indicated a statistical association between PBS and ASKp postintervention scores (P=.003). There was no correlation between the change in PBS scores and the change in ASKp scores (r(s)=.13, P>.05).\n\n\nLIMITATIONS\nLack of a control group and the short duration between baseline assessments are study limitations.\n\n\nCONCLUSIONS\nThe findings suggest that hippotherapy may be a viable strategy for reducing balance deficits and improving the performance of daily life skills in children with mild to moderate balance problems.",
"title": ""
},
{
"docid": "ecce348941aeda57bd66dbd7836923e6",
"text": "Moana (2016) continues a tradition of Disney princess movies that perpetuate gender stereotypes. The movie contains the usual Electral undercurrent, with Moana seeking to prove her independence to her overprotective father. Moana’s partner in her adventures, Maui, is overtly hypermasculine, a trait epitomized by a phallic fishhook that is critical to his identity. Maui’s struggles with shapeshifting also reflect male anxieties about performing masculinity. Maui violates the Mother Island, first by entering her cave and then by using his fishhook to rob her of her fertility. The repercussions of this act are the basis of the plot: the Mother Island abandons her form as a nurturing, youthful female (Te Fiti) focused on creation to become a vengeful lava monster (Te Kā). At the end, Moana successfully urges Te Kā to get in touch with her true self, a brave but simple act that is sufficient to bring back Te Fiti, a passive, smiling green goddess. The association of youthful, fertile females with good and witch-like infertile females with evil implies that women’s worth and well-being are dependent upon their procreative function. Stereotypical gender tropes that also include female abuse of power and a narrow conception of masculinity merit analysis in order to further progress in recognizing and addressing patterns of gender hegemony in popular Disney films.",
"title": ""
},
{
"docid": "1f3d84321cc2843349c5b6ef43fc8b9a",
"text": "It has long been posited that among emotional stimuli, only negative threatening information modulates early shifts of attention. However, in the last few decades there has been an increase in research showing that attention is also involuntarily oriented toward positive rewarding stimuli such as babies, food, and erotic information. Because reproduction-related stimuli have some of the largest effects among positive stimuli on emotional attention, the present work reviews recent literature and proposes that the cognitive and cerebral mechanisms underlying the involuntarily attentional orientation toward threat-related information are also sensitive to erotic information. More specifically, the recent research suggests that both types of information involuntarily orient attention due to their concern relevance and that the amygdala plays an important role in detecting concern-relevant stimuli, thereby enhancing perceptual processing and influencing emotional attentional processes.",
"title": ""
},
{
"docid": "58f6247a0958bf0087620921c99103b1",
"text": "This paper addresses an information-theoretic aspect of k-means and spectral clustering. First, we revisit the k-means clustering and show that its objective function is approximately derived from the minimum entropy principle when the Renyi's quadratic entropy is used. Then we present a maximum within-clustering association that is derived using a quadratic distance measure in the framework of minimum entropy principle, which is very similar to a class of spectral clustering algorithms that is based on the eigen-decomposition method.",
"title": ""
},
{
"docid": "6377b90960aaaf2e815339a3315d72cd",
"text": "Coronary artery disease (CAD) is one of the most common causes of death worldwide. In the last decade, significant advancements in CAD treatment have been made. The existing treatment is medical, surgical or a combination of both depending on the extent, severity and clinical presentation of CAD. The collaboration between different science disciplines such as biotechnology and tissue engineering has led to the development of novel therapeutic strategies such as stem cells, nanotechnology, robotic surgery and other advancements (3-D printing and drugs). These treatment modalities show promising effects in managing CAD and associated conditions. Research on stem cells focuses on studying the potential for cardiac regeneration, while nanotechnology research investigates nano-drug delivery and percutaneous coronary interventions including stent modifications and coatings. This article aims to provide an update on the literature (in vitro, translational, animal and clinical) related to these novel strategies and to elucidate the rationale behind their potential treatment of CAD. Through the extensive and continued efforts of researchers and clinicians worldwide, these novel strategies hold the promise to be effective alternatives to existing treatment modalities.",
"title": ""
},
{
"docid": "ddffafc22209fc71c6c572dea0ddfca4",
"text": "In the context of an ongoing digital transformation, companies across all industries are confronted with the challenge to exploit IT-induced business opportunities and to simultaneously avert IT-induced business risks. Due to this development, questions about a company’s overall status with regard to its digital transformation become more and more relevant. In recent years, an unclear number of maturity models was established in order to address these kind of questions by assessing a company’s digital maturity. Purpose of this Report is to show the large range of digital maturity models and to evaluate overall potential for approximating a company’s digital transformation status.",
"title": ""
},
{
"docid": "a27660db1d7d2a6724ce5fd8991479f7",
"text": "An electromyographic (EMG) activity pattern for individual muscles in the gait cycle exhibits a great deal of intersubject, intermuscle and context-dependent variability. Here we examined the issue of common underlying patterns by applying factor analysis to the set of EMG records obtained at different walking speeds and gravitational loads. To this end healthy subjects were asked to walk on a treadmill at speeds of 1, 2, 3 and 5 kmh(-1) as well as when 35-95% of the body weight was supported using a harness. We recorded from 12-16 ipsilateral leg and trunk muscles using both surface and intramuscular recording and determined the average, normalized EMG of each record for 10-15 consecutive step cycles. We identified five basic underlying factors or component waveforms that can account for about 90% of the total waveform variance across different muscles during normal gait. Furthermore, while activation patterns of individual muscles could vary dramatically with speed and gravitational load, both the limb kinematics and the basic EMG components displayed only limited changes. Thus, we found a systematic phase shift of all five factors with speed in the same direction as the shift in the onset of the swing phase. This tendency for the factors to be timed according to the lift-off event supports the idea that the origin of the gait cycle generation is the propulsion rather than heel strike event. The basic invariance of the factors with walking speed and with body weight unloading implies that a few oscillating circuits drive the active muscles to produce the locomotion kinematics. A flexible and dynamic distribution of these basic components to the muscles may result from various descending and proprioceptive signals that depend on the kinematic and kinetic demands of the movements.",
"title": ""
},
{
"docid": "002572cf1381257e47f74fc2de9bdc83",
"text": "As information technology becomes integral to the products and services in a growing range of industries, there has been a corresponding surge of interest in understanding how firms can effectively formulate and execute digital business strategies. This fusion of IT within the business environment gives rise to a strategic tension between investing in digital artifacts for long-term value creation and exploiting them for short-term value appropriation. Further, relentless innovation and competitive pressures dictate that firms continually adapt these artifacts to changing market and technological conditions, but sustained profitability requires scalable architectures that can serve a large customer base and stable interfaces that support integration across a diverse ecosystem of complementary offerings. The study of digital business strategy needs new concepts and methods to examine how these forces are managed in pursuit of competitive advantage. We conceptualize the logic of digital business strategy in terms of two constructs: design capital (i.e., the cumulative stock of designs owned or controlled by a firm), and design moves (i.e., the discrete strategic actions that enlarge, reduce, or modify a firm’s stock of designs). We also identify two salient dimensions of design capital, namely option value and technical debt. Using embedded case studies of four firms, we develop a rich conceptual model and testable propositions to lay out a design-based logic of digital business strategy. This logic highlights the interplay between design moves and design capital in the context of digital business strategy and contributes to a growing body of insights that link the design of digital artifacts to competitive strategy and firm-level performance.",
"title": ""
},
{
"docid": "32172b93cb6050c4a93b8323a56ad6b4",
"text": "This work presents a novel method for automatic detection and identification of heart sounds. Homomorphic filtering is used to obtain a smooth envelogram of the phono cardiogram, which enables a robust detection of events of interest in heart sound signal. Sequences of features extracted from the detected events are used as observations of a hidden Markov model. It is demonstrated that the task of detection and identification of the major heart sounds can be learned from unlabelled phono cardiograms by an unsupervised training process and without the assistance of any additional synchronizing channels",
"title": ""
},
{
"docid": "68e4c1122a2339a89cb3873e1013a26e",
"text": "Although there is a voluminous literature on mass media effects on body image concerns of young adult women in the U.S., there has been relatively little theoretically-driven research on processes and effects of social media on young women’s body image and self-perceptions. Yet given the heavy online presence of young adults, particularly women, and their reliance on social media, it is important to appreciate ways that social media can influence perceptions of body image and body image disturbance. Drawing on communication and social psychological theories, the present article articulates a series of ideas and a framework to guide research on social media effects on body image concerns of young adult women. The interactive format and content features of social media, such as the strong peer presence and exchange of a multitude of visual images, suggest that social media, working via negative social comparisons, transportation, and peer normative processes, can significantly influence body image concerns. A model is proposed that emphasizes the impact of predisposing individual vulnerability characteristics, social media uses, and mediating psychological processes on body dissatisfaction and eating disorders. Research-based ideas about social media effects on male body image, intersections with ethnicity, and ameliorative strategies are also discussed.",
"title": ""
},
{
"docid": "1f6f4025fa450b845cefe5da2b842031",
"text": "The Carnegie Mellon In Silico Vox project seeks to move best-quality speech recognition technology from its current software-only form into a range of efficient all-hardware implementations. The central thesis is that, like graphics chips, the application is simply too performance hungry, and too power sensitive, to stay as a large software application. As a first step in this direction, we describe the design and implementation of a fully functional speech-to-text recognizer on a single Xilinx XUP platform. The design recognizes a 1000 word vocabulary, is speaker-independent, recognizes continuous (connected) speech, and is a \"live mode\" engine, wherein recognition can start as soon as speech input appears. To the best of our knowledge, this is the most complex recognizer architecture ever fully committed to a hardware-only form. The implementation is extraordinarily small, and achieves the same accuracy as state-of-the-art software recognizers, while running at a fraction of the clock speed.",
"title": ""
},
{
"docid": "5d9106a06f606cefb3b24fb14c72d41a",
"text": "Most existing relation extraction models make predictions for each entity pair locally and individually, while ignoring implicit global clues available in the knowledge base, sometimes leading to conflicts among local predictions from different entity pairs. In this paper, we propose a joint inference framework that utilizes these global clues to resolve disagreements among local predictions. We exploit two kinds of clues to generate constraints which can capture the implicit type and cardinality requirements of a relation. Experimental results on three datasets, in both English and Chinese, show that our framework outperforms the state-of-theart relation extraction models when such clues are applicable to the datasets. And, we find that the clues learnt automatically from existing knowledge bases perform comparably to those refined by human.",
"title": ""
},
{
"docid": "eec33c75a0ec9b055a857054d05bcf54",
"text": "We introduce a logical process of three distinct phases to begin the evaluation of a new 3D dosimetry array. The array under investigation is a hollow cylinder phantom with diode detectors fixed in a helical shell forming an \"O\" axial detector cross section (ArcCHECK), with comparisons drawn to a previously studied 3D array with diodes fixed in two crossing planes forming an \"X\" axial cross section (Delta⁴). Phase I testing of the ArcCHECK establishes: robust relative calibration (response equalization) of the individual detectors, minor field size dependency of response not present in a 2D predecessor, and uncorrected angular response dependence in the axial plane. Phase II testing reveals vast differences between the two devices when studying fixed-width full circle arcs. These differences are primarily due to arc discretization by the TPS that produces low passing rates for the peripheral detectors of the ArcCHECK, but high passing rates for the Delta⁴. Similar, although less pronounced, effects are seen for the test VMAT plans modeled after the AAPM TG119 report. The very different 3D detector locations of the two devices, along with the knock-on effect of different percent normalization strategies, prove that the analysis results from the devices are distinct and noninterchangeable; they are truly measuring different things. The value of what each device measures, namely their correlation with--or ability to predict--clinically relevant errors in calculation and/or delivery of dose is the subject of future Phase III work.",
"title": ""
},
{
"docid": "870ac1e223cc937e5f4416c9b2ee4a89",
"text": "Effective weed control, using either mechanical or chemical means, relies on knowledge of the crop and weed plant occurrences in the field. This knowledge can be obtained automatically by analyzing images collected in the field. Many existing methods for plant detection in images make the assumption that plant foliage does not overlap. This assumption is often violated, reducing the performance of existing methods. This study overcomes this issue by training a convolutional neural network to create a pixel-wise classification of crops, weeds and soil in RGB images from fields, in order to know the exact position of the plants. This training is based on simulated top-down images of weeds and maize in fields. The results show an pixel accuracy over 94% and a 100% detection rate of both maize and weeds, when tested on real images, while a high intersection over union is kept. The system can handle 2.4 images per second for images with a resolution of 1MPix, when using an Nvidia Titan X GPU.",
"title": ""
},
{
"docid": "c04065ff9cbeba50c0d70e30ab2e8b53",
"text": "A linear model is suggested for the influence of covariates on the intensity function. This approach is less vulnerable than the Cox model to problems of inconsistency when covariates are deleted or the precision of covariate measurements is changed. A method of non-parametric estimation of regression functions is presented. This results in plots that may give information on the change over time in the influence of covariates. A test method and two goodness of fit plots are also given. The approach is illustrated by simulation as well as by data from a clinical trial of treatment of carcinoma of the oropharynx.",
"title": ""
},
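The record above specifies the model only in words (a linear, time-varying influence of covariates on the intensity). One plausible way to write such an additive intensity model is sketched below; the notation is chosen here for illustration and is not quoted from the paper.

```latex
% Illustrative notation (not taken verbatim from the paper).
% Additive intensity for subject i with covariates x_{i1},...,x_{ip}:
\[
  \lambda_i(t) \;=\; Y_i(t)\Bigl[\beta_0(t) + \sum_{j=1}^{p}\beta_j(t)\,x_{ij}\Bigr],
\]
% where Y_i(t) is the at-risk indicator. Time-varying effects are summarised by the
% cumulative regression functions
\[
  B_j(t) \;=\; \int_0^{t}\beta_j(s)\,ds ,
\]
% whose nonparametric estimates, plotted against t, show how the influence of each
% covariate changes over time, as described in the abstract.
```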
{
"docid": "e29d3ab3d3b9bd6cbff1c2a79a6c3070",
"text": "This paper presents a study of passive Dickson based envelope detectors operating in the quadratic small signal regime, specifically intended to be used in RF front end of sensing units of IoE sensor nodes. Critical parameters such as open-circuit voltage sensitivity (OCVS), charge time, input impedance, and output noise are studied and simplified circuit models are proposed to predict the behavior of the detector, resulting in practical design intuitions. There is strong agreement between model predictions, simulation results and measurements of 15 representative test structures that were fabricated in a 130 nm RF CMOS process.",
"title": ""
},
{
"docid": "eae5713c086986c4ef346d85ce06bf3d",
"text": "We describe a study designed to assess properties of a P300 brain-computer interface (BCI). The BCI presents the user with a matrix containing letters and numbers. The user attends to a character to be communicated and the rows and columns of the matrix briefly intensify. Each time the attended character is intensified it serves as a rare event in an oddball sequence and it elicits a P300 response. The BCI works by detecting which character elicited a P300 response. We manipulated the size of the character matrix (either 3 x 3 or 6 x 6) and the duration of the inter stimulus interval (ISI) between intensifications (either 175 or 350 ms). Online accuracy was highest for the 3 x 3 matrix 175-ms ISI condition, while bit rate was highest for the 6 x 6 matrix 175-ms ISI condition. Average accuracy in the best condition for each subject was 88%. P300 amplitude was significantly greater for the attended stimulus and for the 6 x 6 matrix. This work demonstrates that matrix size and ISI are important variables to consider when optimizing a BCI system for individual users and that a P300-BCI can be used for effective communication.",
"title": ""
}
] |
scidocsrr
|
d5101672c57631d725493d0793715aee
|
Universal Tuning System for Series-Resonant Induction Heating Applications
|
[
{
"docid": "5cd9031a58457c0cb5fb2d49f1da40f6",
"text": "Induction heating (IH) technology is nowadays the heating technology of choice in many industrial, domestic, and medical applications due to its advantages regarding efficiency, fast heating, safety, cleanness, and accurate control. Advances in key technologies, i.e., power electronics, control techniques, and magnetic component design, have allowed the development of highly reliable and cost-effective systems, making this technology readily available and ubiquitous. This paper reviews IH technology summarizing the main milestones in its development and analyzing the current state of art of IH systems in industrial, domestic, and medical applications, paying special attention to the key enabling technologies involved. Finally, an overview of future research trends and challenges is given, highlighting the promising future of IH technology.",
"title": ""
},
{
"docid": "9cd18dd8709ae798c787ec44128bf8cd",
"text": "This paper presents a cascaded coil flux control based on a Current Source Parallel Resonant Push-Pull Inverter (CSPRPI) for Induction Heating (IH) applications. The most important problems associated with current source parallel resonant inverters are start-up problems and the variable response of IH systems under load variations. This paper proposes a simple cascaded control method to increase an IH system’s robustness to load variations. The proposed IH has been analyzed in both the steady state and the transient state. Based on this method, the resonant frequency is tracked using Phase Locked Loop (PLL) circuits using a Multiplier Phase Detector (MPD) to achieve ZVS under the transient condition. A laboratory prototype was built with an operating frequency of 57-59 kHz and a rated power of 300 W. Simulation and experimental results verify the validity of the proposed power control method and the PLL dynamics.",
"title": ""
}
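The abstract above centres on tracking the resonant frequency with a PLL so that zero-voltage switching is preserved as the load changes. The snippet below is a schematic, discrete-time illustration of that idea (a PI loop driving the measured voltage/current phase error to zero); the gains, frequency limits, and the measure_phase_error() hook are assumptions for illustration, not values or structures from the paper.

```python
# Schematic PLL-style tuning loop for a resonant induction-heating stage.
# Gains, limits, and the measure_phase_error() hook are illustrative assumptions.
def track_resonance(measure_phase_error, f_init_hz=57_000.0,
                    kp=40.0, ki=8.0, f_min=50_000.0, f_max=65_000.0, steps=1000):
    """Adjust the switching frequency until the voltage/current phase error vanishes."""
    integral = 0.0
    f = f_init_hz
    for _ in range(steps):
        err = measure_phase_error(f)              # signed phase error at frequency f
        integral += err
        f = f_init_hz + kp * err + ki * integral  # PI correction of the frequency command
        f = min(max(f, f_min), f_max)             # stay inside a safe operating band
    return f
```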
] |
[
{
"docid": "83d42bb6ce4d4bf73f5ab551d0b78000",
"text": "An integrated 19-GHz Colpitts oscillator for a 77-GHz FMCW automotive radar frontend application is presented. The Colpitts oscillator has been realized in a fully differential circuit architecture. The VCO's 19 GHz output signal is buffered with an emitter follower stage and used as a LO signal source for a 77-GHz radar transceiver architecture. The LO frequency is quadrupled and amplified to drive the switching quad of a Gilbert-type mixer. As the quadrupler-mixer chip is required to describe the radar-sensor it is introduced, but the main focus of this paper aims the design of the sensor's LO source. In addition, the VCO-chip provides a divide-by-8 stage. The divider is either used for on-wafer measurements or later on in a PLL application.",
"title": ""
},
{
"docid": "c08e9731b9a1135b7fb52548c5c6f77e",
"text": "Many geometry processing applications, such as morphing, shape blending, transfer of texture or material properties, and fitting template meshes to scan data, require a bijective mapping between two or more models. This mapping, or cross-parameterization, typically needs to preserve the shape and features of the parameterized models, mapping legs to legs, ears to ears, and so on. Most of the applications also require the models to be represented by compatible meshes, i.e. meshes with identical connectivity, based on the cross-parameterization. In this paper we introduce novel methods for shape preserving cross-parameterization and compatible remeshing. Our cross-parameterization method computes a low-distortion bijective mapping between models that satisfies user prescribed constraints. Using this mapping, the remeshing algorithm preserves the user-defined feature vertex correspondence and the shape correlation between the models. The remeshing algorithm generates output meshes with significantly fewer elements compared to previous techniques, while accurately approximating the input geometry. As demonstrated by the examples, the compatible meshes we construct are ideally suitable for morphing and other geometry processing applications.",
"title": ""
},
{
"docid": "d6f52736d78a5b860bdb364f64e4523c",
"text": "Deep convolutional neural networks (CNN) have recently been shown to generate promising results for aesthetics assessment. However, the performance of these deep CNN methods is often compromised by the constraint that the neural network only takes the fixed-size input. To accommodate this requirement, input images need to be transformed via cropping, warping, or padding, which often alter image composition, reduce image resolution, or cause image distortion. Thus the aesthetics of the original images is impaired because of potential loss of fine grained details and holistic image layout. However, such fine grained details and holistic image layout is critical for evaluating an images aesthetics. In this paper, we present an Adaptive Layout-Aware Multi-Patch Convolutional Neural Network (A-Lamp CNN) architecture for photo aesthetic assessment. This novel scheme is able to accept arbitrary sized images, and learn from both fined grained details and holistic image layout simultaneously. To enable training on these hybrid inputs, we extend the method by developing a dedicated double-subnet neural network structure, i.e. a Multi-Patch subnet and a Layout-Aware subnet. We further construct an aggregation layer to effectively combine the hybrid features from these two subnets. Extensive experiments on the large-scale aesthetics assessment benchmark (AVA) demonstrate significant performance improvement over the state-of-the-art in photo aesthetic assessment.",
"title": ""
},
{
"docid": "73e2738994b78d54d8fbad5df4622451",
"text": "Although online consumer reviews (OCR) have helped consumers to know about the strengths and weaknesses of different products and find the ones that best suit their needs, they introduce a challenge for businesses to analyze them because of their volume, variety, velocity and veracity. This research investigates the predictors of readership and helpfulness of OCR using a sentiment mining approach for big data analytics. Our findings show that reviews with higher levels of positive sentiment in the title receive more readerships. Sentimental reviews with neutral polarity in the text are also perceived to be more helpful. The length and longevity of a review positively influence both its readership and helpfulness. Because the current methods used for sorting OCR may bias both their readership and helpfulness, the approach used in this study can be adopted by online vendors to develop scalable automated systems for sorting and classification of big OCR data which will benefit both vendors and consumers.",
"title": ""
},
{
"docid": "4abae313432bbc338b096275bf3d7816",
"text": "Phase change materials (PCM) take advantage of latent heat that can be stored or released from a material over a narrow temperature range. PCM possesses the ability to change their state with a certain temperature range. These materials absorb energy during the heating process as phase change takes place and release energy to the environment in the phase change range during a reverse cooling process. Insulation effect reached by the PCM depends on temperature and time. Recently, the incorporation of PCM in textiles by coating or encapsulation to make thermo-regulated smart textiles has grown interest to the researcher. Therefore, an attempt has been taken to review the working principle of PCM and their applications for smart temperature regulated textiles. Different types of phase change materials are introduced. This is followed by an account of incorporation of PCM in the textile structure are summarized. Concept of thermal comfort, clothing for cold environment, phase change materials and clothing comfort are discussed in this review paper. Some recent applications of PCM incorporated textiles are stated. Finally, the market of PCM in textiles field and some challenges are mentioned in this review paper. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "290ded425fe91bb0898a0e2fd815d575",
"text": "We introduce the concept of the point cloud database, a new kind of database system aimed primarily towards scientific applications. Many scientific observations, experiments, feature extraction algorithms and large-scale simulations produce enormous amounts of data that are better represented as sparse (but often highly-clustered) points in a k-dimensional (k ≲ 10) metric space than on a multi-dimensional grid. Dimensionality reduction techniques, such as principal components, are also widely-used to project high dimensional data into similarly low dimensional spaces. Analysis techniques developed to work on multi-dimensional data points are usually implemented as in-memory algorithms and need to be modified to work in distributed cluster environments and on large amounts of disk-resident data. We conclude that the relational model, with certain additions, is appropriate for point clouds, but point cloud databases must also provide unique set of spatial search and proximity join operators, indexing schemes, and query language constructs that make them a distinct class of database systems.",
"title": ""
},
{
"docid": "7d00770a64f25b728f149939fd2c1e7c",
"text": "Replicated databases that use quorum-consensus algorithms to perform majority voting are prone to deadlocks. Due to the P-out-of-Q nature of quorum requests, deadlocks that arise are generalized deadlocks and are hard to detect. We present an efficient distributed algorithm to detect generalized deadlocks in replicated databases. The algorithm performs reduction of a distributed waitfor-graph (WFG) to determine the existence of a deadlock. if sufficient information to decide the reducibility of a node is not available at that node, the algorithm attempts reduction later in a lazy manner. We prove the correctness of the algorithm. The algorithm has a message complexity of 2n messages and a worst-case time complexity of 2d + 2 hops, where c is the number of edges and d is the diameter of the WFG. The algorithm is shown to perform significantly better in both time and message complexity than the best known existing algorithms. We conjecture that this is an optimal algorithm, in time and message complexity, to detect generalized deadlocks if no transaction has complete knowledge of the topology of the WFG or the system and the deadlock detection is to be carried out in a distributed manner.",
"title": ""
},
{
"docid": "0e6d764934629e4ecfb85b5d49696b79",
"text": "Traffic in large cities is one of the biggest problems that can lead to excess utilization of fuel by motor vehicles, accidents, and the waste of time of citizens. To have an effective and efficient city management system, it is necessary to intelligently control all the traffic light signals. For this reason, many researchers have tried to present optimal algorithms for traffic signal control. Some common methods exist for the control of traffic light signal, including the preset cycle time controller and vehicle-actuated controller. Results obtained from previous works indicate that these traffic light signal controllers do not exhibit an effective performance at moments of traffic peak. So to resolve this dilemma at such moments, traffic cops are employed. The application of fuzzy logic in traffic signal controllers has been seriously considered for several decades and many research works have been carried out in this regard. The fuzzy signal controllers perform the optimization task by minimizing the waiting time of the vehicles and maximizing the traffic capacity. A new fuzzy logic based algorithm is proposed in this article, which not only can reduce the waiting time and the number of vehicles behind a traffic light and at an intersection, but can consider the traffic situations at adjacent intersections as well. Finally, a comparison is made between the designed fuzzy controller and the preset cycle time controller.",
"title": ""
},
{
"docid": "544c1608c03535121b8274ff51343e38",
"text": "As multilevel models (MLMs) are useful in understanding relationships existent in hierarchical data structures, these models have started to be used more frequently in research developed in social and health sciences. In order to draw meaningful conclusions from MLMs, researchers need to make sure that the model fits the data. Model fit, and thus, ultimately model selection can be assessed by examining changes in several fit indices across nested and/or nonnested models [e.g., -2 log likelihood (-2LL), Akaike Information Criterion (AIC), and Schwarz’s Bayesian Information Criterion (BIC)]. In addition, the difference in pseudo-R 2 is often used to examine the practical significance between two nested models. Considering the importance of using all of these measures when determining model selection, researchers who use analyze multilevel models would benefit from being able to easily assess model fit across estimated models. Whereas SAS PROC MIXED produces the -2LL, AIC, and BIC, it does not provide the actual change in these fit indices or the change in pseudo-R 2 between different nested and non-nested models. In order to make this information more attainable, Bardenheier (2009) developed a macro that allowed researchers using PROC MIXED to obtain the test statistic for the difference in -2LL along with the p-value of the Likelihood Ratio Test (LRT). As an extension of Bardenheier’s work, this paper provides a comprehensive SAS macro that incorporates changes in model fit statistics (-2LL, AIC and BIC) as well as change in pseudo-R 2 . By utilizing data from PROC MIXED ODS tables, the macro produces a comprehensive table of changes in model fit measures. Thus, this expanded macro allows SAS users to examine model fit in both nested and non-nested models and both in terms of statistical and practical significance. This paper provides a review of the different methods used to assess model fit in multilevel analysis, the macro programming language, an executed example of the macro, and a copy of the complete macro.",
"title": ""
},
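The macro described above works inside SAS; purely to make the underlying arithmetic concrete, the Python sketch below shows the quantities it reports for a pair of nested models: the likelihood-ratio test on the change in -2LL plus the changes in AIC and BIC. Function and argument names are chosen here for illustration.

```python
# Python illustration (not the SAS macro itself) of nested-model fit comparison:
# likelihood-ratio test on the -2LL difference plus AIC/BIC changes.
from scipy.stats import chi2

def compare_nested(neg2ll_reduced, neg2ll_full, nparams_reduced, nparams_full,
                   aic_reduced, aic_full, bic_reduced, bic_full):
    """Return a fit-change summary for two nested multilevel models."""
    lrt_stat = neg2ll_reduced - neg2ll_full        # improvement in deviance
    lrt_df = nparams_full - nparams_reduced        # extra parameters in the full model
    return {
        "delta_-2LL": lrt_stat,
        "LRT_p": chi2.sf(lrt_stat, lrt_df),        # upper-tail chi-square probability
        "delta_AIC": aic_full - aic_reduced,       # negative values favour the full model
        "delta_BIC": bic_full - bic_reduced,
    }
```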
{
"docid": "7f4701d8c9f651c3a551a91d19fd28d9",
"text": "Road extraction from aerial images has been a hot research topic in the field of remote sensing image analysis. In this letter, a semantic segmentation neural network, which combines the strengths of residual learning and U-Net, is proposed for road area extraction. The network is built with residual units and has similar architecture to that of U-Net. The benefits of this model are twofold: first, residual units ease training of deep networks. Second, the rich skip connections within the network could facilitate information propagation, allowing us to design networks with fewer parameters, however, better performance. We test our network on a public road data set and compare it with U-Net and other two state-of-the-art deep-learning-based road extraction methods. The proposed approach outperforms all the comparing methods, which demonstrates its superiority over recently developed state of the arts.",
"title": ""
},
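As a concrete illustration of the kind of building block the abstract above combines with a U-Net layout, the PyTorch sketch below shows a pre-activation residual unit with a projection shortcut. Channel counts, strides, and the exact placement of normalisation are illustrative choices, not the paper's reported configuration.

```python
# Minimal PyTorch residual unit usable inside a U-Net-style encoder/decoder.
# The configuration shown is illustrative, not the paper's exact design.
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.body = nn.Sequential(                       # pre-activation residual body
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
        )
        # 1x1 projection keeps the skip connection shape-compatible when needed
        self.skip = (nn.Identity() if in_ch == out_ch and stride == 1
                     else nn.Conv2d(in_ch, out_ch, 1, stride=stride))

    def forward(self, x):
        return self.body(x) + self.skip(x)

# Example: one encoder stage over a 256x256 RGB aerial tile.
# y = ResidualUnit(3, 64, stride=2)(torch.randn(1, 3, 256, 256))  # -> (1, 64, 128, 128)
```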
{
"docid": "d5284538412222101f084fee2dc1acc4",
"text": "The hand is an integral component of the human body, with an incredible spectrum of functionality. In addition to possessing gross and fine motor capabilities essential for physical survival, the hand is fundamental to social conventions, enabling greeting, grooming, artistic expression and syntactical communication. The loss of one or both hands is, thus, a devastating experience, requiring significant psychological support and physical rehabilitation. The majority of hand amputations occur in working-age males, most commonly as a result of work-related trauma or as casualties sustained during combat. For millennia, humans have used state-of-the-art technology to design clever devices to facilitate the reintegration of hand amputees into society. The present article provides a historical overview of the progress in replacing a missing hand, from early iron hands intended primarily for use in battle, to today's standard body-powered and myoelectric prostheses, to revolutionary advancements in the restoration of sensorimotor control with targeted reinnervation and hand transplantation.",
"title": ""
},
{
"docid": "fb2b4ebce6a31accb3b5407f24ad64ba",
"text": "The number of multi-robot systems deployed in field applications has risen dramatically over the years. Nevertheless, supervising and operating multiple robots at once is a difficult task for a single operator to execute. In this paper we propose a novel approach for utilizing advising automated agents when assisting an operator to better manage a team of multiple robots in complex environments. We introduce the Myopic Advice Optimization (MYAO) Problem and exemplify its implementation using an agent for the Search And Rescue (SAR) task. Our intelligent advising agent was evaluated through extensive field trials, with 44 non-expert human operators and 10 low-cost mobile robots, in simulation and physical deployment, and showed a significant improvement in both team performance and the operator’s satisfaction.",
"title": ""
},
{
"docid": "89a1e532c8efe66a65a60a8635e37593",
"text": "This paper presents an optimization based approach for cooperative multiple UAV attack missions. The objective is to determine the minimum resources required to coordinately attack a target at a given set of directions. We restrict the paths of the munitions to direct Dubins paths to satisfy field of view constraints and to avoid certain undesirable paths. The proposed algorithm derives the feasible positions and headings for each attack angle, and determines intersection regions corresponding to any two attack angles. We pose a set cover problem, the solution of which gives the minimum number of UAVs required to accomplish the mission.",
"title": ""
},
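The abstract above reduces the resource question to a set-cover problem over intersection regions of attack angles. The paper may solve that cover exactly; as a simple, standard baseline, the sketch below applies the greedy approximation, with the data structures chosen here for illustration.

```python
# Greedy set-cover sketch: repeatedly pick the region covering the most uncovered angles.
# Shown only as a standard approximation; the paper may compute the optimum exactly.
def greedy_cover(required_angles, candidate_regions):
    """candidate_regions: dict mapping region id -> set of attack angles it can serve."""
    uncovered = set(required_angles)
    chosen = []
    while uncovered:
        best = max(candidate_regions, key=lambda r: len(candidate_regions[r] & uncovered))
        gained = candidate_regions[best] & uncovered
        if not gained:                              # remaining angles cannot be covered
            raise ValueError("infeasible: %s cannot be covered" % sorted(uncovered))
        chosen.append(best)
        uncovered -= gained
    return chosen                                   # one entry per munition/UAV required

# Example: three required angles covered by two of three candidate regions.
# print(greedy_cover({0, 90, 180}, {"A": {0, 90}, "B": {90, 180}, "C": {180}}))
```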
{
"docid": "0879399fcb38c103a0e574d6d9010215",
"text": "We present a content-based method for recommending citations in an academic paper draft. We embed a given query document into a vector space, then use its nearest neighbors as candidates, and rerank the candidates using a discriminative model trained to distinguish between observed and unobserved citations. Unlike previous work, our method does not require metadata such as author names which can be missing, e.g., during the peer review process. Without using metadata, our method outperforms the best reported results on PubMed and DBLP datasets with relative improvements of over 18% in F1@20 and over 22% in MRR. We show empirically that, although adding metadata improves the performance on standard metrics, it favors selfcitations which are less useful in a citation recommendation setup. We release an online portal for citation recommendation based on our method,1 and a new dataset OpenCorpus of 7 million research articles to facilitate future research on this task.",
"title": ""
},
{
"docid": "2a31c9025e78b5a895d6bb64a6df3578",
"text": "Galhardo L, Oliveira RF. Psychological Stress and Welfare in Fish. Annu Rev Biomed Sci 2009;11:1-20. The ability to respond to stress is vital to the survival of any living organism, though sustained reactions can become detrimental to the health and welfare of animals. Stress responses of vertebrates are known through several studies in their physiological, behavioural and psychological components, under acute and chronic contexts. In fish, the physiological and behavioural aspects of stress are considerably well known phenomena and show striking similarities to those of other vertebrates. However, the psychological component is not well known. Some authors deny mental experiences to fish on the basis of their lack of neocortex. Nevertheless, recent studies have shown neuroendocrine, cognitive and emotional processes in fish that are not only equivalent to other vertebrates, but also allow inferring some forms of mental representation. The integration of psychological elements in fish stress physiology is insufficiently studied, but, as discussed in this article, there is already indirect evidence to admit that some form of stimuli appraisal can take place in fish. This fact has profound implications on the regulation of the stress response, as well as on fish welfare and its management. ©by São Paulo State University ISSN 1806-8774",
"title": ""
},
{
"docid": "68e137f9c722f833a7fdbc8032fc58be",
"text": "BACKGROUND\nChronic Obstructive Pulmonary Disease (COPD) has been a leading cause of morbidity and mortality worldwide, over the years. In 1995, the implementation of a respiratory function survey seemed to be an adequate way to draw attention to neglected respiratory symptoms and increase the awareness of spirometry surveys. By 2002 there were new consensual guidelines in place and the awareness that prevalence of COPD depended on the criteria used for airway obstruction definition. The purpose of this study is to revisit the two studies and to turn public some of the data and respective methodologies.\n\n\nMETHODS\nFrom Pneumobil study database of 12,684 subjects, only the individuals with 40+ years old (n = 9.061) were selected. The 2002 study included a randomized representative sample of 1,384 individuals with 35-69 years old.\n\n\nRESULTS\nThe prevalence of COPD was 8.96% in Pneumobil and 5.34% in the 2002 study. In both studies, presence of COPD was greater in males and there was a positive association between presence of COPD and older age groups. Smokers and ex-smokers showed a higher proportion of cases of COPD.\n\n\nCONCLUSIONS\nPrevalence in Portugal is lower than in other European countries. This may be related to lower smokers' prevalence. Globally, the most important risk factors associated with COPD were age over 60 years, male gender and smoking exposure. All aspects and limitations regarding different recruitment methodologies and different criteria for defining COPD cases highlight the need of a standardized method to evaluate COPD prevalence and associated risks factors, whose results can be compared across countries, as it is the case of BOLD project.",
"title": ""
},
{
"docid": "208c855d4ff1f756147d1a019dec99e0",
"text": "When analyzing data, outlying observations cause problems because they may strongly influence the result. Robust statistics aims at detecting the outliers by searching for the model fitted by the majority of the data. We present an overview of several robust methods and outlier detection tools. We discuss robust procedures for univariate, low-dimensional, and high-dimensional data such as estimation of location and scatter, linear regression, principal component analysis, and classification. C © 2011 John Wiley & Sons, Inc. WIREs Data Mining Knowl Discov 2011 1 73–79 DOI: 10.1002/widm.2",
"title": ""
},
{
"docid": "0d8f504eb7518f32c8c99e0ee9448389",
"text": "Contemporary MOSFET mathematical models contain many parameters, most of which have little or no meaning to circuit designers. Designers therefore, continue to use obsolete models -such as the MOSFET square law -for circuit design calculations. However, low-voltage, lowpower systems development demands more advanced circuit design techniques. In this paper I present a brief literature review of MOSFET modeling, which has culminated in the development of the Advanced Compact MOSFET model. Next, I discuss the key ideas and equations of the ACM model, a physically based model with few parameters and equations. Additionally, I show that the ACM model can aid designers in small and large signal circuit analysis in three major respects. First, the ACM model is continuous throughout all regions of operation. Second, terms in ACM model equations appear explicitly in equations that specify circuit performance. Third, the ACM model can aid designers in neglecting MOSFET small signal components that have little influence on circuit performance. Lastly, I conclude with a brief discussion of transconductor linearity, and conclude by mentioning some promising areas of research. The Advanced Compact MOSFET Model and its Application to Inversion Coefficient Based Circuit Design Sean T. Nicolson Copyright © 2002 3",
"title": ""
},
{
"docid": "4474a6b36b2da68b9ad2da4c782049e4",
"text": "A novel stochastic adaptation of the recurrent reinforcement learning (RRL) methodology is applied to daily, weekly, and monthly stock index data, and compared to results obtained elsewhere using genetic programming (GP). The data sets used have been a considered a challenging test for algorithmic trading. It is demonstrated that RRL can reliably outperform buy-and-hold for the higher frequency data, in contrast to GP which performed best for monthly data.",
"title": ""
}
] |
scidocsrr
|
25e842c602026aa56d6cc25fb005f9ad
|
Automatic Liver Segmentation Based on Shape Constraints and Deformable Graph Cut in CT Images
|
[
{
"docid": "048f553914e3d7419918f6862a6eacd6",
"text": "Automated retinal layer segmentation of optical coherence tomography (OCT) images has been successful for normal eyes but becomes challenging for eyes with retinal diseases if the retinal morphology experiences critical changes. We propose a method to automatically segment the retinal layers in 3-D OCT data with serous retinal pigment epithelial detachments (PED), which is a prominent feature of many chorioretinal disease processes. The proposed framework consists of the following steps: fast denoising and B-scan alignment, multi-resolution graph search based surface detection, PED region detection and surface correction above the PED region. The proposed technique was evaluated on a dataset with OCT images from 20 subjects diagnosed with PED. The experimental results showed the following. 1) The overall mean unsigned border positioning error for layer segmentation is 7.87±3.36 μm, and is comparable to the mean inter-observer variability ( 7.81±2.56 μm). 2) The true positive volume fraction (TPVF), false positive volume fraction (FPVF) and positive predicative value (PPV) for PED volume segmentation are 87.1%, 0.37%, and 81.2%, respectively. 3) The average running time is 220 s for OCT data of 512 × 64 × 480 voxels.",
"title": ""
},
{
"docid": "5325778a57d0807e9b149108ea9e57d8",
"text": "This paper presents a comparison study between 10 automatic and six interactive methods for liver segmentation from contrast-enhanced CT images. It is based on results from the \"MICCAI 2007 Grand Challenge\" workshop, where 16 teams evaluated their algorithms on a common database. A collection of 20 clinical images with reference segmentations was provided to train and tune algorithms in advance. Participants were also allowed to use additional proprietary training data for that purpose. All teams then had to apply their methods to 10 test datasets and submit the obtained results. Employed algorithms include statistical shape models, atlas registration, level-sets, graph-cuts and rule-based systems. All results were compared to reference segmentations five error measures that highlight different aspects of segmentation accuracy. All measures were combined according to a specific scoring system relating the obtained values to human expert variability. In general, interactive methods reached higher average scores than automatic approaches and featured a better consistency of segmentation quality. However, the best automatic methods (mainly based on statistical shape models with some additional free deformation) could compete well on the majority of test images. The study provides an insight in performance of different segmentation approaches under real-world conditions and highlights achievements and limitations of current image analysis techniques.",
"title": ""
}
] |
[
{
"docid": "d2e19aeb2969991ec18a71c877775c44",
"text": "OBJECTIVES\nTo evaluate persistence and adherence to mirabegron and antimuscarinics in Japan using data from two administrative databases.\n\n\nMETHODS\nThe present retrospective study evaluated insurance claims for employees and dependents aged ≤75 years, and pharmacy claims for outpatients. From October 2012 to September 2014, new users of mirabegron or five individual antimuscarinics indicated for overactive bladder in Japan (fesoterodine, imidafenacin, propiverine, solifenacin and tolterodine) were identified and followed for 1 year. Persistence with mirabegron and antimuscarinics were evaluated using Kaplan-Meier methods. Any associations between baseline characteristics (age, sex and previous medication use) and persistence were explored. Adherence was assessed using the medication possession ratio.\n\n\nRESULTS\nIn total, 3970 and 16 648 patients were included from the insurance and pharmacy claims databases, respectively. Mirabegron treatment was associated with longer median persistence compared with antimuscarinics (insurance claims: 44 [95% confidence intervals 37-56] vs 21 [14-28] to 30 [30-33] days, pharmacy claims: 105 [96-113] vs 62 [56-77] to 84 [77-86] days). The results were consistent when patients were stratified by age, sex and previous medication. Persistence rate at 1 year was higher for mirabegron (insurance claims: 14.0% [11.5-16.8%] vs 5.4% [4.1-7.0%] to 9.1% [5.3-14.2%], pharmacy claims: 25.9% [24.6-27.3%] vs 16.3% [14.0-18.6%] to 21.3% [20.2-22.4%]). Compared with each antimuscarinic, a higher proportion of mirabegron-treated patients had medication possession ratios ≥0.8.\n\n\nCONCLUSIONS\nThis large nationwide Japanese study shows that persistence and adherence are greater with mirabegron compared with five antimuscarinics.",
"title": ""
},
{
"docid": "ac08ee44179751a99db0e95fe3b0ac18",
"text": "In this paper we tackle the problem of generating natural route descriptions on the basis of input obtained from a commercially available way-finding system. Our framework and architecture incorporates the use of general principles drawn from the domain of natural language generation. Through examples we demonstrate that it is possible to bridge the gap between underlying data representations and natural sounding linguistic descriptions. The work presented contributes both to the area of natural language generation and to the improvement of way-finding system interfaces.",
"title": ""
},
{
"docid": "8140838d7ef17b3d6f6c042442de0f73",
"text": "The two vascular systems of our body are the blood and lymphatic vasculature. Our understanding of the cellular and molecular processes controlling the development of the lymphatic vasculature has progressed significantly in the last decade. In mammals, this is a stepwise process that starts in the embryonic veins, where lymphatic EC (LEC) progenitors are initially specified. The differentiation and maturation of these progenitors continues as they bud from the veins to produce scattered primitive lymph sacs, from which most of the lymphatic vasculature is derived. Here, we summarize our current understanding of the key steps leading to the formation of a functional lymphatic vasculature.",
"title": ""
},
{
"docid": "c9ea42872164e65424498c6a5c5e0c6d",
"text": "Inverse problems appear in many applications, such as image deblurring and inpainting. The common approach to address them is to design a specific algorithm for each problem. The Plug-and-Play (P&P) framework, which has been recently introduced, allows solving general inverse problems by leveraging the impressive capabilities of existing denoising algorithms. While this fresh strategy has found many applications, a burdensome parameter tuning is often required in order to obtain high-quality results. In this paper, we propose an alternative method for solving inverse problems using off-the-shelf denoisers, which requires less parameter tuning. First, we transform a typical cost function, composed of fidelity and prior terms, into a closely related, novel optimization problem. Then, we propose an efficient minimization scheme with a P&P property, i.e., the prior term is handled solely by a denoising operation. Finally, we present an automatic tuning mechanism to set the method’s parameters. We provide a theoretical analysis of the method and empirically demonstrate its competitiveness with task-specific techniques and the P&P approach for image inpainting and deblurring.",
"title": ""
},
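To make the Plug-and-Play idea in the abstract above concrete, the sketch below alternates a regularised least-squares data step with a prior step handled entirely by an arbitrary denoiser. The half-quadratic-splitting form, the parameters, and the dense linear solve are generic assumptions for illustration; they are not the specific minimisation scheme or automatic tuning mechanism proposed in the paper.

```python
# Schematic Plug-and-Play loop: data fidelity via regularised least squares, prior via
# any off-the-shelf denoiser. Splitting form and parameters are generic assumptions.
import numpy as np

def pnp_restore(y, H, denoise, beta=1.0, sigma_denoise=0.05, iters=30):
    """y: observation vector, H: forward-operator matrix, denoise(img, sigma): any denoiser."""
    n = H.shape[1]
    x = H.T @ y                                     # crude initialisation
    v = x.copy()
    A = H.T @ H + beta * np.eye(n)                  # fixed system matrix for the data step
    for _ in range(iters):
        x = np.linalg.solve(A, H.T @ y + beta * v)  # argmin ||Hx - y||^2 + beta ||x - v||^2
        v = denoise(x, sigma_denoise)               # prior handled solely by the denoiser
    return x

# Usage sketch: x_hat = pnp_restore(y, H, denoise=lambda img, s: some_denoiser(img, s))
```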
{
"docid": "5f8956868216a6c85fadfaba6aed1413",
"text": "Recent years have witnessed an incredibly increasing interest in the topic of incremental learning. Unlike conventional machine learning situations, data flow targeted by incremental learning becomes available continuously over time. Accordingly, it is desirable to be able to abandon the traditional assumption of the availability of representative training data during the training period to develop decision boundaries. Under scenarios of continuous data flow, the challenge is how to transform the vast amount of stream raw data into information and knowledge representation, and accumulate experience over time to support future decision-making process. In this paper, we propose a general adaptive incremental learning framework named ADAIN that is capable of learning from continuous raw data, accumulating experience over time, and using such knowledge to improve future learning and prediction performance. Detailed system level architecture and design strategies are presented in this paper. Simulation results over several real-world data sets are used to validate the effectiveness of this method.",
"title": ""
},
{
"docid": "89d4143e7845d191433882f3fa5aaa26",
"text": "There is a large variety of objects and appliances in human environments, such as stoves, coffee dispensers, juice extractors, and so on. It is challenging for a roboticist to program a robot for each of these object types and for each of their instantiations. In this work, we present a novel approach to manipulation planning based on the idea that many household objects share similarly-operated object parts. We formulate the manipulation planning as a structured prediction problem and design a deep learning model that can handle large noise in the manipulation demonstrations and learns features from three different modalities: point-clouds, language and trajectory. In order to collect a large number of manipulation demonstrations for different objects, we developed a new crowd-sourcing platform called Robobarista. We test our model on our dataset consisting of 116 objects with 249 parts along with 250 language instructions, for which there are 1225 crowd-sourced manipulation demonstrations. We further show that our robot can even manipulate objects it has never seen before. Keywords— Robotics and Learning, Crowd-sourcing, Manipulation",
"title": ""
},
{
"docid": "e2988860c1e8b4aebd6c288d37d1ca4e",
"text": "Numerous studies have shown that datacenter computers rarely operate at full utilization, leading to a number of proposals for creating servers that are energy proportional with respect to the computation that they are performing.\n In this paper, we show that as servers themselves become more energy proportional, the datacenter network can become a significant fraction (up to 50%) of cluster power. In this paper we propose several ways to design a high-performance datacenter network whose power consumption is more proportional to the amount of traffic it is moving -- that is, we propose energy proportional datacenter networks.\n We first show that a flattened butterfly topology itself is inherently more power efficient than the other commonly proposed topology for high-performance datacenter networks. We then exploit the characteristics of modern plesiochronous links to adjust their power and performance envelopes dynamically. Using a network simulator, driven by both synthetic workloads and production datacenter traces, we characterize and understand design tradeoffs, and demonstrate an 85% reduction in power --- which approaches the ideal energy-proportionality of the network.\n Our results also demonstrate two challenges for the designers of future network switches: 1) We show that there is a significant power advantage to having independent control of each unidirectional channel comprising a network link, since many traffic patterns show very asymmetric use, and 2) system designers should work to optimize the high-speed channel designs to be more energy efficient by choosing optimal data rate and equalization technology. Given these assumptions, we demonstrate that energy proportional datacenter communication is indeed possible.",
"title": ""
},
{
"docid": "232b960cc16aa558538858aefd0a7651",
"text": "This paper presents a video-based solution for real time vehicle detection and counting system, using a surveillance camera mounted on a relatively high place to acquire the traffic video stream.The two main methods applied in this system are: the adaptive background estimation and the Gaussian shadow elimination. The former allows a robust moving detection especially in complex scenes. The latter is based on color space HSV, which is able to deal with different size and intensity shadows. After these two operations, it obtains an image with moving vehicle extracted, and then operation counting is effected by a method called virtual detector.",
"title": ""
},
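As an illustration of the pipeline stages named in the abstract above, the OpenCV sketch below uses MOG2 as a stand-in for the adaptive background model, applies an HSV-based shadow test against the estimated background, and counts crossings on a virtual detector line. The thresholds, line position, and the choice of MOG2 are assumptions, not the paper's calibrated method.

```python
# OpenCV sketch of the stages named above; MOG2, thresholds, and the detector line
# are illustrative stand-ins rather than the paper's calibrated components.
import cv2
import numpy as np

bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def moving_vehicle_mask(frame_bgr, alpha=0.5, beta=0.95, tau_s=60, tau_h=30):
    fg = bg.apply(frame_bgr)                               # adaptive background subtraction
    hsv_f = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.int32)
    hsv_b = cv2.cvtColor(bg.getBackgroundImage(), cv2.COLOR_BGR2HSV).astype(np.int32)
    v_ratio = (hsv_f[:, :, 2] + 1) / (hsv_b[:, :, 2] + 1)  # darker than background, not black
    shadow = ((v_ratio >= alpha) & (v_ratio <= beta) &
              (np.abs(hsv_f[:, :, 1] - hsv_b[:, :, 1]) <= tau_s) &
              (np.abs(hsv_f[:, :, 0] - hsv_b[:, :, 0]) <= tau_h))
    fg[shadow] = 0                                         # drop shadow pixels from the mask
    return fg

def crossed_virtual_detector(mask, line_y, min_pixels=20):
    """Flag a crossing when enough foreground pixels touch the virtual detector line."""
    return int(np.count_nonzero(mask[line_y]) > min_pixels)
```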
{
"docid": "9c28badf1e53e69452c1d7aad2a87fab",
"text": "While an al dente character of 5G is yet to emerge, network densification, miscellany of node types, split of control and data plane, network virtualization, heavy and localized cache, infrastructure sharing, concurrent operation at multiple frequency bands, simultaneous use of different medium access control and physical layers, and flexible spectrum allocations can be envisioned as some of the potential ingredients of 5G. It is not difficult to prognosticate that with such a conglomeration of technologies, the complexity of operation and OPEX can become the biggest challenge in 5G. To cope with similar challenges in the context of 3G and 4G networks, recently, self-organizing networks, or SONs, have been researched extensively. However, the ambitious quality of experience requirements and emerging multifarious vision of 5G, and the associated scale of complexity and cost, demand a significantly different, if not totally new, approach toward SONs in order to make 5G technically as well as financially feasible. In this article we first identify what challenges hinder the current self-optimizing networking paradigm from meeting the requirements of 5G. We then propose a comprehensive framework for empowering SONs with big data to address the requirements of 5G. Under this framework we first characterize big data in the context of future mobile networks, identifying its sources and future utilities. We then explicate the specific machine learning and data analytics tools that can be exploited to transform big data into the right data that provides a readily useable knowledge base to create end-to-end intelligence of the network. We then explain how a SON engine can build on the dynamic models extractable from the right data. The resultant dynamicity of a big data empowered SON (BSON) makes it more agile and can essentially transform the SON from being a reactive to proactive paradigm and hence act as a key enabler for 5G's extremely low latency requirements. Finally, we demonstrate the key concepts of our proposed BSON framework through a case study of a problem that the classic 3G/4G SON fails to solve.",
"title": ""
},
{
"docid": "795d4e73b3236a2b968609c39ce8f417",
"text": "In this paper, we are introducing an intelligent valet parking management system that guides the cars to autonomously park within a parking lot. The IPLMS for Intelligent Parking Lot Management System, consists of two modules: 1) a model car with a set of micro-controllers and sensors which can scan the environment for suitable parking spot and avoid collision to obstacles, and a Parking Lot Management System (IPLMS) which screens the parking spaces within the parking lot and offers guidelines to the car. The model car has the capability to autonomously maneuver within the parking lot using a fuzzy logic algorithm, and execute parking in the spot determined by the IPLMS, using a parking algorithm. The car receives the instructions from the IPLMS through a wireless communication link. The IPLMS has the flexibility to be adopted by any parking management system, and can potentially save the clients time to look for a parking spot, and/or to stroll from an inaccessible parking space. Moreover, the IPLMS can decrease the financial burden from the parking lot management by offering an easy-to-install system for self-guided valet parking.",
"title": ""
},
{
"docid": "845d5fa10e3bf779ea68331022592011",
"text": "Remote sensing is one of the tool which is very important for the production of Land use and land cover maps through a process called image classification. For the image classification process to be successfully, several factors should be considered including availability of quality Landsat imagery and secondary data, a precise classification process and user’s experiences and expertise of the procedures. The objective of this research was to classify and map land-use/land-cover of the study area using remote sensing and Geospatial Information System (GIS) techniques. This research includes two sections (1) Landuse/Landcover (LULC) classification and (2) accuracy assessment. In this study supervised classification was performed using Non Parametric Rule. The major LULC classified were agriculture (65.0%), water body (4.0%), and built up areas (18.3%), mixed forest (5.2%), shrubs (7.0%), and Barren/bare land (0.5%). The study had an overall classification accuracy of 81.7% and kappa coefficient (K) of 0.722. The kappa coefficient is rated as substantial and hence the classified image found to be fit for further research. This study present essential source of information whereby planners and decision makers can use to sustainably plan the environment.",
"title": ""
},
{
"docid": "880d6636a2939ee232da5c293f29ae44",
"text": "BACKGROUND\nMicrocannulas with blunt tips for filler injections have recently been developed for use with dermal fillers. Their utility, ease of use, cosmetic outcomes, perceived pain, and satisfaction ratings amongst patients in terms of comfort and aesthetic outcomes when compared to sharp hypodermic needles has not previously been investigated.\n\n\nOBJECTIVE\nTo compare injections of filler with microcannulas versus hypodermic needles in terms of ease of use, amount of filler required to achieve desired aesthetic outcome, perceived pain by patient, adverse events such as bleeding and bruising and to demonstrate the advantages of single-port injection technique with the blunt-tip microcannula.\n\n\nMATERIALS AND METHODS\nNinety-five patients aged 30 to 76 years with a desire to augment facial, décolleté, and hand features were enrolled in the study. Subjects were recruited in a consecutive manner from patients interested in receiving dermal filler augmentation. Each site was cleaned with alcohol before injection. Anesthesia was obtained with a topical anesthesia peel off mask of lidocaine/tetracaine. Cross-linked hyaluronic acid (20 mg to 28 mg per mL) was injected into the mid-dermis. The microcannula or a hypodermic needle was inserted the entire length of the fold, depression or lip and the filler was injected in a linear retrograde fashion. The volume injected was variable, depending on the depth and the extent of the defect. The injecting physician assessed the ease of injection. Subjects used the Visual Analog Scale (0-10) for pain assessment. Clinical efficacy was assessed by the patients and the investigators immediately after injection, and at one and six months after injection using the Global Aesthetic Improvement Scale (GAIS) and digital photography.\n\n\nRESULTS\nOverall, the Global Aesthetic Improvements Scale (GAIS) results were excellent (55%), moderate (35%), and somewhat improved (10%) one month after the procedure, decreasing to 23%, 44%, and 33%, respectively, at the six month evaluation. There was no significant differences in the GAIS score between the microcannula and the hypodermic needle. However, the Visual Analog Scale for pain assessment during the injections was quite different. The pain was described as 3 (mild) for injections with the microcannula, increasing to 6 (moderate) for injections with the hypodermic needle. Bruising and ecchymosis was more marked following use of the hypodermic needle.\n\n\nCONCLUSION\nUsing the blunt-tip microcannula as an alternative to the hypodermic needles has simplified filler injections and produced less bruising, echymosis, and pain with faster recovery.",
"title": ""
},
{
"docid": "36d1cb90c0c94fab646ff90065b40258",
"text": "This paper provides an in-depth view on nanosensor technology and electromagnetic communication among nanosensors. First, the state of the art in nanosensor technology is surveyed from the device perspective, by explaining the details of the architecture and components of individual nanosensors, as well as the existing manufacturing and integration techniques for nanosensor devices. Some interesting applications of wireless nanosensor networks are highlighted to emphasize the need for communication among nanosensor devices. A new network architecture for the interconnection of nanosensor deviceswith existing communicationnetworks is provided. The communication challenges in terms of terahertz channelmodeling, information encoding andprotocols for nanosensor networks are highlighted, defining a roadmap for the development of this new networking",
"title": ""
},
{
"docid": "b419f58b8a89f5451a6e0efd8f6d5e80",
"text": "Knowledge processing systems recently regained attention in the context of big \"knowledge\" processing and cloud platforms. Therefore, the development of such systems with a high software quality has to be ensured. In this paper an approach to contribute to an architectural guideline for developing such systems using the concept of design patterns is shown. The need, as well as current research in this domain is presented. Further, possible design pattern candidates are introduced that have been extracted from literature.",
"title": ""
},
{
"docid": "49388f99a08a41d713b701cf063a71be",
"text": "In this paper, we present the first-of-its-kind machine learning (ML) system, called AI Programmer, that can automatically generate full software programs requiring only minimal human guidance. At its core, AI Programmer uses genetic algorithms (GA) coupled with a tightly constrained programming language that minimizes the overhead of its ML search space. Part of AI Programmer’s novelty stems from (i) its unique system design, including an embedded, hand-crafted interpreter for efficiency and security and (ii) its augmentation of GAs to include instruction-gene randomization bindings and programming language-specific genome construction and elimination techniques. We provide a detailed examination of AI Programmer’s system design, several examples detailing how the system works, and experimental data demonstrating its software generation capabilities and performance using only mainstream CPUs.",
"title": ""
},
{
"docid": "30a0b6c800056408b32e9ed013565ae0",
"text": "This case report presents the successful use of palatal mini-implants for rapid maxillary expansion and mandibular distalization in a skeletal Class III malocclusion. The patient was a 13-year-old girl with the chief complaint of facial asymmetry and a protruded chin. Camouflage orthodontic treatment was chosen, acknowledging the possibility of need for orthognathic surgery after completion of her growth. A bone-borne rapid expander (BBRME) was used to correct the transverse discrepancy and was then used as indirect anchorage for distalization of the lower dentition with Class III elastics. As a result, a Class I occlusion with favorable inclination of the upper teeth was achieved without any adverse effects. The total treatment period was 25 months. Therefore, BBRME can be considered an alternative treatment in skeletal Class III malocclusion.",
"title": ""
},
{
"docid": "d2d8f1079b5bab3f37ec74a9bf3ac018",
"text": "This paper is focused on the design of generalized composite right/left handed (CRLH) transmission lines in a fully planar configuration, that is, without the use of surface-mount components. These artificial lines exhibit multiple, alternating backward and forward-transmission bands, and are therefore useful for the synthesis of multi-band microwave components. Specifically, a quad-band power splitter, a quad-band branch line hybrid coupler and a dual-bandpass filter, all of them based on fourth-order CRLH lines (i.e., lines exhibiting 2 left-handed and 2 right-handed bands alternating), are presented in this paper. The accurate circuit models, including parasitics, of the structures under consideration (based on electrically small planar resonators), as well as the detailed procedure for the synthesis of these lines using such circuit models, are given. It will be shown that satisfactory results in terms of performance and size can be obtained through the proposed approach, fully compatible with planar technology.",
"title": ""
},
{
"docid": "a968a9842bb49f160503b24bff57cdd6",
"text": "This paper addresses target discrimination in synthetic aperture radar (SAR) imagery using linear and nonlinear adaptive networks. Neural networks are extensively used for pattern classification but here the goal is discrimination. We show that the two applications require different cost functions. We start by analyzing with a pattern recognition perspective the two-parameter constant false alarm rate (CFAR) detector which is widely utilized as a target detector in SAR. Then we generalize its principle to construct the quadratic gamma discriminator (QGD), a nonparametrically trained classifier based on local image intensity. The linear processing element of the QCD is further extended with nonlinearities yielding a multilayer perceptron (MLP) which we call the NL-QGD (nonlinear QGD). MLPs are normally trained based on the L(2) norm. We experimentally show that the L(2) norm is not recommended to train MLPs for discriminating targets in SAR. Inspired by the Neyman-Pearson criterion, we create a cost function based on a mixed norm to weight the false alarms and the missed detections differently. Mixed norms can easily be incorporated into the backpropagation algorithm, and lead to better performance. Several other norms (L(8), cross-entropy) are applied to train the NL-QGD and all outperformed the L(2) norm when validated by receiver operating characteristics (ROC) curves. The data sets are constructed from TABILS 24 ISAR targets embedded in 7 km(2) of SAR imagery (MIT/LL mission 90).",
"title": ""
},
{
"docid": "f7710fb5fad8092b8a7cc490fb50fe4d",
"text": "Speech is one of the most effective ways of communication among humans. Even though audio is the most common way of transmitting speech, very important information can be fou nd in other modalities, such as vision. Vision is particularly us ef l when the acoustic signal is corrupted. Multi-modal speech recog nition however has not yet found wide-spread use, mostly because th e temporal alignment and fusion of the different information sources is challenging. This paper presents an end-to-end audiovisual speech recog nizer (AVSR), based on recurrent neural networks (RNN) with a conn ectionist temporal classification (CTC) [1] loss function. CTcreates sparse “peaky” output activations, and we analyze the diffe rences in the alignments of output targets (phonemes or visemes) be tween audio-only, video-only, and audio-visual feature represe ntations. We present the first such experiments on the large vocabulary IB M ViaVoice database, which outperform previously published ap proaches on phone accuracy in clean and noisy conditions.",
"title": ""
},
{
"docid": "bf5d53e5465dd5e64385bf9204324059",
"text": "A model of core losses, in which the hysteresis coefficients are variable with the frequency and induction (flux density) and the eddy-current and excess loss coefficients are variable only with the induction, is proposed. A procedure for identifying the model coefficients from multifrequency Epstein tests is described, and examples are provided for three typical grades of non-grain-oriented laminated steel suitable for electric motor manufacturing. Over a wide range of frequencies between 20-400 Hz and inductions from 0.05 to 2 T, the new model yielded much lower errors for the specific core losses than conventional models. The applicability of the model for electric machine analysis is also discussed, and examples from an interior permanent-magnet and an induction motor are included.",
"title": ""
}
] |
scidocsrr
|
81414936aa5050eedd06446fa90d18e2
|
Human factors in cybersecurity; examining the link between Internet addiction, impulsivity, attitudes towards cybersecurity, and risky cybersecurity behaviours
|
[
{
"docid": "99ffc7cd601d1c43bbf7e3537632e95c",
"text": "Despite numerous advances in IT security, many computer users are still vulnerable to security-related risks because they do not comply with organizational policies and procedures. In a network setting, individual risk can extend to all networked users. Endpoint security refers to the set of organizational policies, procedures, and practices directed at securing the endpoint of the network connections – the individual end user. As such, the challenges facing IT managers in providing effective endpoint security are unique in that they often rely heavily on end user participation. But vulnerability can be minimized through modification of desktop security programs and increased vigilance on the part of the system administrator or CSO. The cost-prohibitive nature of these measures generally dictates targeting high-risk users on an individual basis. It is therefore important to differentiate between individuals who are most likely to pose a security risk and those who will likely follow most organizational policies and procedures.",
"title": ""
}
] |
[
{
"docid": "e9e7cb42ed686ace9e9785fafd3c72f8",
"text": "We present a fully automated multimodal medical image matching technique. Our method extends the concepts used in the computer vision SIFT technique for extracting and matching distinctive scale invariant features in 2D scalar images to scalar images of arbitrary dimensionality. This extension involves using hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. These features were successfully applied to determine accurate feature point correspondence between pairs of medical images (3D) and dynamic volumetric data (3D+time).",
"title": ""
},
{
"docid": "4bd161b3e91dea05b728a72ade72e106",
"text": "Julio Rodriguez∗ Faculté des Sciences et Techniques de l’Ingénieur (STI), Institut de Microtechnique (IMT), Laboratoire de Production Microtechnique (LPM), Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland and Fakultät für Physik, Universität Bielefeld, D-33501 Bielefeld, Germany ∗Corresponding author. Email: julio.rodriguez@epfl.ch and jrodrigu@physik.uni-bielefeld.de",
"title": ""
},
{
"docid": "a4affb4b3a83573571e1af3009b187f6",
"text": " Existing path following algorithms for graph matching can be viewed as special cases of the numerical continuation method (NCM), and correspond to particular implementation named generic predictor corrector (GPC). The GPC approach succeeds at regular points, but may fail at singular points. Illustration of GPC and the proposed method is shown in Fig. 1. This paper presents a branching path following (BPF) method to exploring potentially better paths at singular points to improve matching performance. Tao Wang , Haibin Ling 1,3, Congyan Lang , Jun Wu 1Meitu HiScene Lab, HiScene Information Technologies, Shanghai, China 2 School of Computer & Information Technology, Beijing Jiaotong University, Beijing 100044, China 3 Computer & Information Sciences Department, Temple University, Philadelphia 19122, USA Email: twang@bjtu.edu.cn, hbling@temple.edu, cylang@bjtu.edu.cn, wuj@bjtu.edu.cn Branching Path Following for Graph Matching",
"title": ""
},
{
"docid": "766b726231f9d9540deb40183b49a655",
"text": "This paper presents a survey of georeferenced point clouds. Concentration is, on the one hand, put on features, which originate in the measurement process themselves, and features derived by processing the point cloud. On the other hand, approaches for the processing of georeferenced point clouds are reviewed. This includes the data structures, but also spatial processing concepts. We suggest a categorization of features into levels that reflect the amount of processing. Point clouds are found across many disciplines, which is reflected in the versatility of the literature suggesting specific features.",
"title": ""
},
{
"docid": "4702fceea318c326856cc2a7ae553e1f",
"text": "The Institute of Medicine identified “timeliness” as one of six key “aims for improvement” in its most recent report on quality. Yet patient delays remain prevalent, resulting in dissatisfaction, adverse clinical consequences, and often, higher costs. This tutorial describes several areas in which patients routinely experience significant and potentially dangerous delays and presents operations research (OR) models that have been developed to help reduce these delays, often at little or no cost. I also describe the difficulties in developing and implementing models as well as the factors that increase the likelihood of success. Finally, I discuss the opportunities, large and small, for using OR methodologies to significantly impact practices and policies that will affect timely access to healthcare.",
"title": ""
},
{
"docid": "20dfc70e3563d5aded0cf34000dff907",
"text": "This paper presents development of a quad rotor tail-sitter VTOL UAV (Vertical Takeoff and Landing Unmanned Aerial Vehicle) which is composed of four rotors and a fixed wing. The conventional VTOL UAVs have a drawback in the accuracy of the attitude control in stationary hovering because they were developed based on a fixed-wing aircraft and they used the control surfaces, such as aileron, elevator, and rudder for the attitude control. To overcome such a drawback, we developed a quad rotor tail-sitter VTOL UAV. The quad rotor tail-sitter VTOL UAV realizes high accuracy in the attitude control with four rotors like a quad rotor helicopter and achieves level flight like a fixed-wing airplane. The remarkable characteristic of the developed quad rotor tail-sitter VTOL UAV is that it does not use any control surfaces even in the level flight. This paper shows the design concept of the developed UAV and experimental verification of all flight modes including hovering, transition flight and level flight.",
"title": ""
},
{
"docid": "d1cacda6383211c78f8aa4138f709d5f",
"text": "Sentiment analysis of reviews traditionally ignored the association between the features of the given product domain. The hierarchical relationship between the features of a product and their associated sentiment that influence the polarity of a review is not dealt with very well. In this work, we analyze the influence of the hierarchical relationship between the product attributes and their sentiments on the overall review polarity. ConceptNet is used to automatically create a product specific ontology that depicts the hierarchical relationship between the product attributes. The ontology tree is annotated with feature-specific polarities which are aggregated bottom-up, exploiting the ontological information, to find the overall review polarity. We propose a weakly supervised system that achieves a reasonable performance improvement over the baseline without requiring any tagged training data.",
"title": ""
},
{
"docid": "3e335d336d3c9bce4dbdf24402b8eb17",
"text": "Unlike traditional database management systems which are organized around a single data model, a multi-model database (MMDB) utilizes a single, integrated back-end to support multiple data models, such as document, graph, relational, and key-value. As more and more platforms are proposed to deal with multi-model data, it becomes crucial to establish a benchmark for evaluating the performance and usability of MMDBs. Previous benchmarks, however, are inadequate for such scenario because they lack a comprehensive consideration for multiple models of data. In this paper, we present a benchmark, called UniBench, with the goal of facilitating a holistic and rigorous evaluation of MMDBs. UniBench consists of a mixed data model, a synthetic multi-model data generator, and a set of core workloads. Specifically, the data model simulates an emerging application: Social Commerce, a Web-based application combining E-commerce and social media. The data generator provides diverse data format including JSON, XML, key-value, tabular, and graph. The workloads are comprised of a set of multi-model queries and transactions, aiming to cover essential aspects of multi-model data management. We implemented all workloads on ArangoDB and OrientDB to illustrate the feasibility of our proposed benchmarking system and show the learned lessons through the evaluation of these two multi-model databases. The source code and data of this benchmark can be downloaded at http://udbms.cs.helsinki.fi/bench/.",
"title": ""
},
{
"docid": "375766c4ae473312c73e0487ab57acc8",
"text": "There are three reasons why the asymmetric crooked nose is one of the greatest challenges in rhinoplasty surgery. First, the complexity of the problem is not appreciated by the patient nor understood by the surgeon. Patients often see the obvious deviation of the nose, but not the distinct differences between the right and left sides. Surgeons fail to understand and to emphasize to the patient that each component of the nose is asymmetric. Second, these deformities can be improved, but rarely made flawless. For this reason, patients are told that the result will be all \"-er words,\" better, straighter, cuter, but no \"t-words,\" there is no perfect nor straight. Most surgeons fail to realize that these cases represent asymmetric noses on asymmetric faces with the variable of ipsilateral and contralateral deviations. Third, these cases demand a wide range of sophisticated surgical techniques, some of which have a minimal margin of error. This article offers an in-depth look at analysis, preoperative planning, and surgical techniques available for dealing with the asymmetric crooked nose.",
"title": ""
},
{
"docid": "565dcf584448f6724a6529c3d2147a68",
"text": "People are fond of taking and sharing photos in their social life, and a large part of it is face images, especially selfies. A lot of researchers are interested in analyzing attractiveness of face images. Benefited from deep neural networks (DNNs) and training data, researchers have been developing deep learning models that can evaluate facial attractiveness of photos. However, recent development on DNNs showed that they could be easily fooled even when they are trained on a large dataset. In this paper, we used two approaches to generate adversarial examples that have high attractiveness scores but low subjective scores for face attractiveness evaluation on DNNs. In the first approach, experimental results using the SCUT-FBP dataset showed that we could increase attractiveness score of 20 test images from 2.67 to 4.99 on average (score range: [1, 5]) without noticeably changing the images. In the second approach, we could generate similar images from noise image with any target attractiveness score. Results show by using this approach, a part of attractiveness information could be manipulated artificially.",
"title": ""
},
{
"docid": "5325672f176fd572f7be68a466538d95",
"text": "The successful execution of location-based and feature-based queries on spatial databases requires the construction of spatial indexes on the spatial attributes. This is not simple when the data is unstructured as is the case when the data is a collection of documents such as news articles, which is the domain of discourse, where the spatial attribute consists of text that can be (but is not required to be) interpreted as the names of locations. In other words, spatial data is specified using text (known as a toponym) instead of geometry, which means that there is some ambiguity involved. The process of identifying and disambiguating references to geographic locations is known as geotagging and involves using a combination of internal document structure and external knowledge, including a document-independent model of the audience's vocabulary of geographic locations, termed its spatial lexicon. In contrast to previous work, a new spatial lexicon model is presented that distinguishes between a global lexicon of locations known to all audiences, and an audience-specific local lexicon. Generic methods for inferring audiences' local lexicons are described. Evaluations of this inference method and the overall geotagging procedure indicate that establishing local lexicons cannot be overlooked, especially given the increasing prevalence of highly local data sources on the Internet, and will enable the construction of more accurate spatial indexes.",
"title": ""
},
{
"docid": "3cbc035529138be1d6f8f66a637584dd",
"text": "Regression models such as the Cox proportional hazards model have had increasing use in modelling and estimating the prognosis of patients with a variety of diseases. Many applications involve a large number of variables to be modelled using a relatively small patient sample. Problems of overfitting and of identifying important covariates are exacerbated in analysing prognosis because the accuracy of a model is more a function of the number of events than of the sample size. We used a general index of predictive discrimination to measure the ability of a model developed on training samples of varying sizes to predict survival in an independent test sample of patients suspected of having coronary artery disease. We compared three methods of model fitting: (1) standard 'step-up' variable selection, (2) incomplete principal components regression, and (3) Cox model regression after developing clinical indices from variable clusters. We found regression using principal components to offer superior predictions in the test sample, whereas regression using indices offers easily interpretable models nearly as good as the principal components models. Standard variable selection has a number of deficiencies.",
"title": ""
},
{
"docid": "16dc05092756ca157476b6aeb7705915",
"text": "Model checkers and other nite-state veriication tools allow developers to detect certain kinds of errors automatically. Nevertheless, the transition of this technology from research to practice has been slow. While there are a number of potential causes for reluctance to adopt such formal methods, we believe that a primary cause is that practitioners are unfamiliar with specii-cation processes, notations, and strategies. In a recent paper, we proposed a pattern-based approach to the presentation, codiication and reuse of property specii-cations for nite-state veriication. Since then, we have carried out a survey of available speciications, collecting over 500 examples of property speciications. We found that most are instances of our proposed patterns. Furthermore, we have updated our pattern system to accommodate new patterns and variations of existing patterns encountered in this survey. This paper reports the results of the survey and the current status of our pattern system.",
"title": ""
},
{
"docid": "78ee892fada4ec9ff860072d0d0ecbe3",
"text": "The popularity of FPGAs is rapidly growing due to the unique advantages that they offer. However, their distinctive features also raise new questions concerning the security and communication capabilities of an FPGA-based hardware platform. In this paper, we explore the some of the limits of FPGA side-channel communication. Specifically, we identify a previously unexplored capability that significantly increases both the potential benefits and risks associated with side-channel communication on an FPGA: an in-device receiver. We designed and implemented three new communication mechanisms: speed modulation, timing modulation and pin hijacking. These non-traditional interfacing techniques have the potential to provide reliable communication with an estimated maximum bandwidth of 3.3 bit/sec, 8 Kbits/sec, and 3.4 Mbits/sec, respectively.",
"title": ""
},
{
"docid": "1d8917f5faaed1531fdcd4df06ff0920",
"text": "4G cellular standards are targeting aggressive spectrum reuse (frequency reuse 1) to achieve high system capacity and simplify radio network planning. The increase in system capacity comes at the expense of SINR degradation due to increased intercell interference, which severely impacts cell-edge user capacity and overall system throughput. Advanced interference management schemes are critical for achieving the required cell edge spectral efficiency targets and to provide ubiquity of user experience throughout the network. In this article we compare interference management solutions across the two main 4G standards: IEEE 802.16m (WiMAX) and 3GPP-LTE. Specifically, we address radio resource management schemes for interference mitigation, which include power control and adaptive fractional frequency reuse. Additional topics, such as interference management for multitier cellular deployments, heterogeneous architectures, and smart antenna schemes will be addressed in follow-up papers.",
"title": ""
},
{
"docid": "b65ead6ac95bff543a5ea690caade548",
"text": "Theory and experiments show that as the per-flow product of bandwidth and latency increases, TCP becomes inefficient and prone to instability, regardless of the queuing scheme. This failing becomes increasingly important as the Internet evolves to incorporate very high-bandwidth optical links and more large-delay satellite links.To address this problem, we develop a novel approach to Internet congestion control that outperforms TCP in conventional environments, and remains efficient, fair, scalable, and stable as the bandwidth-delay product increases. This new eXplicit Control Protocol, XCP, generalizes the Explicit Congestion Notification proposal (ECN). In addition, XCP introduces the new concept of decoupling utilization control from fairness control. This allows a more flexible and analytically tractable protocol design and opens new avenues for service differentiation.Using a control theory framework, we model XCP and demonstrate it is stable and efficient regardless of the link capacity, the round trip delay, and the number of sources. Extensive packet-level simulations show that XCP outperforms TCP in both conventional and high bandwidth-delay environments. Further, XCP achieves fair bandwidth allocation, high utilization, small standing queue size, and near-zero packet drops, with both steady and highly varying traffic. Additionally, the new protocol does not maintain any per-flow state in routers and requires few CPU cycles per packet, which makes it implementable in high-speed routers.",
"title": ""
},
{
"docid": "e517370f733c10190da90c834f0f486a",
"text": "The planning and organization of athletic training have historically been much discussed and debated in the coaching and sports science literature. Various influential periodization theorists have devised, promoted, and substantiated particular training-planning models based on interpretation of the scientific evidence and individual beliefs and experiences. Superficially, these proposed planning models appear to differ substantially. However, at a deeper level, it can be suggested that such models share a deep-rooted cultural heritage underpinned by a common set of historically pervasive planning beliefs and assumptions. A concern with certain of these formative assumptions is that, although no longer scientifically justifiable, their shaping influence remains deeply embedded. In recent years substantial evidence has emerged demonstrating that training responses vary extensively, depending upon multiple underlying factors. Such findings challenge the appropriateness of applying generic methodologies, founded in overly simplistic rule-based decision making, to the planning problems posed by inherently complex biological systems. The purpose of this review is not to suggest a whole-scale rejection of periodization theories but to promote a refined awareness of their various strengths and weaknesses. Eminent periodization theorists-and their variously proposed periodization models-have contributed substantially to the evolution of training-planning practice. However, there is a logical line of reasoning suggesting an urgent need for periodization theories to be realigned with contemporary elite practice and modern scientific conceptual models. In concluding, it is recommended that increased emphasis be placed on the design and implementation of sensitive and responsive training systems that facilitate the guided emergence of customized context-specific training-planning solutions.",
"title": ""
},
{
"docid": "03ce79214eb7e7f269464574b1e5c208",
"text": "Variable draft is shown to be an essential feature for a research and survey SWATH ship large enough for unrestricted service worldwide. An ongoing semisubmerged (variable draft) SWATH can be designed for access to shallow harbors. Speed at transit (shallow) draft can be comparable to monohulls of the same power while assuring equal or better seakeeping characteristics. Seakeeping with the ship at deeper drafts can be superior to an equivalent SWATH that is designed for all operations at a single draft. The lower hulls of the semisubmerged SWATH ship can be devoid of fins. A practical target for interior clear spacing between the lower hulls is about 50 feet. Access to the sea surface for equipment can be provided astern, over the side, or from within a centerwell amidships. One of the lower hulls can be optimized to carry acoustic sounding equipment. A design is presented in this paper for a semisubmerged ship with a trial speed in excess of 15 knots, a scientific mission payload of 300 tons, and accommodations for 50 personnel. 1. SEMISUBMERGED SWATH TECHNOLOGY A single draft for the full range of operating conditions is a comon feature of typical SWATH ship designs. This constant draft characteristic is found in the SWATH ships built by Mitsuil” , most notably the KAIY03, and the SWATH T-AGOS4 which is now under construction for the U.S. Navy. The constant draft design for ships of this size (about 3,500 tons displacement) poses two significant drawbacks. One is that the draft must be at least 25 feet to satisfy seakeeping requirements. This draft is restrictive for access to many harbors that would be useful for research and survey functions. The second is that hull and column (strut) hydrodynamics generally result in the SWATH being a larger ship and having greater power requirements than for an equivalent monohull. The ship size and hull configuration, together with the necessity for a. President, Blue Sea Corporation b. President, Alan C. McClure Associates, Inc. stabilizing fins, usually leads to a higher capital cost than for a rougher riding, but otherwise equivalent, monohull. The distinguishing feature of the semisubmerged SWATH ship is variable draft. Sufficient allowance for ballast transfer is made to enable the ship to vary its draft under all load conditions. The shallowest draft is well within usual harbor limits and gives the lower hulls a slight freeboard. It also permits transit in low to moderate sea conditions using less propulsion power than is needed by a constant draft SWATH. The semisubmerged SWATH gives more design flexibility to provide for deep draft conditions that strike a balance between operating requirements and seakeeping characteristics. Intermediate “storm” drafts can be selected that are a compromise between seakeeping, speed, and upper hull clearance to avoid slamming. A discussion of these and other tradeoffs in semisubmerged SWATH ship design for oceanographic applications is given in a paper by Gaul and McClure’ . A more general discussion of design tradeoffs is given in a later paper6. The semisubmerged SWATH technology gives rise to some notable contrasts with constant draft SWATH ships. For any propulsion power applied, the semisubmerged SWATH has a range of speed that depends on draft. Highest speeds are obtained at minimum (transit) draft. Because the lower hull freeboard is small at transit draft, seakeeping at service speed can be made equal to or better than an equivalent monohull. 
The ship is designed for maximum speed at transit draft so the lower hull form is more akin to a surface craft than a submarine. This allows use of a nearly rectangular cross section for the lower hulls which provides damping of vertical motion. For moderate speeds at deeper drafts with the highly damped lower hull form, the ship need not be equipped with stabilizing fins. Since maximum speed is achieved with the columns of the water, it is practical (struts) out to use two c. President, Omega Marine Engineering Systems, Inc. d. Joint venture of Blue Sea Corporation and Martran Consultants, Inc. columns, rather than one, on each lower hull. The four column configuration at deep drafts minimizes the variation of ship motion response with change in course relative to surface wave direction. The width of the ship and lack of appendages on the lower hulls increases the utility of a large underside deck opening (moonpool) amidship. The basic Semisubmerged SWATH Research and Survey Ship design has evolved from requirements first stated by the Institute for Geophysics of the University of Texas (UTIG) in 1984. Blue Sea McClure provided the only SWATH configuration in a set of five conceptual designs procured competitively by the University. Woods Hole Oceanographic Institution, on behalf of the University-National Oceanographic Laboratory System, subsequently contracted for a revision of the UTIG design to meet requirements for an oceanographic research ship. The design was further refined to meet requirements posed by the U.S. Navy for an oceanographic research ship. The intent of this paper is to use this generic design to illustrate the main features of semisubmerged SWATH ships.",
"title": ""
},
{
"docid": "f2634c4a479e58cef42ae776390aee91",
"text": "From the Division of General Medicine and Primary Care, Department of Medicine (D.W.B.), and the Department of Surgery (A.A.G.), Brigham and Women’s Hospital; the Center for Applied Medical Information Systems, Partners HealthCare System (D.W.B.); and Harvard Medical School (D.W.B., A.A.G.) — all in Boston. Address reprint requests to Dr. Bates at the Division of General Medicine and Primary Care, Brigham and Women’s Hospital, 75 Francis St., Boston, MA 02115, or at dbates@ partners.org.",
"title": ""
},
{
"docid": "892c75c6b719deb961acfe8b67b982bb",
"text": "Growing interest in data and analytics in education, teaching, and learning raises the priority for increased, high-quality research into the models, methods, technologies, and impact of analytics. Two research communities -- Educational Data Mining (EDM) and Learning Analytics and Knowledge (LAK) have developed separately to address this need. This paper argues for increased and formal communication and collaboration between these communities in order to share research, methods, and tools for data mining and analysis in the service of developing both LAK and EDM fields.",
"title": ""
}
] |
scidocsrr
|
8beddac83b8e402fea1171c9f2825d94
|
TransmiR: a transcription factor–microRNA regulation database
|
[
{
"docid": "b324860905b6d8c4b4a8429d53f2543d",
"text": "MicroRNAs (miRNAs) are endogenous approximately 22 nt RNAs that can play important regulatory roles in animals and plants by targeting mRNAs for cleavage or translational repression. Although they escaped notice until relatively recently, miRNAs comprise one of the more abundant classes of gene regulatory molecules in multicellular organisms and likely influence the output of many protein-coding genes.",
"title": ""
}
] |
[
{
"docid": "ddb01f456d904151238ecf695483a2f4",
"text": "If there were only one truth, you couldn't paint a hundred canvases on the same theme.",
"title": ""
},
{
"docid": "ae59ef9772ea8f8277a2d91030bd6050",
"text": "Modelling and exploiting teammates’ policies in cooperative multi-agent systems have long been an interest and also a big challenge for the reinforcement learning (RL) community. The interest lies in the fact that if the agent knows the teammates’ policies, it can adjust its own policy accordingly to arrive at proper cooperations; while the challenge is that the agents’ policies are changing continuously due to they are learning concurrently, which imposes difficulty to model the dynamic policies of teammates accurately. In this paper, we present ATTention Multi-Agent Deep Deterministic Policy Gradient (ATT-MADDPG) to address this challenge. ATT-MADDPG extends DDPG, a single-agent actor-critic RL method, with two special designs. First, in order to model the teammates’ policies, the agent should get access to the observations and actions of teammates. ATT-MADDPG adopts a centralized critic to collect such information. Second, to model the teammates’ policies using the collected information in an effective way, ATT-MADDPG enhances the centralized critic with an attention mechanism. This attention mechanism introduces a special structure to explicitly model the dynamic joint policy of teammates, making sure that the collected information can be processed efficiently. We evaluate ATT-MADDPG on both benchmark tasks and the real-world packet routing tasks. Experimental results show that it not only outperforms the state-of-the-art RL-based methods and rule-based methods by a large margin, but also achieves better performance in terms of scalability and robustness.",
"title": ""
},
{
"docid": "8a09944155d35b4d1229b0778baf58a4",
"text": "The recent Omnidirectional MediA Format (OMAF) standard specifies delivery of 360° video content. OMAF supports only equirectangular (ERP) and cubemap projections and their region-wise packing with a limitation on video decoding capability to the maximum resolution of 4K (e.g., 4096x2048). Streaming of 4K ERP content allows only a limited viewport resolution, which is lower than the resolution of many current head-mounted displays (HMDs). In order to take the full advantage of those HMDs, this work proposes a specific mixed-resolution packing of 6K (6144x3072) ERP content and its realization in tile-based streaming, while complying with the 4K-decoding constraint and the High Efficiency Video Coding (HEVC) standard. Experimental results indicate that, using Zonal-PSNR test methodology, the proposed layout decreases the streaming bitrate up to 32% in terms of BD-rate, when compared to mixed-quality viewport-adaptive streaming of 4K ERP as an alternative solution.",
"title": ""
},
{
"docid": "0343f1a0be08ff53e148ef2eb22aaf14",
"text": "Tables are a ubiquitous form of communication. While everyone seems to know what a table is, a precise, analytical definition of “tabularity” remains elusive because some bureaucratic forms, multicolumn text layouts, and schematic drawings share many characteristics of tables. There are significant differences between typeset tables, electronic files designed for display of tables, and tables in symbolic form intended for information retrieval. Most past research has addressed the extraction of low-level geometric information from raster images of tables scanned from printed documents, although there is growing interest in the processing of tables in electronic form as well. Recent research on table composition and table analysis has improved our understanding of the distinction between the logical and physical structures of tables, and has led to improved formalisms for modeling tables. This review, which is structured in terms of generalized paradigms for table processing, indicates that progress on half-a-dozen specific research issues would open the door to using existing paper and electronic tables for database update, tabular browsing, structured information retrieval through graphical and audio interfaces, multimedia table editing, and platform-independent display.",
"title": ""
},
{
"docid": "c29a5acf052aed206d7d7a9078e66ff9",
"text": "Argumentation mining aims to automatically detect, classify and structure argumentation in text. Therefore, argumentation mining is an important part of a complete argumentation analyisis, i.e. understanding the content of serial arguments, their linguistic structure, the relationship between the preceding and following arguments, recognizing the underlying conceptual beliefs, and understanding within the comprehensive coherence of the specific topic. We present different methods to aid argumentation mining, starting with plain argumentation detection and moving forward to a more structural analysis of the detected argumentation. Different state-of-the-art techniques on machine learning and context free grammars are applied to solve the challenges of argumentation mining. We also highlight fundamental questions found during our research and analyse different issues for future research on argumentation mining.",
"title": ""
},
{
"docid": "2136c0e78cac259106d5424a2985e5d7",
"text": "Stylistic composition is a creative musical activity, in which students as well as renowned composers write according to the style of another composer or period. We describe and evaluate two computational models of stylistic composition, called Racchman-Oct2010 and Racchmaninof-Oct2010. The former is a constrained Markov model and the latter embeds this model in an analogy-based design system. Racchmaninof-Oct2010 applies a pattern discovery algorithm called SIACT and a perceptually validated formula for rating pattern importance, to guide the generation of a new target design from an existing source design. A listening study is reported concerning human judgments of music excerpts that are, to varying degrees, in the style of mazurkas by Frédédric Chopin (1810-1849). The listening study acts as an evaluation of the two computational models and a third, benchmark system called Experiments in Musical Intelligence (EMI). Judges’ responses indicate that some aspects of musical style, such as phrasing and rhythm, are being modeled effectively by our algorithms. Judgments are also used to identify areas for future improvements. We discuss the broader implications of this work for the fields of engineering and design, where there is potential to make use of our models of hierarchical repetitive structure. Data and code to accompany this paper are available from www.tomcollinsresearch.net",
"title": ""
},
{
"docid": "aec0c79ea90de753a010abfb43dc3f59",
"text": "Style transfer methods have achieved significant success in recent years with the use of convolutional neural networks. However, many of these methods concentrate on artistic style transfer with few constraints on the output image appearance. We address the challenging problem of transferring face texture from a style face image to a content face image in a photorealistic manner without changing the identity of the original content image. Our framework for face texture transfer (FaceTex) augments the prior work of MRF-CNN with a novel facial semantic regularization that incorporates a face prior regularization smoothly suppressing the changes around facial meso-structures (e.g eyes, nose and mouth) and a facial structure loss function which implicitly preserves the facial structure so that face texture can be transferred without changing the original identity. We demonstrate results on face images and compare our approach with recent state-of-the-art methods. Our results demonstrate superior texture transfer because of the ability to maintain the identity of the original face image.",
"title": ""
},
{
"docid": "b2a0755176f20cd8ee2ca19c091d022d",
"text": "Models are among the most essential tools in robotics, such as kinematics and dynamics models of the robot’s own body and controllable external objects. It is widely believed that intelligent mammals also rely on internal models in order to generate their actions. However, while classical robotics relies on manually generated models that are based on human insights into physics, future autonomous, cognitive robots need to be able to automatically generate models that are based on information which is extracted from the data streams accessible to the robot. In this paper, we survey the progress in model learning with a strong focus on robot control on a kinematic as well as dynamical level. Here, a model describes essential information about the behavior of the environment and the influence of an agent on this environment. In the context of model-based learning control, we view the model from three different perspectives. First, we need to study the different possible model learning architectures for robotics. Second, we discuss what kind of problems these architecture and the domain of robotics imply for the applicable learning methods. From this discussion, we deduce future directions of real-time learning algorithms. Third, we show where these scenarios have been used successfully in several case studies.",
"title": ""
},
{
"docid": "17d6bcff27325d7142d520fa87fb6a88",
"text": "India is a vast country depicting wide social, cultural and sexual variations. Indian concept of sexuality has evolved over time and has been immensely influenced by various rulers and religions. Indian sexuality is manifested in our attire, behavior, recreation, literature, sculptures, scriptures, religion and sports. It has influenced the way we perceive our health, disease and device remedies for the same. In modern era, with rapid globalization the unique Indian sexuality is getting diffused. The time has come to rediscover ourselves in terms of sexuality to attain individual freedom and to reinvest our energy to social issues related to sexuality.",
"title": ""
},
{
"docid": "fcca051539729b005271e4f96563538d",
"text": "!is paper presents a novel methodological approach of how to design, conduct and analyse robot-assisted play. !is approach is inspired by non-directive play therapy. !e experimenter participates in the experiments, but the child remains the main leader for play. Besides, beyond inspiration from non-directive play therapy, this approach enables the experimenter to regulate the interaction under speci\"c conditions in order to guide the child or ask her questions about reasoning or a#ect related to the robot. !is approach has been tested in a longterm study with six children with autism in a school setting. An autonomous robot with zoomorphic, dog-like appearance was used in the studies. !e children’s progress was analyzed according to three dimensions, namely, Play, Reasoning and A#ect. Results from the case-study evaluations have shown the capability of the method to meet each child’s needs and abilities. Children who mainly played solitarily progressively experienced basic imitation games with the experimenter. Children who proactively played socially progressively experienced higher levels of play and constructed more reasoning related to the robot. !ey also expressed some interest in the robot, including, on occasion, a#ect.",
"title": ""
},
{
"docid": "d82553a7bf94647aaf60eb36748e567f",
"text": "We propose a novel image-based rendering algorithm for handling complex scenes that may include reflective surfaces. Our key contribution lies in treating the problem in the gradient domain. We use a standard technique to estimate scene depth, but assign depths to image gradients rather than pixels. A novel view is obtained by rendering the horizontal and vertical gradients, from which the final result is reconstructed through Poisson integration using an approximate solution as a data term. Our algorithm is able to handle general scenes including reflections and similar effects without explicitly separating the scene into reflective and transmissive parts, as required by previous work. Our prototype renderer is fully implemented on the GPU and runs in real time on commodity hardware.",
"title": ""
},
{
"docid": "9096a4dac61f8a87da4f5cbfca5899a8",
"text": "OBJECTIVE\nTo evaluate the CT findings of ruptured corpus luteal cysts.\n\n\nMATERIALS AND METHODS\nSix patients with a surgically proven ruptured corpus luteal cyst were included in this series. The prospective CT findings were retrospectively analyzed in terms of the size and shape of the cyst, the thickness and enhancement pattern of its wall, the attenuation of its contents, and peritoneal fluid.\n\n\nRESULTS\nThe mean diameter of the cysts was 2.8 (range, 1.5-4.8) cm; three were round and three were oval. The mean thickness of the cyst wall was 4.7 (range, 1-10) mm; in all six cases it showed strong enhancement, and in three was discontinuous. In five of six cases, the cystic contents showed high attenuation. Peritoneal fluid was present in all cases, and its attenuation was higher, especially around the uterus and adnexa, than that of urine present in the bladder.\n\n\nCONCLUSION\nIn a woman in whom CT reveals the presence of an ovarian cyst with an enhancing rim and highly attenuated contents, as well as highly attenuated peritoneal fluid, a ruptured corpus luteal cyst should be suspected. Other possible evidence of this is focal interruption of the cyst wall and the presence of peritoneal fluid around the adnexa.",
"title": ""
},
{
"docid": "ae57246e37060c8338ad9894a19f1b6b",
"text": "This paper seeks to establish the conceptual and empirical basis for an innovative instrument of corporate knowledge management: the knowledge map. It begins by briefly outlining the rationale for knowledge mapping, i.e., providing a common context to access expertise and experience in large companies. It then conceptualizes five types of knowledge maps that can be used in managing organizational knowledge. They are knowledge-sources, assets, -structures, -applications, and -development maps. In order to illustrate these five types of maps, a series of examples will be presented (from a multimedia agency, a consulting group, a market research firm, and a mediumsized services company) and the advantages and disadvantages of the knowledge mapping technique for knowledge management will be discussed. The paper concludes with a series of quality criteria for knowledge maps and proposes a five step procedure to implement knowledge maps in a corporate intranet.",
"title": ""
},
{
"docid": "b151866647ad5e4cd50279bfdde4984a",
"text": "Li-Fi stands for Light-Fidelity. Li-Fi innovation, which was suggested by Harald Haas, a German physicist, gives conduction of information over brightening through distribution of information via a LED light which changes in force quicker when compared to the vision of human beings which could take after. Wi-Fi is extraordinary for overall remote scope inside structures, while Li-Fi has been perfect for high thickness remote information scope in limited range besides for calming wireless impedance concerns. Smart meters are electronic devices which are used for recording consumption of electrical energy on a regular basis at an interval of an hour or less. In this paper, we motivate the need to learn and understand about the various new technologies like LiFi and its advantages. Further, we will understand the comparison between LiFi and Wi-Fi and learn about the advantages of using LiFi over WiFi. In addition to that we will also learn about the working of smart meters and its communication of the recorded information on a daily basis to the utility for monitoring and billing purposes.",
"title": ""
},
{
"docid": "86a622185eeffc4a7ea96c307aed225a",
"text": "Copyright © 2014 Massachusetts Medical Society. In light of the rapidly shifting landscape regarding the legalization of marijuana for medical and recreational purposes, patients may be more likely to ask physicians about its potential adverse and beneficial effects on health. The popular notion seems to be that marijuana is a harmless pleasure, access to which should not be regulated or considered illegal. Currently, marijuana is the most commonly used “illicit” drug in the United States, with about 12% of people 12 years of age or older reporting use in the past year and particularly high rates of use among young people.1 The most common route of administration is inhalation. The greenish-gray shredded leaves and flowers of the Cannabis sativa plant are smoked (along with stems and seeds) in cigarettes, cigars, pipes, water pipes, or “blunts” (marijuana rolled in the tobacco-leaf wrapper from a cigar). Hashish is a related product created from the resin of marijuana flowers and is usually smoked (by itself or in a mixture with tobacco) but can be ingested orally. Marijuana can also be used to brew tea, and its oil-based extract can be mixed into food products. The regular use of marijuana during adolescence is of particular concern, since use by this age group is associated with an increased likelihood of deleterious consequences2 (Table 1). Although multiple studies have reported detrimental effects, others have not, and the question of whether marijuana is harmful remains the subject of heated debate. Here we review the current state of the science related to the adverse health effects of the recreational use of marijuana, focusing on those areas for which the evidence is strongest.",
"title": ""
},
{
"docid": "4ddd48db66a5951b82d5b7c2d9b8345a",
"text": "In this paper we address the memory demands that come with the processing of 3-dimensional, high-resolution, multi-channeled medical images in deep learning. We exploit memory-efficient backpropagation techniques, to reduce the memory complexity of network training from being linear in the network’s depth, to being roughly constant – permitting us to elongate deep architectures with negligible memory increase. We evaluate our methodology in the paradigm of Image Quality Transfer, whilst noting its potential application to various tasks that use deep learning. We study the impact of depth on accuracy and show that deeper models have more predictive power, which may exploit larger training sets. We obtain substantially better results than the previous state-of-the-art model with a slight memory increase, reducing the rootmean-squared-error by 13%. Our code is publicly available.",
"title": ""
},
{
"docid": "235899b940c658316693d0a481e2d954",
"text": "BACKGROUND\nImmunohistochemical markers are often used to classify breast cancer into subtypes that are biologically distinct and behave differently. The aim of this study was to estimate mortality for patients with the major subtypes of breast cancer as classified using five immunohistochemical markers, to investigate patterns of mortality over time, and to test for heterogeneity by subtype.\n\n\nMETHODS AND FINDINGS\nWe pooled data from more than 10,000 cases of invasive breast cancer from 12 studies that had collected information on hormone receptor status, human epidermal growth factor receptor-2 (HER2) status, and at least one basal marker (cytokeratin [CK]5/6 or epidermal growth factor receptor [EGFR]) together with survival time data. Tumours were classified as luminal and nonluminal tumours according to hormone receptor expression. These two groups were further subdivided according to expression of HER2, and finally, the luminal and nonluminal HER2-negative tumours were categorised according to expression of basal markers. Changes in mortality rates over time differed by subtype. In women with luminal HER2-negative subtypes, mortality rates were constant over time, whereas mortality rates associated with the luminal HER2-positive and nonluminal subtypes tended to peak within 5 y of diagnosis and then decline over time. In the first 5 y after diagnosis the nonluminal tumours were associated with a poorer prognosis, but over longer follow-up times the prognosis was poorer in the luminal subtypes, with the worst prognosis at 15 y being in the luminal HER2-positive tumours. Basal marker expression distinguished the HER2-negative luminal and nonluminal tumours into different subtypes. These patterns were independent of any systemic adjuvant therapy.\n\n\nCONCLUSIONS\nThe six subtypes of breast cancer defined by expression of five markers show distinct behaviours with important differences in short term and long term prognosis. Application of these markers in the clinical setting could have the potential to improve the targeting of adjuvant chemotherapy to those most likely to benefit. The different patterns of mortality over time also suggest important biological differences between the subtypes that may result in differences in response to specific therapies, and that stratification of breast cancers by clinically relevant subtypes in clinical trials is urgently required.",
"title": ""
},
{
"docid": "389a8e74f6573bd5e71b7c725ec3a4a7",
"text": "Paucity of large curated hand-labeled training data forms a major bottleneck in the deployment of machine learning models in computer vision and other fields. Recent work (Data Programming) has shown how distant supervision signals in the form of labeling functions can be used to obtain labels for given data in near-constant time. In this work, we present Adversarial Data Programming (ADP), which presents an adversarial methodology to generate data as well as a curated aggregated label, given a set of weak labeling functions. We validated our method on the MNIST, Fashion MNIST, CIFAR 10 and SVHN datasets, and it outperformed many state-of-the-art models. We conducted extensive experiments to study its usefulness, as well as showed how the proposed ADP framework can be used for transfer learning as well as multi-task learning, where data from two domains are generated simultaneously using the framework along with the label information. Our future work will involve understanding the theoretical implications of this new framework from a game-theoretic perspective, as well as explore the performance of the method on more complex datasets.",
"title": ""
},
{
"docid": "e28f51ea5a09081bd3037a26ca25aebd",
"text": "Eye tracking specialists often need to understand and represent aggregate scanning strategies, but methods to identify similar scanpaths and aggregate multiple scanpaths have been elusive. A new method is proposed here to identify scanning strategies by aggregating groups of matching scanpaths automatically. A dataset of scanpaths is first converted to sequences of viewed area names, which are then represented in a dotplot. Matching sequences in the dotplot are found with linear regressions, and then used to cluster the scanpaths hierarchically. Aggregate scanning strategies are generated for each cluster and presented in an interactive dendrogram. While the clustering and aggregation method works in a bottom-up fashion, based on pair-wise matches, a top-down extension is also described, in which a scanning strategy is first input by cursor gesture, then matched against the dataset. The ability to discover both bottom-up and top-down strategy matches provides a powerful tool for scanpath analysis, and for understanding group scanning strategies.",
"title": ""
},
{
"docid": "52faf4868f53008eec1f3ea4f39ed3f0",
"text": "Hyaluronic acid (HA) soft-tissue fillers are the most popular degradable injectable products used for correcting skin depressions and restoring facial volume loss. From a rheological perspective, HA fillers are commonly characterised through their viscoelastic properties under shear-stress. However, despite the continuous mechanical pressure that the skin applies on the fillers, compression properties in static and dynamic modes are rarely considered. In this article, three different rheological tests (shear-stress test and compression tests in static and dynamic mode) were carried out on nine CE-marked cross-linked HA fillers. Corresponding shear-stress (G', tanδ) and compression (E', tanδc, normal force FN) parameters were measured. We show here that the tested products behave differently under shear-stress and under compression even though they are used for the same indications. G' showed the expected influence on the tissue volumising capacity, and the same influence was also observed for the compression parameters E'. In conclusion, HA soft-tissue fillers exhibit widely different biophysical characteristics and many variables contribute to their overall performance. The elastic modulus G' is not the only critical parameter to consider amongst the rheological properties: the compression parameters E' and FN also provide key information, which should be taken into account for a better prediction of clinical outcomes, especially for predicting the volumising capacity and probably the ability to stimulate collagen production by fibroblasts.",
"title": ""
}
] |
scidocsrr
|
035b5b19237126eeb0a28beda02691df
|
Exploring the patterns of social behavior in GitHub
|
[
{
"docid": "0153774b49121d8735cc3d33df69fc00",
"text": "A common requirement of many empirical software engineering studies is the acquisition and curation of data from software repositories. During the last few years, GitHub has emerged as a popular project hosting, mirroring and collaboration platform. GitHub provides an extensive rest api, which enables researchers to retrieve both the commits to the projects' repositories and events generated through user actions on project resources. GHTorrent aims to create a scalable off line mirror of GitHub's event streams and persistent data, and offer it to the research community as a service. In this paper, we present the project's design and initial implementation and demonstrate how the provided datasets can be queried and processed.",
"title": ""
},
{
"docid": "bac117da7b07fff75cf039165fc4e57e",
"text": "The advent of distributed version control systems has led to the development of a new paradigm for distributed software development; instead of pushing changes to a central repository, developers pull them from other repositories and merge them locally. Various code hosting sites, notably Github, have tapped on the opportunity to facilitate pull-based development by offering workflow support tools, such as code reviewing systems and integrated issue trackers. In this work, we explore how pull-based software development works, first on the GHTorrent corpus and then on a carefully selected sample of 291 projects. We find that the pull request model offers fast turnaround, increased opportunities for community engagement and decreased time to incorporate contributions. We show that a relatively small number of factors affect both the decision to merge a pull request and the time to process it. We also examine the reasons for pull request rejection and find that technical ones are only a small minority.",
"title": ""
}
] |
[
{
"docid": "3b4ad43c44d824749da5487b34f31291",
"text": "Recent terrorist attacks carried out on behalf of ISIS on American and European soil by lone wolf attackers or sleeper cells remind us of the importance of understanding the dynamics of radicalization mediated by social media communication channels. In this paper, we shed light on the social media activity of a group of twenty-five thousand users whose association with ISIS online radical propaganda has been manually verified. By using a computational tool known as dynamical activity-connectivity maps, based on network and temporal activity patterns, we investigate the dynamics of social influence within ISIS supporters. We finally quantify the effectiveness of ISIS propaganda by determining the adoption of extremist content in the general population and draw a parallel between radical propaganda and epidemics spreading, highlighting that information broadcasters and influential ISIS supporters generate highly-infectious cascades of information contagion. Our findings will help generate effective countermeasures to combat the group and other forms of online extremism.",
"title": ""
},
{
"docid": "1e18d34152a15d84993124b1e689714a",
"text": "Objectives\nEconomic, social, technical, and political drivers are fundamentally changing the nature of work and work environments, with profound implications for the field of occupational health. Nevertheless, researchers and practitioners entering the field are largely being trained to assess and control exposures using approaches developed under old models of work and risks.\n\n\nMethods\nA speaker series and symposium were organized to broadly explore current challenges and future directions for the occupational health field. Broad themes identified throughout these discussions are characterized and discussed to highlight important future directions of occupational health.\n\n\nFindings\nDespite the relatively diverse group of presenters and topics addressed, some important cross-cutting themes emerged. Changes in work organization and the resulting insecurity and precarious employment arrangements change the nature of risk to a large fraction of the workforce. Workforce demographics are changing, and economic disparities among working groups are growing. Globalization exacerbates the 'race to the bottom' for cheap labor, poor regulatory oversight, and limited labor rights. Largely, as a result of these phenomena, the historical distinction between work and non-work exposures has become largely artificial and less useful in understanding risks and developing effective public health intervention models. Additional changes related to climate change, governmental and regulatory limitations, and inadequate surveillance systems challenge and frustrate occupational health progress, while new biomedical and information technologies expand the opportunities for understanding and intervening to improve worker health.\n\n\nConclusion\nThe ideas and evidences discussed during this project suggest that occupational health training, professional practice, and research evolve towards a more holistic, public health-oriented model of worker health. This will require engagement with a wide network of stakeholders. Research and training portfolios need to be broadened to better align with the current realities of work and health and to prepare practitioners for the changing array of occupational health challenges.",
"title": ""
},
{
"docid": "a5001e03007f3fd166e15db37dcd3bc7",
"text": "Instrumental learning involves corticostriatal circuitry and the dopaminergic system. This system is typically modeled in the reinforcement learning (RL) framework by incrementally accumulating reward values of states and actions. However, human learning also implicates prefrontal cortical mechanisms involved in higher level cognitive functions. The interaction of these systems remains poorly understood, and models of human behavior often ignore working memory (WM) and therefore incorrectly assign behavioral variance to the RL system. Here we designed a task that highlights the profound entanglement of these two processes, even in simple learning problems. By systematically varying the size of the learning problem and delay between stimulus repetitions, we separately extracted WM-specific effects of load and delay on learning. We propose a new computational model that accounts for the dynamic integration of RL and WM processes observed in subjects' behavior. Incorporating capacity-limited WM into the model allowed us to capture behavioral variance that could not be captured in a pure RL framework even if we (implausibly) allowed separate RL systems for each set size. The WM component also allowed for a more reasonable estimation of a single RL process. Finally, we report effects of two genetic polymorphisms having relative specificity for prefrontal and basal ganglia functions. Whereas the COMT gene coding for catechol-O-methyl transferase selectively influenced model estimates of WM capacity, the GPR6 gene coding for G-protein-coupled receptor 6 influenced the RL learning rate. Thus, this study allowed us to specify distinct influences of the high-level and low-level cognitive functions on instrumental learning, beyond the possibilities offered by simple RL models.",
"title": ""
},
{
"docid": "658ad1e8c3b98c1ccbaa5fe69e762246",
"text": "Depth estimation is a fundamental problem for light field photography applications. Numerous methods have been proposed in recent years, which either focus on crafting cost terms for more robust matching, or on analyzing the geometry of scene structures embedded in the epipolar-plane images. Significant improvements have been made in terms of overall depth estimation error; however, current state-of-the-art methods still show limitations in handling intricate occluding structures and complex scenes with multiple occlusions. To address these challenging issues, we propose a very effective depth estimation framework which focuses on regularizing the initial label confidence map and edge strength weights. Specifically, we first detect partially occluded boundary regions (POBR) via superpixel-based regularization. Series of shrinkage/reinforcement operations are then applied on the label confidence map and edge strength weights over the POBR. We show that after weight manipulations, even a low-complexity weighted least squares model can produce much better depth estimation than the state-of-the-art methods in terms of average disparity error rate, occlusion boundary precision-recall rate, and the preservation of intricate visual features.",
"title": ""
},
{
"docid": "8c4c469a3fee72e93f60fd47ef78d482",
"text": "With the continuously increasing demand of cost effective, broadband wireless access, radio-over-fiber (RoF) starts to gain more and more momentum. Various techniques already exist, using analog (ARoF) or digitized (DRoF) radio signals over fiber; each with their own advantages and disadvantages. By transmitting a sigma delta modulated signal over fiber (SDoF), a similar immunity to impairments as DRoF can be obtained while maintaining the low complexity of ARoF. This letter describes a detailed experimental comparison between ARoF and SDoF that quantifies the improvement in linearity and error vector magnitude (EVM) of SDoF over ARoF. The experiments were carried out using a 16-QAM constellation with a baudrate from 20 to 125 MBd modulated on a central carrier frequency of 1 GHz. The sigma delta modulator runs at 8 or 13.5 Gbps. A high-speed vertical-cavity surface-emitting laser (VCSEL) operating at 850 nm is used to transmit the signal over 200-m multimode fiber. The receiver amplifies the electrical signals and subsequently filters to recover the original RF signal. Compared with ARoF, improvements exceeding 40 dB were measured on the third order intermodulation products when SDoF was employed, the EVM improves between 2.4 and 7.1 dB.",
"title": ""
},
{
"docid": "da8cdee004db530e262a13e21daf4970",
"text": "Arcing between the plasma and the wafer, kit, or target in PVD processes can cause significant wafer damage and foreign material contamination which limits wafer yield. Monitoring the plasma and quickly detecting this arcing phenomena is critical to ensuring that today's PVD processes run optimally and maximize product yield. This is particularly true in 300mm semiconductor manufacturing, where energies used are higher and more product is exposed to the plasma with each wafer run than in similar 200mm semiconductor manufacturing processes.",
"title": ""
},
{
"docid": "d81d4bc4e8d2bfb0db1fd4141bf2191c",
"text": "Anton 2 is a second-generation special-purpose supercomputer for molecular dynamics simulations that achieves significant gains in performance, programmability, and capacity compared to its predecessor, Anton 1. The architecture of Anton 2 is tailored for fine-grained event-driven operation, which improves performance by increasing the overlap of computation with communication, and also allows a wider range of algorithms to run efficiently, enabling many new software-based optimizations. A 512-node Anton 2 machine, currently in operation, is up to ten times faster than Anton 1 with the same number of nodes, greatly expanding the reach of all-atom biomolecular simulations. Anton 2 is the first platform to achieve simulation rates of multiple microseconds of physical time per day for systems with millions of atoms. Demonstrating strong scaling, the machine simulates a standard 23,558-atom benchmark system at a rate of 85 μs/day---180 times faster than any commodity hardware platform or general-purpose supercomputer.",
"title": ""
},
{
"docid": "9167fbdd1fe4d5c17ffeaf50c6fd32b7",
"text": "For many networked games, such as the Defense of the Ancients and StarCraft series, the unofficial leagues created by players themselves greatly enhance user-experience, and extend the success of each game. Understanding the social structure that players of these game s implicitly form helps to create innovative gaming services to the benefit of both players and game operators. But how to extract and analyse the implicit social structure? We address this question by first proposing a formalism consisting of various ways to map interaction to social structure, and apply this to real-world data collected from three different game genres. We analyse the implications of these mappings for in-game and gaming-related services, ranging from network and socially-aware matchmaking of players, to an investigation of social network robustnes against player departure.",
"title": ""
},
{
"docid": "bbb08c98a2265c53ba590e0872e91e1d",
"text": "Reinforcement learning (RL) is one of the most general approaches to learning control. Its applicability to complex motor systems, however, has been largely impossible so far due to the computational difficulties that reinforcement learning encounters in high dimensional continuous state-action spaces. In this paper, we derive a novel approach to RL for parameterized control policies based on the framework of stochastic optimal control with path integrals. While solidly grounded in optimal control theory and estimation theory, the update equations for learning are surprisingly simple and have no danger of numerical instabilities as neither matrix inversions nor gradient learning rates are required. Empirical evaluations demonstrate significant performance improvements over gradient-based policy learning and scalability to high-dimensional control problems. Finally, a learning experiment on a robot dog illustrates the functionality of our algorithm in a real-world scenario. We believe that our new algorithm, Policy Improvement with Path Integrals (PI2), offers currently one of the most efficient, numerically robust, and easy to implement algorithms for RL in robotics.",
"title": ""
},
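The PI2 update described in the abstract above can be illustrated with a short sketch. This is a simplified, episodic variant: rollouts with perturbed parameters are weighted by a softmax over their normalized costs and averaged into a parameter update. The full PI2 algorithm additionally performs per-time-step weighting through the policy's basis functions, which is omitted here; `rollout_cost`, `sigma`, and `lam` are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def pi2_episodic_update(theta, rollout_cost, n_rollouts=10, sigma=0.1, lam=1.0, rng=None):
    """One PI2-style parameter update (simplified, episodic form).

    theta        : current policy parameters (1-D array)
    rollout_cost : user-supplied function mapping a parameter vector to a scalar
                   trajectory cost (e.g. it runs a simulation rollout)
    """
    rng = rng or np.random.default_rng()
    eps = rng.normal(0.0, sigma, size=(n_rollouts, theta.size))   # exploration noise
    costs = np.array([rollout_cost(theta + e) for e in eps])      # evaluate each rollout

    # Softmax over normalized negative costs: low-cost rollouts get high weights.
    s = (costs - costs.min()) / max(costs.max() - costs.min(), 1e-12)
    weights = np.exp(-s / lam)
    weights /= weights.sum()

    # Probability-weighted average of the noise becomes the parameter update.
    return theta + weights @ eps

# Example: minimize a quadratic "trajectory cost" over 100 updates.
theta = np.ones(5)
for _ in range(100):
    theta = pi2_episodic_update(theta, lambda p: float(np.sum(p ** 2)))
print(theta)  # parameters drift towards the low-cost region near zero
```

Note that the update involves no gradients or matrix inversions, which is the property the abstract highlights as the source of the method's numerical robustness.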
{
"docid": "40464db4c2deea0e4c1c3b760745c168",
"text": "It is challenging to effectively check a regular property of a program. This paper presents RGSE, a regular property guided dynamic symbolic execution (DSE) engine, for finding a program path satisfying a regular property as soon as possible. The key idea is to evaluate the candidate branches based on the history and future information, and explore the branches along which the paths are more likely to satisfy the property in priority. We have applied RGSE to 16 real-world open source Java programs, totaling 270K lines of code. Compared with the state-of-the-art, RGSE achieves two orders of magnitude speedups for finding the first target path. RGSE can benefit many research topics of software testing and analysis, such as path-oriented test case generation, typestate bug finding, and performance tuning. The demo video is at: https://youtu.be/7zAhvRIdaUU, and RGSE can be accessed at: http://jrgse.github.io.",
"title": ""
},
{
"docid": "97f153d8139958fd00002e6a2365d965",
"text": "A method is proposed for fused three-dimensional (3-D) shape estimation and visibility analysis of an unknown, markerless, deforming object through a multicamera vision system. Complete shape estimation is defined herein as the process of 3-D reconstruction of a model through fusion of stereo triangulation data and a visual hull. The differing accuracies of both methods rely on the number and placement of the cameras. Stereo triangulation yields a high-density, high-accuracy reconstruction of a surface patch from a small surface area, while a visual hull yields a complete, low-detail volumetric approximation of the object. The resultant complete 3-D model is, then, temporally projected based on the tracked object’s deformation, yielding a robust deformed shape prediction. Visibility and uncertainty analyses, on the projected model, estimate the expected accuracy of reconstruction at the next sampling instant. In contrast to common techniques that rely on a priori known models and identities of static objects, our method is distinct in its direct application to unknown, markerless, deforming objects, where the object model and identity are unknown to the system. Extensive simulations and comparisons, some of which are presented herein, thoroughly demonstrate the proposed method and its benefits over individual reconstruction techniques. © 2016 SPIE and IS&T [DOI: 10.1117/1.JEI.25.4.041009]",
"title": ""
},
{
"docid": "4be57bfa4e510cdf0e8ad833034d7fce",
"text": "Dynamic data flow tracking (DFT) is a technique broadly used in a variety of security applications that, unfortunately, exhibits poor performance, preventing its adoption in production systems. We present ShadowReplica, a new and efficient approach for accelerating DFT and other shadow memory-based analyses, by decoupling analysis from execution and utilizing spare CPU cores to run them in parallel. Our approach enables us to run a heavyweight technique, like dynamic taint analysis (DTA), twice as fast, while concurrently consuming fewer CPU cycles than when applying it in-line. DFT is run in parallel by a second shadow thread that is spawned for each application thread, and the two communicate using a shared data structure. We avoid the problems suffered by previous approaches, by introducing an off-line application analysis phase that utilizes both static and dynamic analysis methodologies to generate optimized code for decoupling execution and implementing DFT, while it also minimizes the amount of information that needs to be communicated between the two threads. Furthermore, we use a lock-free ring buffer structure and an N-way buffering scheme to efficiently exchange data between threads and maintain high cache-hit rates on multi-core CPUs. Our evaluation shows that ShadowReplica is on average ~2.3× faster than in-line DFT (~2.75× slowdown over native execution) when running the SPEC CPU2006 benchmark, while similar speed ups were observed with command-line utilities and popular server software. Astoundingly, ShadowReplica also reduces the CPU cycles used up to 30%.",
"title": ""
},
{
"docid": "0ee09adae30459337f8e7261165df121",
"text": "Mobile malware threats (e.g., on Android) have recently become a real concern. In this paper, we evaluate the state-of-the-art commercial mobile anti-malware products for Android and test how resistant they are against various common obfuscation techniques (even with known malware). Such an evaluation is important for not only measuring the available defense against mobile malware threats, but also proposing effective, next-generation solutions. We developed DroidChameleon, a systematic framework with various transformation techniques, and used it for our study. Our results on 10 popular commercial anti-malware applications for Android are worrisome: none of these tools is resistant against common malware transformation techniques. In addition, a majority of them can be trivially defeated by applying slight transformation over known malware with little effort for malware authors. Finally, in light of our results, we propose possible remedies for improving the current state of malware detection on mobile devices.",
"title": ""
},
{
"docid": "5c0d3c8962d1f18a50162bbf3dcd4658",
"text": "The field of power electronics poses challenging control problems that cannot be treated in a complete manner using traditional modelling and controller design approaches. The main difficulty arises from the hybrid nature of these systems due to the presence of semiconductor switches that induce different modes of operation and operate with a high switching frequency. Since the control techniques traditionally employed in industry feature a significant potential for improving the performance and the controller design, the field of power electronics invites the application of advanced hybrid systems methodologies. The computational power available today and the recent theoretical advances in the control of hybrid systems allow one to tackle these problems in a novel way that improves the performance of the system, and is systematic and implementable. In this paper, this is illustrated by two examples, namely the Direct Torque Control of three-phase induction motors and the optimal control of switch-mode dc-dc converters.",
"title": ""
},
{
"docid": "b27276c9743bdb33c0cb807653588521",
"text": "Most previous neurophysiological studies evoked emotions by presenting visual stimuli. Models of the emotion circuits in the brain have for the most part ignored emotions arising from musical stimuli. To our knowledge, this is the first emotion brain study which examined the influence of visual and musical stimuli on brain processing. Highly arousing pictures of the International Affective Picture System and classical musical excerpts were chosen to evoke the three basic emotions of happiness, sadness and fear. The emotional stimuli modalities were presented for 70 s either alone or combined (congruent) in a counterbalanced and random order. Electroencephalogram (EEG) Alpha-Power-Density, which is inversely related to neural electrical activity, in 30 scalp electrodes from 24 right-handed healthy female subjects, was recorded. In addition, heart rate (HR), skin conductance responses (SCR), respiration, temperature and psychometrical ratings were collected. Results showed that the experienced quality of the presented emotions was most accurate in the combined conditions, intermediate in the picture conditions and lowest in the sound conditions. Furthermore, both the psychometrical ratings and the physiological involvement measurements (SCR, HR, Respiration) were significantly increased in the combined and sound conditions compared to the picture conditions. Finally, repeated measures ANOVA revealed the largest Alpha-Power-Density for the sound conditions, intermediate for the picture conditions, and lowest for the combined conditions, indicating the strongest activation in the combined conditions in a distributed emotion and arousal network comprising frontal, temporal, parietal and occipital neural structures. Summing up, these findings demonstrate that music can markedly enhance the emotional experience evoked by affective pictures.",
"title": ""
},
{
"docid": "c47f251cc62b405be1eb1b105f443466",
"text": "The conceptualization of gender variant populations within studies have consisted of imposed labels and a diversity of individual identities that preclude any attempt at examining the variations found among gender variant populations, while at the same time creating artificial distinctions between groups that may not actually exist. Data were collected from 90 transgender/transsexual people using confidential, self-administered questionnaires. Factors like age of transition, being out to others, and participant's race and class were associated with experiences of transphobic life events. Discrimination can have profound impact on transgender/transsexual people's lives, but different factors can influence one's experience of transphobia. Further studies are needed to examine how transphobia manifests, and how gender characteristics impact people's lives.",
"title": ""
},
{
"docid": "21af4f870f466baa4bdb02b37c4d9656",
"text": "Software maps -- linking rectangular 3D-Treemaps, software system structure, and performance indicators -- are commonly used to support informed decision making in software-engineering processes. A key aspect for this decision making is that software maps provide the structural context required for correct interpretation of these performance indicators. In parallel, source code repositories and collaboration platforms are an integral part of today's software-engineering tool set, but cannot properly incorporate software maps since implementations are only available as stand-alone applications. Hence, software maps are 'disconnected' from the main body of this tool set, rendering their use and provisioning overly complicated, which is one of the main reasons against regular use. We thus present a web-based rendering system for software maps that achieves both fast client-side page load time and interactive frame rates even with large software maps. We significantly reduce page load time by efficiently encoding hierarchy and geometry data for the net transport. Apart from that, appropriate interaction, layouting, and labeling techniques as well as common image enhancements aid evaluation of project-related quality aspects. Metrics provisioning can further be implemented by predefined attribute mappings to simplify communication of project specific quality aspects. The system is integrated into dashboards to demonstrate how our web-based approach makes software maps more accessible to many different stakeholders in software-engineering projects.",
"title": ""
},
{
"docid": "590cf6884af6223ce4e827ba2fe18209",
"text": "1. The extracellular patch clamp method, which first allowed the detection of single channel currents in biological membranes, has been further refined to enable higher current resolution, direct membrane patch potential control, and physical isolation of membrane patches. 2. A description of a convenient method for the fabrication of patch recording pipettes is given together with procedures followed to achieve giga-seals i.e. pipettemembrane seals with resistances of 109–1011Ω. 3. The basic patch clamp recording circuit, and designs for improved frequency response are described along with the present limitations in recording the currents from single channels. 4. Procedures for preparation and recording from three representative cell types are given. Some properties of single acetylcholine-activated channels in muscle membrane are described to illustrate the improved current and time resolution achieved with giga-seals. 5. A description is given of the various ways that patches of membrane can be physically isolated from cells. This isolation enables the recording of single channel currents with well-defined solutions on both sides of the membrane. Two types of isolated cell-free patch configurations can be formed: an inside-out patch with its cytoplasmic membrane face exposed to the bath solution, and an outside-out patch with its extracellular membrane face exposed to the bath solution. 6. The application of the method for the recording of ionic currents and internal dialysis of small cells is considered. Single channel resolution can be achieved when recording from whole cells, if the cell diameter is small (<20μm). 7. The wide range of cell types amenable to giga-seal formation is discussed. The extracellular patch clamp method, which first allowed the detection of single channel currents in biological membranes, has been further refined to enable higher current resolution, direct membrane patch potential control, and physical isolation of membrane patches. A description of a convenient method for the fabrication of patch recording pipettes is given together with procedures followed to achieve giga-seals i.e. pipettemembrane seals with resistances of 109–1011Ω. The basic patch clamp recording circuit, and designs for improved frequency response are described along with the present limitations in recording the currents from single channels. Procedures for preparation and recording from three representative cell types are given. Some properties of single acetylcholine-activated channels in muscle membrane are described to illustrate the improved current and time resolution achieved with giga-seals. A description is given of the various ways that patches of membrane can be physically isolated from cells. This isolation enables the recording of single channel currents with well-defined solutions on both sides of the membrane. Two types of isolated cell-free patch configurations can be formed: an inside-out patch with its cytoplasmic membrane face exposed to the bath solution, and an outside-out patch with its extracellular membrane face exposed to the bath solution. The application of the method for the recording of ionic currents and internal dialysis of small cells is considered. Single channel resolution can be achieved when recording from whole cells, if the cell diameter is small (<20μm). The wide range of cell types amenable to giga-seal formation is discussed.",
"title": ""
},
{
"docid": "17a475b655134aafde0f49db06bec127",
"text": "Estimating the number of persons in a public place provides useful information for video-based surveillance and monitoring applications. In the case of oblique camera setup, counting is either achieved by detecting individuals or by statistically establishing relations between values of simple image features (e.g. amount of moving pixels, edge density, etc.) to the number of people. While the methods of the first category exhibit poor accuracy in cases of occlusions, the second category of methods are sensitive to perspective distortions, and require people to move in order to be counted. In this paper we investigate the possibilities of developing a robust statistical method for people counting. To maximize its applicability scope, we choose-in contrast to the majority of existing methods from this category-not to require prior learning of categories corresponding to different number of people. Second, we search for a suitable way of correcting the perspective distortion. Finally, we link the estimation to a confidence value that takes into account the known factors being of influence on the result. The confidence is then used to refine final results.",
"title": ""
},
{
"docid": "6bd3614d830cbef03c9567bf096e417a",
"text": "Rehabilitation robots start to become an important tool in stroke rehabilitation. Compared to manual arm training, robot-supported training can be more intensive, of longer duration, repetitive and task-oriented. Therefore, these devices have the potential to improve the rehabilitation process in stroke patients. While in the past, most groups have been working with endeffector-based robots, exoskeleton robots become more and more important, mainly because they offer a better guidance of the single human joints, especially during movements with large ranges. Regarding the upper extremities, the shoulder is the most complex human joint and its actuation is, therefore, challenging. This paper deals with shoulder actuation principles for exoskeleton robots. First, a quantitative analysis of the human shoulder movement is presented. Based on that analysis two shoulder actuation principles that provide motion of the center of the glenohumeral joint are presented and evaluated.",
"title": ""
}
] |
scidocsrr
|
dc9b28f89bc3939ec6b55eb4ce11ab84
|
Computer-Based Clinical Decision Support System for Prediction of Heart Diseases Using Naïve Bayes Algorithm
|
[
{
"docid": "30d7f140a5176773611b3c1f8ec4953e",
"text": "The healthcare environment is generally perceived as being ‘information rich’ yet ‘knowledge poor’. There is a wealth of data available within the healthcare systems. However, there is a lack of effective analysis tools to discover hidden relationships and trends in data. Knowledge discovery and data mining have found numerous applications in business and scientific domain. Valuable knowledge can be discovered from application of data mining techniques in healthcare system. In this study, we briefly examine the potential use of classification based data mining techniques such as Rule based, decision tree and Artificial Neural Network to massive volume of healthcare data. In particular we consider a case study using classification techniques on a medical data set of diabetic patients.",
"title": ""
}
] |
[
{
"docid": "eea9332a263b7e703a60c781766620e5",
"text": "The use of topic models to analyze domainspecific texts often requires manual validation of the latent topics to ensure that they are meaningful. We introduce a framework to support such a large-scale assessment of topical relevance. We measure the correspondence between a set of latent topics and a set of reference concepts to quantify four types of topical misalignment: junk, fused, missing, and repeated topics. Our analysis compares 10,000 topic model variants to 200 expertprovided domain concepts, and demonstrates how our framework can inform choices of model parameters, inference algorithms, and intrinsic measures of topical quality.",
"title": ""
},
{
"docid": "29b1aa2ead1e961ddf9ae85e4b53ffa5",
"text": "Robot-assisted dressing offers an opportunity to benefit the lives of many people with disabilities, such as some older adults. However, robots currently lack common sense about the physical implications of their actions on people. The physical implications of dressing are complicated by non-rigid garments, which can result in a robot indirectly applying high forces to a person's body. We present a deep recurrent model that, when given a proposed action by the robot, predicts the forces a garment will apply to a person's body. We also show that a robot can provide better dressing assistance by using this model with model predictive control. The predictions made by our model only use haptic and kinematic observations from the robot's end effector, which are readily attainable. Collecting training data from real world physical human-robot interaction can be time consuming, costly, and put people at risk. Instead, we train our predictive model using data collected in an entirely self-supervised fashion from a physics-based simulation. We evaluated our approach with a PR2 robot that attempted to pull a hospital gown onto the arms of 10 human participants. With a 0.2s prediction horizon, our controller succeeded at high rates and lowered applied force while navigating the garment around a persons fist and elbow without getting caught. Shorter prediction horizons resulted in significantly reduced performance with the sleeve catching on the participants' fists and elbows, demonstrating the value of our model's predictions. These behaviors of mitigating catches emerged from our deep predictive model and the controller objective function, which primarily penalizes high forces.",
"title": ""
},
{
"docid": "42c2e599dbbb00784e2a6837ebd17ade",
"text": "Several real-world classification problems are example-dependent cost-sensitive in nature, where the costs due to misclassification vary between examples. However, standard classification methods do not take these costs into account, and assume a constant cost of misclassification errors. State-of-the-art example-dependent cost-sensitive techniques only introduce the cost to the algorithm, either before or after training, therefore, leaving opportunities to investigate the potential impact of algorithms that take into account the real financial example-dependent costs during an algorithm training. In this paper, we propose an example-dependent cost-sensitive decision tree algorithm, by incorporating the different example-dependent costs into a new cost-based impurity measure and a new cost-based pruning criteria. Then, using three different databases, from three real-world applications: credit card fraud detection, credit scoring and direct marketing, we evaluate the proposed method. The results show that the proposed algorithm is the best performing method for all databases. Furthermore, when compared against a standard decision tree, our method builds significantly smaller trees in only a fifth of the time, while having a superior performance measured by cost savings, leading to a method that not only has more business-oriented results, but also a method that creates simpler models that are easier to analyze. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9f987cd94d103fb3d4496b7d95b6079f",
"text": "In the world of sign language, and gestures, a lot of research work has been done over the past three decades. This has brought about a gradual transition from isolated to continuous, and static to dynamic gesture recognition for operations on a limited vocabulary. In present scenario, human machine interactive systems facilitate communication between the deaf, and hearing people in real world situations. In order to improve the accuracy of recognition, many researchers have deployed methods such as HMM, Artificial Neural Networks, and Kinect platform. Effective algorithms for segmentation, classification, pattern matching and recognition have evolved. The main purpose of this paper is to analyze these methods and to effectively compare them, which will enable the reader to reach an optimal solution. This creates both, challenges and opportunities for sign language recognition related research. KeywordsSign Language Recognition, Hidden Markov Model, Artificial Neural Network, Kinect Platform, Fuzzy Logic.",
"title": ""
},
{
"docid": "14857144b52dbfb661d6ef4cd2c59b64",
"text": "The candidate confirms that the work submitted is his/her own and that appropriate credit has been given where reference has been made to the work of others. i ACKNOWLEDGMENT I am truly indebted and thankful to my scholarship sponsor ―National Information Technology Development Agency (NITDA), Nigeria‖ for giving me the rare privilege to study at the University of Leeds. I am sincerely and heartily grateful to my supervisor Dr. Des McLernon for his valuable support, patience and guidance throughout the course of this dissertation. I am sure it would not have been possible without his help. I would like to express my deep gratitude to Romero-Zurita Nabil for his enthusiastic encouragement, useful critique, recommendation and providing me with great information resources. I also acknowledge my colleague Frempong Kwadwo for his invaluable suggestions and discussion. Finally, I would like to appreciate my parents for their support and encouragement throughout my study at Leeds. Above all, special thanks to God Almighty for the gift of life. ii DEDICATION This thesis is dedicated to family especially; to my parents for inculcating the importance of hardwork and higher education to Omobolanle for being a caring and loving sister. to Abimbola for believing in me.",
"title": ""
},
{
"docid": "e08e0eea0e3f3735b53f9eb76c155f9c",
"text": "The temporal-difference methods TD(λ) and Sarsa(λ) form a core part of modern reinforcement learning. Their appeal comes from their good performance, low computational cost, and their simple interpretation, given by their forward view. Recently, new versions of these methods were introduced, called true online TD(λ) and true online Sarsa(λ), respectively (van Seijen and Sutton, 2014). Algorithmically, these true online methods only make two small changes to the update rules of the regular methods, and the extra computational cost is negligible in most cases. However, they follow the ideas underlying the forward view much more closely. In particular, they maintain an exact equivalence with the forward view at all times, whereas the traditional versions only approximate it for small step-sizes. We hypothesize that these true online methods not only have better theoretical properties, but also dominate the regular methods empirically. In this article, we put this hypothesis to the test by performing an extensive empirical comparison. Specifically, we compare the performance of true online TD(λ)/Sarsa(λ) with regular TD(λ)/Sarsa(λ) on random MRPs, a real-world myoelectric prosthetic arm, and a domain from the Arcade Learning Environment. We use linear function approximation with tabular, binary, and non-binary features. Our results suggest that the true online methods indeed dominate the regular methods. Across all domains/representations the learning speed of the true online methods are often better, but never worse than that of the regular methods. An additional advantage is that no choice between traces has to be made for the true online methods. We show that new true online temporal-difference methods can be derived by making changes to the real-time forward view and then rewriting the update equations.",
"title": ""
},
{
"docid": "a2688a1169babed7e35a52fa875505d4",
"text": "Crowdsourcing label generation has been a crucial component for many real-world machine learning applications. In this paper, we provide finite-sample exponential bounds on the error rate (in probability and in expectation) of hyperplane binary labeling rules for the Dawid-Skene (and Symmetric DawidSkene ) crowdsourcing model. The bounds can be applied to analyze many commonly used prediction methods, including the majority voting, weighted majority voting and maximum a posteriori (MAP) rules. These bound results can be used to control the error rate and design better algorithms. In particular, under the Symmetric Dawid-Skene model we use simulation to demonstrate that the data-driven EM-MAP rule is a good approximation to the oracle MAP rule which approximately optimizes our upper bound on the mean error rate for any hyperplane binary labeling rule. Meanwhile, the average error rate of the EM-MAP rule is bounded well by the upper bound on the mean error rate of the oracle MAP rule in the simulation.",
"title": ""
},
{
"docid": "faf9c570aacd161296de180850153078",
"text": "Two problems occur when bundle adjustment (BA) is applied on long image sequences: the large calculation time and the drift (or error accumulation). In recent work, the calculation time is reduced by local BAs applied in an incremental scheme. The drift may be reduced by fusion of GPS and Structure-from-Motion. An existing fusion method is BA minimizing a weighted sum of image and GPS errors. This paper introduces two constrained BAs for fusion, which enforce an upper bound for the reprojection error. These BAs are alternatives to the existing fusion BA, which does not guarantee a small reprojection error and requires a weight as input. Then the three fusion BAs are integrated in an incremental Structure-from-Motion method based on local BA. Lastly, we will compare the fusion results on a long monocular image sequence and a low cost GPS.",
"title": ""
},
{
"docid": "203f34a946e00211ebc6fce8e2a061ed",
"text": "We propose a new personalized document summarization method that observes a user's personal reading preferences. These preferences are inferred from the user's reading behaviors, including facial expressions, gaze positions, and reading durations that were captured during the user's past reading activities. We compare the performance of our algorithm with that of a few peer algorithms and software packages. The results of our comparative study show that our algorithm can produce more superior personalized document summaries than all the other methods in that the summaries generated by our algorithm can better satisfy a user's personal preferences.",
"title": ""
},
{
"docid": "c6befaca710e45101b9a12dbc8110a0b",
"text": "The realized strategy contents of information systems (IS) strategizing are a result of both deliberate and emergent patterns of action. In this paper, we focus on emergent patterns of action by studying the formation of strategies that build on local technology-mediated practices. This is done through case study research of the emergence of a sustainability strategy at a European automaker. Studying the practices of four organizational sub-communities, we develop a process perspective of sub-communities’ activity-based production of strategy contents. The process model explains the contextual conditions that make subcommunities initiate SI strategy contents production, the activity-based process of strategy contents production, and the IS strategy outcome. The process model, which draws on Jarzabkowski’s strategy-as-practice lens and Mintzberg’s strategy typology, contributes to the growing IS strategizing literature that examines local practices in IS efforts of strategic importance. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d247f00420b872fb0153a343d2b44dd3",
"text": "Network embedding in heterogeneous information networks (HINs) is a challenging task, due to complications of different node types and rich relationships between nodes. As a result, conventional network embedding techniques cannot work on such HINs. Recently, metapathbased approaches have been proposed to characterize relationships in HINs, but they are ineffective in capturing rich contexts and semantics between nodes for embedding learning, mainly because (1) metapath is a rather strict single path node-node relationship descriptor, which is unable to accommodate variance in relationships, and (2) only a small portion of paths can match the metapath, resulting in sparse context information for embedding learning. In this paper, we advocate a new metagraph concept to capture richer structural contexts and semantics between distant nodes. A metagraph contains multiple paths between nodes, each describing one type of relationships, so the augmentation of multiple metapaths provides an effective way to capture rich contexts and semantic relations between nodes. This greatly boosts the ability of metapath-based embedding techniques in handling very sparse HINs. We propose a new embedding learning algorithm, namely MetaGraph2Vec, which uses metagraph to guide the generation of random walks and to learn latent embeddings of multi-typed HIN nodes. Experimental results show that MetaGraph2Vec is able to outperform the state-of-theart baselines in various heterogeneous network mining tasks such as node classification, node clustering, and similarity search.",
"title": ""
},
{
"docid": "8b85dc461c11f44e27caaa8c8816a49b",
"text": "In a Role-Playing Game, finding optimal trajectories is one of the most important tasks. In fact, the strategy decision system becomes a key component of a game engine. Determining the way in which decisions are taken (online, batch or simulated) and the consumed resources in decision making (e.g. execution time, memory) will influence, in mayor degree, the game performance. When classical search algorithms such as A∗ can be used, they are the very first option. Nevertheless, such methods rely on precise and complete models of the search space, and there are many interesting scenarios where their application is not possible. Then, model free methods for sequential decision making under uncertainty are the best choice. In this paper, we propose a heuristic planning strategy to incorporate the ability of heuristic-search in path-finding into a Dyna agent. The proposed Dyna-H algorithm, as A∗ does, selects branches more likely to produce outcomes than other branches. Besides, it has the advantages of being a modelfree online reinforcement learning algorithm. The proposal was evaluated against the one-step Q-Learning and Dyna-Q algorithms obtaining excellent experimental results: Dyna-H significatively overcomes both methods in all experiments. We suggest also, a functional analogy between the proposed sampling from worst trajectories heuristic and the role of dreams (e.g. nightmares) in human behavior.",
"title": ""
},
{
"docid": "7894b8eae0ceacc92ef2103f0ea8e693",
"text": "In this paper, different first and second derivative filters are investigated to find edge map after denoising a corrupted gray scale image. We have proposed a new derivative filter of first order and described a novel approach of edge finding with an aim to find better edge map in a restored gray scale image. Subjective method has been used by visually comparing the performance of the proposed derivative filter with other existing first and second order derivative filters. The root mean square error and root mean square of signal to noise ratio have been used for objective evaluation of the derivative filters. Finally, to validate the efficiency of the filtering schemes different algorithms are proposed and the simulation study has been carried out using MATLAB 5.0.",
"title": ""
},
{
"docid": "c388626855099e1e9f8e5f46d4e271fc",
"text": "The literature assumes that Enterprise Resource Planning (ERP) systems are complex tools. Due to this complexity, ERP produce negative impacts on the users’ acceptation. However, few studies have tried to identify the factors that influence the ERP users’ acceptance. This paper’s aim is to focus on decisive factors influencing the ERP users’ acceptance and use. Specifically, the authors have developed a research model based on the Technology Acceptance Model (TAM) for testing the influence of the Critical Success Factors (CSFs) on ERP implementation. The CSFs used are: (1) top management support, (2) communication, (3) cooperation, (4) training and (5) technological complexity. This research model has offered some evidence about main acceptance factors on ERP which help to set the users’ behavior toward ERP. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "df4477952bc78f9ddca6a637b0d9b990",
"text": "Food preference learning is an important component of wellness applications and restaurant recommender systems as it provides personalized information for effective food targeting and suggestions. However, existing systems require some form of food journaling to create a historical record of an individual's meal selections. In addition, current interfaces for food or restaurant preference elicitation rely extensively on text-based descriptions and rating methods, which can impose high cognitive load, thereby hampering wide adoption.\n In this paper, we propose PlateClick, a novel system that bootstraps food preference using a simple, visual quiz-based user interface. We leverage a pairwise comparison approach with only visual content. Using over 10,028 recipes collected from Yummly, we design a deep convolutional neural network (CNN) to learn the similarity distance metric between food images. Our model is shown to outperform state-of-the-art CNN by 4 times in terms of mean Average Precision. We explore a novel online learning framework that is suitable for learning users' preferences across a large scale dataset based on a small number of interactions (≤ 15). Our online learning approach balances exploitation-exploration and takes advantage of food similarities using preference-propagation in locally connected graphs.\n We evaluated our system in a field study of 227 anonymous users. The results demonstrate that our method outperforms other baselines by a significant margin, and the learning process can be completed in less than one minute. In summary, PlateClick provides a light-weight, immersive user experience for efficient food preference elicitation.",
"title": ""
},
{
"docid": "c4f851911ed4bc21d666cce45d5595eb",
"text": "! ABSTRACT Purpose The lack of a security evaluation method might expose organizations to several risky situations. This paper aims at presenting a cyclical evaluation model of information security maturity. Design/methodology/approach This model was developed through the definition of a set of steps to be followed in order to obtain periodical evaluation of maturity and continuous improvement of controls. Findings – This model is based on controls present in ISO/IEC 27002, provides a means to measure the current situation of information security management through the use of a maturity model and provides a subsidy to take appropriate and feasible improvement actions, based on risks. A case study is performed and the results indicate that the method is efficient for evaluating the current state of information security, to support information security management, risks identification and business and internal control processes. Research limitations/implications It is possible that modifications to the process may be needed where there is less understanding of security requirements, such as in a less mature organization. Originality/value This paper presents a generic model applicable to all kinds of organizations. The main contribution of this paper is the use of a maturity scale allied to the cyclical process of evaluation, providing the generation of immediate indicators for the management of information security. !",
"title": ""
},
{
"docid": "9a08871e40f477aac7b2e15fcf4ab266",
"text": "Article history: Accepted 10 November 2015 Available online xxxx This paper investigates the role of heterogeneity in the insurance sector. Here, heterogeneity is represented by different types of insurance provided and regions served. Using a balanced panel data set on Brazilian insurance companies as a case study, results corroborate this underlying hypothesis of heterogeneity's impact on performance. The implications of this research for practitioners andacademics are not only addressed in termsofmarket segmentation —which ones are the best performers—but also in terms of mergers and acquisitions—as long as insurance companies may increase their performance with the right balance of types of insurance offered and regions served. © 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1bdf1bfe81bf6f947df2254ae0d34227",
"text": "We investigate the problem of incorporating higher-level symbolic score-like information into Automatic Music Transcription (AMT) systems to improve their performance. We use recurrent neural networks (RNNs) and their variants as music language models (MLMs) and present a generative architecture for combining these models with predictions from a frame level acoustic classifier. We also compare different neural network architectures for acoustic modeling. The proposed model computes a distribution over possible output sequences given the acoustic input signal and we present an algorithm for performing a global search for good candidate transcriptions. The performance of the proposed model is evaluated on piano music from the MAPS dataset and we observe that the proposed model consistently outperforms existing transcription methods.",
"title": ""
},
{
"docid": "7332ba6aff8c966d76b1c8f451a02ccf",
"text": "A light-emitting diode (LED) driver compatible with fluorescent lamp (FL) ballasts is presented for a lamp-only replacement without rewiring the existing lamp fixture. Ballasts have a common function to regulate the lamp current, despite widely different circuit topologies. In this paper, magnetic and electronic ballasts are modeled as nonideal current sources and a current-sourced boost converter, which is derived from the duality, is adopted for the power conversion from ballasts. A rectifier circuit with capacitor filaments is proposed to interface the converter with the four-wire output of the ballast. A digital controller emulates the high-voltage discharge of the FL and operates adaptively with various ballasts. A prototype 20-W LED driver for retrofitting T8 36-W FL is evaluated with both magnetic and electronic ballasts. In addition to wide compatibility, accurate regulation of the LED current within 0.6% error and high driver efficiency over 89.7% are obtained.",
"title": ""
}
] |
scidocsrr
|
85466a98cc53eb47040fad30d7570779
|
The multisensory perception of flavor
|
[
{
"docid": "470265e6acd60a190401936fb7121c75",
"text": "Synesthesia is a conscious experience of systematically induced sensory attributes that are not experienced by most people under comparable conditions. Recent findings from cognitive psychology, functional brain imaging and electrophysiology have shed considerable light on the nature of synesthesia and its neurocognitive underpinnings. These cognitive and physiological findings are discussed with respect to a neuroanatomical framework comprising hierarchically organized cortical sensory pathways. We advance a neurobiological theory of synesthesia that fits within this neuroanatomical framework.",
"title": ""
}
] |
[
{
"docid": "5fc02317117c3068d1409a42b025b018",
"text": "Explaining the causes of infeasibility of Boolean formulas has practical applications in numerous fields, such as artificial intelligence (repairing inconsistent knowledge bases), formal verification (abstraction refinement and unbounded model checking), and electronic design (diagnosing and correcting infeasibility). Minimal unsatisfiable subformulas (MUSes) provide useful insights into the causes of infeasibility. An unsatisfiable formula often has many MUSes. Based on the application domain, however, MUSes with specific properties might be of interest. In this paper, we tackle the problem of finding a smallest-cardinality MUS (SMUS) of a given formula. An SMUS provides a succinct explanation of infeasibility and is valuable for applications that are heavily affected by the size of the explanation. We present (1) a baseline algorithm for finding an SMUS, founded on earlier work for finding all MUSes, and (2) a new branch-and-bound algorithm called Digger that computes a strong lower bound on the size of an SMUS and splits the problem into more tractable subformulas in a recursive search tree. Using two benchmark suites, we experimentally compare Digger to the baseline algorithm and to an existing incomplete genetic algorithm approach. Digger is shown to be faster in nearly all cases. It is also able to solve far more instances within a given runtime limit than either of the other approaches.",
"title": ""
},
{
"docid": "74c48ec7adb966fc3024ed87f6102a1a",
"text": "Quantitative accessibility metrics are widely used in accessibility evaluation, which synthesize a summative value to represent the accessibility level of a website. Many of these metrics are the results of a two-step process. The first step is the inspection with regard to potential barriers while different properties are reported, and the second step aggregates these fine-grained reports with varying weights for checkpoints. Existing studies indicate that finding appropriate weights for different checkpoint types is a challenging issue. Although some metrics derive the checkpoint weights from the WCAG priority levels, previous investigations reveal that the correlation between the WCAG priority levels and the user experience is not significant. Moreover, our website accessibility evaluation results also confirm the mismatches between the ranking of websites using existing metrics and the ranking based on user experience. To overcome this limitation, we propose a novel metric called the Web Accessibility Experience Metric (WAEM) that can better match the accessibility evaluation results with the user experience of people with disabilities by aligning the evaluation metric with the partial user experience order (PUEXO), i.e. pairwise comparisons between different websites. A machine learning model is developed to derive the optimal checkpoint weights from the PUEXO. Experiments on real-world web accessibility evaluation data sets validate the effectiveness of WAEM.",
"title": ""
},
{
"docid": "f296b374b635de4f4c6fc9c6f415bf3e",
"text": "People increasingly use the Internet for obtaining information regarding diseases, diagnoses and available treatments. Currently, many online health portals already provide non-personalized health information in the form of articles. However, it can be challenging to find information relevant to one's condition, interpret this in context, and understand the medical terms and relationships. Recommender Systems (RS) already help these systems perform precise information filtering. In this short paper, we look one step ahead and show the progress made towards RS helping users find personalized, complex medical interventions or support them with preventive healthcare measures. We identify key challenges that need to be addressed for RS to offer the kind of decision support needed in high-risk domains like healthcare.",
"title": ""
},
{
"docid": "6e30761b695e22a29f98a051dbccac6f",
"text": "This paper explores the use of clickthrough data for query spelling correction. First, large amounts of query-correction pairs are derived by analyzing users' query reformulation behavior encoded in the clickthrough data. Then, a phrase-based error model that accounts for the transformation probability between multi-term phrases is trained and integrated into a query speller system. Experiments are carried out on a human-labeled data set. Results show that the system using the phrase-based error model outperforms significantly its baseline systems.",
"title": ""
},
{
"docid": "bb6d1f3618d8a3427f642c3db75ef1ed",
"text": "In this letter, we propose a dual linearly polarized unit cell with 1-bit phase resolution for transmitarray application in X-band. It consists of two-layer metallic patterns connected by a metallized via-hole. One layer of the metallic pattern is a rectangular patch with two p-i-n diodes loaded in O-slot along electric field polarization direction, which is utilized as a receiver-antenna to achieve 1-bit phase tuning. The other metallic pattern is a dual linearly polarized transmitter-antenna that adopts a square ring patch with two p-i-n diodes distributed at the cross-polarization directions. The simulation results show that the designed antenna can achieve 1-bit phase tuning and linearly polarization reconfiguration at 10.5 GHz with insertion loss of about 1.1 dB. The characteristic of the designed transmitarray element is then experimentally validated by an ad-hoc waveguide simulator. The measured results agree with the simulated ones.",
"title": ""
},
{
"docid": "fe142a6a39b17aa0a901cebbd759c003",
"text": "Distant supervision has been widely used in the task of relation extraction (RE). However, when we carefully examine the experimental settings of previous work, we find two issues: (i) The compared models were trained on different training datasets. (ii) The existing testing data contains noise and bias issues. These issues may affect the conclusions in previous work. In this paper, our primary aim is to re-examine the distant supervision-based approaches under the experimental settings without the above issues. We approach this by training models on the same dataset and creating a new testing dataset annotated by the workers on Amzaon Mechanical Turk. We draw new conclusions based on the new testing dataset. The new testing data can be obtained from http://aka.ms/relationie.",
"title": ""
},
{
"docid": "e6a5ce99e55594cd945a57f801bd2d35",
"text": "Cloud Computing is a powerful, flexible, cost efficient platform for providing consumer IT services over the Internet. However Cloud Computing has various levels of risk factors because most important information is outsourced by third party vendors, which means harder to maintain the level of security for data. Steganography is art of hiding information in an image. In this most of the techniques are based on the Least Significant Bit(LSB) bit ,but the hackers easily detect as it embed data sequentially in all pixels .Instead of embedding data sequentially some of the techniques choose randomly. A better approach for this chooses edge pixels for embedding data. So we propose novel technique to hide the data in the Fibonacci edge pixels of an image by extending previous edge based algorithms. This algorithm hides the data in the Fibonacci edge pixels of an image and thus ensures better security against attackers.",
"title": ""
},
{
"docid": "fe06ac2458e00c5447a255486189f1d1",
"text": "The design and control of robots from the perspective of human safety is desired. We propose a mechanical compliance control system as a new pneumatic arm control system. However, safety against collisions with obstacles in an unpredictable environment is difficult to insure in previous system. The main feature of the proposed system is that the two desired pressure values are calculated by using two other desired values, the end compliance of the arm and the end position and posture of the arm.",
"title": ""
},
{
"docid": "d2f36cc750703f5bbec2ea3ef4542902",
"text": "ixed reality (MR) is a kind of virtual reality (VR) but a broader concept than augmented reality (AR), which augments the real world with synthetic electronic data. On the opposite side, there is a term, augmented virtuality (AV), which enhances or augments the virtual environment (VE) with data from the real world. Mixed reality covers a continuum from AR to AV. This concept embraces the definition of MR stated by Paul Milgram. 1 We participated in the Key Technology Research Project on Mixed Reality Systems (MR Project) in Japan. The Japanese government and Canon funded the Mixed Reality Systems Laboratory (MR Lab) and launched it in January 1997. We completed this national project in March 2001. At the end of the MR Project, an event called MiRai-01 (mirai means future in Japanese) was held at Yokohama, Japan, to demonstrate this emerging technology all over the world. This event was held in conjunction with two international conferences, IEEE Virtual Reality 2001 and the Second International Symposium on Mixed Reality (ISMR) and aggregated about 3,000 visitors for two days. This project aimed to produce an innovative information technology that could be used in the first decade of the 21st century while expanding the limitations of traditional VR technology. The basic policy we maintained throughout this project was to emphasize a pragmatic system development rather than a theory and to make such a system always available to people. Since MR is an advanced form of VR, the MR system inherits a VR char-acteristic—users can experience the world of MR interactively. According to this policy, we tried to make the system work in real time. Then, we enhanced each of our systems in their response speed and image quality in real time to increase user satisfaction. We describe the aim and research themes of the MR Project in Tamura et al. 2 To develop MR systems along this policy, we studied the fundamental problems of AR and AV and developed several methods to solve them in addition to system development issues. For example, we created a new image-based rendering method for AV systems, hybrid registration methods, and new types of see-through head-mounted displays (ST-HMDs) for AR systems. Three universities in Japan—University of Tokyo (Michi-taka Hirose), University of Tsukuba (Yuichic Ohta), and Hokkaido University (Tohru Ifukube)—collaborated with us to study the broad research area of MR. The side-bar, \" Four Types of MR Visual Simulation, …",
"title": ""
},
{
"docid": "535b093171db9cfafba4fc91c4254137",
"text": "Millimeter-wave communication is one way to alleviate the spectrum gridlock at lower frequencies while simultaneously providing high-bandwidth communication channels. MmWave makes use of MIMO through large antenna arrays at both the base station and the mobile station to provide sufficient received signal power. This article explains how beamforming and precoding are different in MIMO mmWave systems than in their lower-frequency counterparts, due to different hardware constraints and channel characteristics. Two potential architectures are reviewed: hybrid analog/digital precoding/combining and combining with low-resolution analog- to-digital converters. The potential gains and design challenges for these strategies are discussed, and future research directions are highlighted.",
"title": ""
},
{
"docid": "7d09c7f94dda81e095b80736e229d00e",
"text": "With the constant deepening of research on marine environment simulation and information expression, there are higher and higher requirements for the sense of reality of ocean data visualization results and the real-time interaction in the visualization process. This paper tackle the challenge of key technology of three-dimensional interaction and volume rendering technology based on GPU technology, develops large scale marine hydrological environmental data-oriented visualization software and realizes oceanographic planar graph, contour line rendering, isosurface rendering, factor field volume rendering and dynamic simulation of current field. To express the spatial characteristics and real-time update of massive marine hydrological environmental data better, this study establishes nodes in the scene for the management of geometric objects to realize high-performance dynamic rendering. The system employs CUDA (Computing Unified Device Architecture) parallel computing for the improvement of computation rate, uses NetCDF (Network Common Data Form) file format for data access and applies GPU programming technology to realize fast volume rendering of marine water environmental factors. The visualization software of marine hydrological environment developed can simulate and show properties and change process of marine water environmental factors efficiently and intuitively. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "76262c43c175646d7a00e02a7a49ab81",
"text": "Self-compassion has been linked to higher levels of psychological well-being. The current study evaluated whether this effect also extends to a more adaptive food intake process. More specifically, this study investigated the relationship between self-compassion and intuitive eating among 322 college women. In order to further clarify the nature of this relationship this research additionally examined the indirect effects of self-compassion on intuitive eating through the pathways of distress tolerance and body image acceptance and action using both parametric and non-parametric bootstrap resampling analytic procedures. Results based on responses to the self-report measures of the constructs of interest indicated that individual differences in body image acceptance and action (β = .31, p < .001) but not distress tolerance (β = .00, p = .94) helped explain the relationship between self-compassion and intuitive eating. This effect was retained in a subsequent model adjusted for body mass index (BMI) and self-esteem (β = .19, p < .05). Results provide preliminary support for a complementary perspective on the role of acceptance in the context of intuitive eating to that of existing theory and research. The present findings also suggest the need for additional research as it relates to the development and fostering of self-compassion as well as the potential clinical implications of using acceptance-based interventions for college-aged women currently engaging in or who are at risk for disordered eating patterns.",
"title": ""
},
{
"docid": "7ba33d9f57a6fde047e246d154869ded",
"text": "UNLABELLED\nOrthodontic camouflage in patients with slight or moderate skeletal Class III malocclusions, can be obtained through different treatment alternatives. The purpose of this paper is to present a treatment that has not been described in the literature and which consists of the extraction of lower second molars and distal movement of the posterior segments by means of mandibular cervical headgear (MCH) and fixed appliances as a camouflage alternative. The force applied by the MCH was 250 gr per side (14hr/day). The total treatment time was 1 1/2 years.\n\n\nCONCLUSION\nthe extraction of lower second molars along with the use of mandibular cervical headgear is a good treatment alternative for camouflage in moderate Class III patients in order to obtain good occlusal relationships without affecting facial esthetics or producing marked dental compensations.",
"title": ""
},
{
"docid": "b773df87bf97191a8dd33bd81a7ee2e5",
"text": "We consider the problem of recommending comment-worthy articles such as news and blog-posts. An article is defined to be comment-worthy for a particular user if that user is interested to leave a comment on it. We note that recommending comment-worthy articles calls for elicitation of commenting-interests of the user from the content of both the articles and the past comments made by users. We thus propose to develop content-driven user profiles to elicit these latent interests of users in commenting and use them to recommend articles for future commenting. The difficulty of modeling comment content and the varied nature of users' commenting interests make the problem technically challenging. The problem of recommending comment-worthy articles is resolved by leveraging article and comment content through topic modeling and the co-commenting pattern of users through collaborative filtering, combined within a novel hierarchical Bayesian modeling approach. Our solution, Collaborative Correspondence Topic Models (CCTM), generates user profiles which are leveraged to provide a personalized ranking of comment-worthy articles for each user. Through these content-driven user profiles, CCTM effectively handle the ubiquitous problem of cold-start without relying on additional meta-data. The inference problem for the model is intractable with no off-the-shelf solution and we develop an efficient Monte Carlo EM algorithm. CCTM is evaluated on three real world data-sets, crawled from two blogs, ArsTechnica (AT) Gadgets (102,087 comments) and AT-Science (71,640 comments), and a news site, DailyMail (33,500 comments). We show average improvement of 14% (warm-start) and 18% (cold-start) in AUC, and 80% (warm-start) and 250% (cold-start) in Hit-Rank@5, over state of the art.",
"title": ""
},
{
"docid": "98cef46a572d3886c8a11fa55f5ff83c",
"text": "Deep convolutional neural networks (CNNs) have proven highly effective for visual recognition, where learning a universal representation from activations of convolutional layer plays a fundamental problem. In this paper, we present Fisher Vector encoding with Variational Auto-Encoder (FV-VAE), a novel deep architecture that quantizes the local activations of convolutional layer in a deep generative model, by training them in an end-to-end manner. To incorporate FV encoding strategy into deep generative models, we introduce Variational Auto-Encoder model, which steers a variational inference and learning in a neural network which can be straightforwardly optimized using standard stochastic gradient method. Different from the FV characterized by conventional generative models (e.g., Gaussian Mixture Model) which parsimoniously fit a discrete mixture model to data distribution, the proposed FV-VAE is more flexible to represent the natural property of data for better generalization. Extensive experiments are conducted on three public datasets, i.e., UCF101, ActivityNet, and CUB-200-2011 in the context of video action recognition and fine-grained image classification, respectively. Superior results are reported when compared to state-of-the-art representations. Most remarkably, our proposed FV-VAE achieves to-date the best published accuracy of 94.2% on UCF101.",
"title": ""
},
{
"docid": "938f8383d25d30b39b6cd9c78d1b3ab5",
"text": "In the last two decades, the Lattice Boltzmann method (LBM) has emerged as a promising tool for modelling the Navier-Stokes equations and simulating complex fluid flows. LBM is based on microscopic models and mesoscopic kinetic equations. In some perspective, it can be viewed as a finite difference method for solving the Boltzmann transport equation. Moreover the Navier-Stokes equations can be recovered by LBM with a proper choice of the collision operator. In Section 2 and 3, we first introduce this method and describe some commonly used boundary conditions. In Section 4, the validity of this method is confirmed by comparing the numerical solution to the exact solution of the steady plane Poiseuille flow and convergence of solution is established. Some interesting numerical simulations, including the lid-driven cavity flow, flow past a circular cylinder and the Rayleigh-Bénard convection for a range of Reynolds numbers, are carried out in Section 5, 6 and 7. In Section 8, we briefly highlight the procedure of recovering the Navier-Stokes equations from LBM. A summary is provided in Section 9.",
"title": ""
},
{
"docid": "2a58426989cbfab0be9e18b7ee272b0a",
"text": "Potholes are a nuisance, especially in the developing world, and can often result in vehicle damage or physical harm to the vehicle occupants. Drivers can be warned to take evasive action if potholes are detected in real-time. Moreover, their location can be logged and shared to aid other drivers and road maintenance agencies. This paper proposes a vehicle-based computer vision approach to identify potholes using a window-mounted camera. Existing literature on pothole detection uses either theoretically constructed pothole models or footage taken from advantageous vantage points at low speed, rather than footage taken from within a vehicle at speed. A distinguishing feature of the work presented in this paper is that a thorough exercise was performed to create an image library of actual and representative potholes under different conditions, and results are obtained using a part of this library. A model of potholes is constructed using the image library, which is used in an algorithmic approach that combines a road colour model with simple image processing techniques such as a Canny filter and contour detection. Using this approach, it was possible to detect potholes with a precision of 81.8% and recall of 74.4.%.",
"title": ""
},
{
"docid": "8fc05d9e26c0aa98ffafe896d8c5a01b",
"text": "We describe our clinical question answering system implemented for the Text Retrieval Conference (TREC 2016) Clinical Decision Support (CDS) track. We submitted five runs using a combination of knowledge-driven (based on a curated knowledge graph) and deep learning-based (using key-value memory networks) approaches to retrieve relevant biomedical articles for answering generic clinical questions (diagnoses, treatment, and test) for each clinical scenario provided in three forms: notes, descriptions, and summaries. The submitted runs were varied based on the use of notes, descriptions, or summaries in association with different diagnostic inferencing methodologies applied prior to biomedical article retrieval. Evaluation results demonstrate that our systems achieved best or close to best scores for 20% of the topics and better than median scores for 40% of the topics across all participants considering all evaluation measures. Further analysis shows that on average our clinical question answering system performed best with summaries using diagnostic inferencing from the knowledge graph whereas our key-value memory network model with notes consistently outperformed the knowledge graph-based system for notes and descriptions. ∗The author is also affiliated with Worcester Polytechnic Institute (szhao@wpi.edu). †The author is also affiliated with Northwestern University (kathy.lee@eecs.northwestern.edu). ‡The author is also affiliated with Brandeis University (aprakash@brandeis.edu).",
"title": ""
},
{
"docid": "5a716affe340e69dffef3cc1532f7c33",
"text": "The automated separation of plastic waste fractions intended for mechanical recycling is associated with substantial investments. It is therefore essential to evaluate to what degree separation really brings value to waste plastics as raw materials for new products. The possibility of reducing separation requirements and broadening the range of possible applications for recycled materials through the addition of elastomers, mineral fillers or other additives, should also taken into consideration. Material from a Swedish collection system for rigid (non-film) plastic packaging waste was studied. The non-film polyolefin fraction, which dominated the collected material, consisted of 55% polyethylene (PE) and 45% polypropylene (PP). Mechanical tests for injection-moulded blends of varying composition showed that complete separation of PE and PP is favourable for yield strength, impact strength, tensile energy to break and tensile modulus. Yield strength exhibited a minimum at 80% PE whereas fracture toughness was lowest for blends with 80% PP. The PE fraction, which was dominated by blow-moulded high density polyethylene (HDPE) containers, could be made more suitable for injection-moulding by commingling with the PP fraction. Nucleating agents present in the recycled material were found to influence the microstructure by causing PP to crystallise at a higher temperature than PE in PP-rich blends but not in PE-rich blends. Studies of sheet-extruded multi-component polyolefin mixtures, containing some film plastics, showed that fracture toughness was severely disfavoured if the PE-film component was dominated by low density polyethylene (LDPE) rather than linear low density polyethylene (LLDPE). This trend was reduced when the non-film component was dominated by bottle -grade HDPE. A modifier can be added if it is desired to increase fracture toughness or if there are substantial variations in the composition of the waste-stream. A very low density polyethylene (VLDPE) was found to be a more effective modifier than poly(ethylene-co-vinyl acetate) and poly(1-butene). The addition of 20% VLDPE to multi-component polyolefin mixtures increased the tensile strength and tear propagation resistance by 30% on average, while standard deviations for mechanical properties were reduced by 50%, which would allow product quality to be kept more consistent. ABS was found to be more sensitive to contamination by small amounts of talc-filled PP than viceversa. Contamination levels over 3% of talc -filled PP in ABS gave a very brittle material whereas talcfilled PP retained a ductile behaviour in blends with up to 9% ABS. Compatibility in blends of ABS, high-impact polystyrene and talc -filled PP was poorer at high deformation rates, as opposed to blends of PE and PP from rigid packaging waste where incompatibility was lower at fast deformation. This difference was explained by a higher degree of interfacial interaction through chain entanglements in PE/PP blends.",
"title": ""
},
{
"docid": "8cd62b12b4406db29b289a3e1bd5d05a",
"text": "Humor generation is a very hard problem in the area of computational humor. In this paper, we present a joke generation model based on neural networks. The model can generate a short joke relevant to the topic that the user specifies. Inspired by the architecture of neural machine translation and neural image captioning, we use an encoder for representing user-provided topic information and an RNN decoder for joke generation. We trained the model by short jokes of Conan O’Brien with the help of POS Tagger. We evaluate the performance of our model by human ratings from five English speakers. In terms of the average score, our model outperforms a probabilistic model that puts words into slots in a fixed-structure sentence.",
"title": ""
}
] |
scidocsrr
|
ff87137881321554168d6922bafec025
|
Benchmarking Database Systems A Systematic Approach
|
[
{
"docid": "978b1e9b3a5c4c92f265795a944e575d",
"text": "The currently operational (March 1976) version of the INGRES database management system is described. This multiuser system gives a relational view of data, supports two high level nonprocedural data sublanguages, and runs as a collection of user processes on top of the UNIX operating system for Digital Equipment Corporation PDP 11/40, 11/45, and 11/70 computers. Emphasis is on the design decisions and tradeoffs related to (1) structuring the system into processes, (2) embedding one command language in a general purpose programming language, (3) the algorithms implemented to process interactions, (4) the access methods implemented, (5) the concurrency and recovery control currently provided, and (6) the data structures used for system catalogs and the role of the database administrator.\nAlso discussed are (1) support for integrity constraints (which is only partly operational), (2) the not yet supported features concerning views and protection, and (3) future plans concerning the system.",
"title": ""
}
] |
[
{
"docid": "e0b85ff6cd78f1640f25215ede3a39e6",
"text": "Grammatical error diagnosis is an important task in natural language processing. This paper introduces our Chinese Grammatical Error Diagnosis (CGED) system in the NLP-TEA-3 shared task for CGED. The CGED system can diagnose four types of grammatical errors which are redundant words (R), missing words (M), bad word selection (S) and disordered words (W). We treat the CGED task as a sequence labeling task and describe three models, including a CRFbased model, an LSTM-based model and an ensemble model using stacking. We also show in details how we build and train the models. Evaluation includes three levels, which are detection level, identification level and position level. On the CGED-HSK dataset of NLP-TEA-3 shared task, our system presents the best F1-scores in all the three levels and also the best recall in the last two levels.",
"title": ""
},
{
"docid": "6af138889b6eaeaa6ea8ee4edd7f8aaf",
"text": "University of Leipzig, Natural Language Processing Department, Johannisgasse 26, 04081 Leipzig, Germany robert.remus@googlemail.com, {quasthoff, heyer}@informatik.uni-leipzig.de Abstract SentimentWortschatz, or SentiWS for short, is a publicly available German-language resource for sentiment analysis, opinion mining etc. It lists positive and negative sentiment bearing words weighted within the interval of [−1; 1] plus their part of speech tag, and if applicable, their inflections. The current version of SentiWS (v1.8b) contains 1,650 negative and 1,818 positive words, which sum up to 16,406 positive and 16,328 negative word forms, respectively. It not only contains adjectives and adverbs explicitly expressing a sentiment, but also nouns and verbs implicitly containing one. The present work describes the resource’s structure, the three sources utilised to assemble it and the semi-supervised method incorporated to weight the strength of its entries. Furthermore the resource’s contents are extensively evaluated using a German-language evaluation set we constructed. The evaluation set is verified being reliable and its shown that SentiWS provides a beneficial lexical resource for German-language sentiment analysis related tasks to build on.",
"title": ""
},
{
"docid": "77c18ca76341a691b7c0093a88583c82",
"text": "Biometric characteristics can be utilized in order to enable reliable and robust-to-impostor-attacks person recognition. Speaker recognition technology is commonly utilized in various systems enabling natural human computer interaction. The majority of the speaker recognition systems rely only on acoustic information, ignoring the visual modality. However, visual information conveys correlated and complimentary information to the audio information and its integration into a recognition system can potentially increase the system's performance, especially in the presence of adverse acoustic conditions. Acoustic and visual biometric signals, such as the person's voice and face, can be obtained using unobtrusive and user-friendly procedures and low-cost sensors. Developing unobtrusive biometric systems makes biometric technology more socially acceptable and accelerates its integration into every day life. In this paper, we describe the main components of audio-visual biometric systems, review existing systems and their performance, and discuss future research and development directions in this area",
"title": ""
},
{
"docid": "a78913db9636369b2d7d8cb5e5a6a351",
"text": "We propose a simple but strong baseline for time series classification from scratch with deep neural networks. Our proposed baseline models are pure end-to-end without any heavy preprocessing on the raw data or feature crafting. The proposed Fully Convolutional Network (FCN) achieves premium performance to other state-of-the-art approaches and our exploration of the very deep neural networks with the ResNet structure is also competitive. The global average pooling in our convolutional model enables the exploitation of the Class Activation Map (CAM) to find out the contributing region in the raw data for the specific labels. Our models provides a simple choice for the real world application and a good starting point for the future research. An overall analysis is provided to discuss the generalization capability of our models, learned features, network structures and the classification semantics.",
"title": ""
},
{
"docid": "f264d5b90dfb774e9ec2ad055c4ebe62",
"text": "Automatic citation recommendation can be very useful for authoring a paper and is an AI-complete problem due to the challenge of bridging the semantic gap between citation context and the cited paper. It is not always easy for knowledgeable researchers to give an accurate citation context for a cited paper or to find the right paper to cite given context. To help with this problem, we propose a novel neural probabilistic model that jointly learns the semantic representations of citation contexts and cited papers. The probability of citing a paper given a citation context is estimated by training a multi-layer neural network. We implement and evaluate our model on the entire CiteSeer dataset, which at the time of this work consists of 10,760,318 citation contexts from 1,017,457 papers. We show that the proposed model significantly outperforms other stateof-the-art models in recall, MAP, MRR, and nDCG.",
"title": ""
},
{
"docid": "f35d164bd1b19f984b10468c41f149e3",
"text": "Recent technological advancements have led to a deluge of data from distinctive domains (e.g., health care and scientific sensors, user-generated data, Internet and financial companies, and supply chain systems) over the past two decades. The term big data was coined to capture the meaning of this emerging trend. In addition to its sheer volume, big data also exhibits other unique characteristics as compared with traditional data. For instance, big data is commonly unstructured and require more real-time analysis. This development calls for new system architectures for data acquisition, transmission, storage, and large-scale data processing mechanisms. In this paper, we present a literature survey and system tutorial for big data analytics platforms, aiming to provide an overall picture for nonexpert readers and instill a do-it-yourself spirit for advanced audiences to customize their own big-data solutions. First, we present the definition of big data and discuss big data challenges. Next, we present a systematic framework to decompose big data systems into four sequential modules, namely data generation, data acquisition, data storage, and data analytics. These four modules form a big data value chain. Following that, we present a detailed survey of numerous approaches and mechanisms from research and industry communities. In addition, we present the prevalent Hadoop framework for addressing big data challenges. Finally, we outline several evaluation benchmarks and potential research directions for big data systems.",
"title": ""
},
{
"docid": "2baf55123171c6e2110b19b1583c3d17",
"text": "A novel three-way power divider using tapered lines is presented. It has several strip resistors which are formed like a ladder between the tapered-line conductors to achieve a good output isolation. The equivalent circuits are derived with the EE/OE/OO-mode analysis based on the fundamental propagation modes in three-conductor coupled lines. The fabricated three-way power divider shows a broadband performance in input return loss which is greater than 20 dB over a 3:1 bandwidth in the C-Ku bands.",
"title": ""
},
{
"docid": "86497dcdfd05162804091a3368176ad5",
"text": "This paper reviews the current status and implementation of battery chargers, charging power levels and infrastructure for plug-in electric vehicles and hybrids. Battery performance depends both on types and design of the batteries, and on charger characteristics and charging infrastructure. Charger systems are categorized into off-board and on-board types with unidirectional or bidirectional power flow. Unidirectional charging limits hardware requirements and simplifies interconnection issues. Bidirectional charging supports battery energy injection back to the grid. Typical onboard chargers restrict the power because of weight, space and cost constraints. They can be integrated with the electric drive for avoiding these problems. The availability of a charging infrastructure reduces on-board energy storage requirements and costs. On-board charger systems can be conductive or inductive. While conductive chargers use direct contact, inductive chargers transfer power magnetically. An off-board charger can be designed for high charging rates and is less constrained by size and weight. Level 1 (convenience), Level 2 (primary), and Level 3 (fast) power levels are discussed. These system configurations vary from country to country depending on the source and plug capacity standards. Various power level chargers and infrastructure configurations are presented, compared, and evaluated based on amount of power, charging time and location, cost, equipment, effect on the grid, and other factors.",
"title": ""
},
{
"docid": "e43242ed17a0b2fa9fca421179135ce1",
"text": "Direct digital synthesis (DDS) is a useful tool for generating periodic waveforms. In this two-part article, the basic idea of this synthesis technique is presented and then focused on the quality of the sinewave a DDS can create, introducing the SFDR quality parameter. Next effective methods to increase the SFDR are presented through sinewave approximations, hardware schemes such as dithering and noise shaping, and an extensive list of reference. When the desired output is a digital signal, the signal's characteristics can be accurately predicted using the formulas given in this article. When the desired output is an analog signal, the reader should keep in mind that the performance of the DDS is eventually limited by the performance of the digital-to-analog converter and the follow-on analog filter. Hoping that this article would incite engineers to use DDS either in integrated circuits DDS or software-implemented DDS. From the author's experience, this technique has proven valuable when frequency resolution is the challenge, particularly when using low-cost microcontrollers.",
"title": ""
},
{
"docid": "b8d8785968023a38d742abc15c01ee28",
"text": "Cryptocurrencies (or digital tokens, digital currencies, e.g., BTC, ETH, XRP, NEO) have been rapidly gaining ground in use, value, and understanding among the public, bringing astonishing profits to investors. Unlike other money and banking systems, most digital tokens do not require central authorities. Being decentralized poses significant challenges for credit rating. Most ICOs are currently not subject to government regulations, which makes a reliable credit rating system for ICO projects necessary and urgent. In this paper, we introduce ICORATING, the first learning–based cryptocurrency rating system. We exploit natural-language processing techniques to analyze various aspects of 2,251 digital currencies to date, such as white paper content, founding teams, Github repositories, websites, etc. Supervised learning models are used to correlate the life span and the price change of cryptocurrencies with these features. For the best setting, the proposed system is able to identify scam ICO projects with 0.83 precision. We hope this work will help investors identify scam ICOs and attract more efforts in automatically evaluating and analyzing ICO projects. 1 2 Author contributions: J. Li designed research; Z. Sun, Z. Deng, F. Li and P. Shi prepared the data; S. Bian and A. Yuan contributed analytic tools; P. Shi and Z. Deng labeled the dataset; J. Li, W. Monroe and W. Wang designed the experiments; J. Li, W. Wu, Z. Deng and T. Zhang performed the experiments; J. Li and T. Zhang wrote the paper; W. Monroe and A. Yuan proofread the paper. Author Contacts: Figure 1: Market capitalization v.s. time. Figure 2: The number of new ICO projects v.s. time.",
"title": ""
},
{
"docid": "3a95be7cbc37f20a6c41b84f78013263",
"text": "We demonstrate a simple strategy to cope with missing data in sequential inputs, addressing the task of multilabel classification of diagnoses given clinical time series. Collected from the pediatric intensive care unit (PICU) at Children’s Hospital Los Angeles, our data consists of multivariate time series of observations. The measurements are irregularly spaced, leading to missingness patterns in temporally discretized sequences. While these artifacts are typically handled by imputation, we achieve superior predictive performance by treating the artifacts as features. Unlike linear models, recurrent neural networks can realize this improvement using only simple binary indicators of missingness. For linear models, we show an alternative strategy to capture this signal. Training models on missingness patterns only, we show that for some diseases, what tests are run can as predictive as the results themselves.",
"title": ""
},
{
"docid": "27f773226c458febb313fd48b59c7222",
"text": "This thesis presents extensions to the local binary pattern (LBP) texture analysis operator. The operator is defined as a gray-scale invariant texture measure, derived from a general definition of texture in a local neighborhood. It is made invariant against the rotation of the image domain, and supplemented with a rotation invariant measure of local contrast. The LBP is proposed as a unifying texture model that describes the formation of a texture with micro-textons and their statistical placement rules. The basic LBP is extended to facilitate the analysis of textures with multiple scales by combining neighborhoods with different sizes. The possible instability in sparse sampling is addressed with Gaussian low-pass filtering, which seems to be somewhat helpful. Cellular automata are used as texture features, presumably for the first time ever. With a straightforward inversion algorithm, arbitrarily large binary neighborhoods are encoded with an eight-bit cellular automaton rule, resulting in a very compact multi-scale texture descriptor. The performance of the new operator is shown in an experiment involving textures with multiple spatial scales. An opponent-color version of the LBP is introduced and applied to color textures. Good results are obtained in static illumination conditions. An empirical study with different color and texture measures however shows that color and texture should be treated separately. A number of different applications of the LBP operator are presented, emphasizing real-time issues. A very fast software implementation of the operator is introduced, and different ways of speeding up classification are evaluated. The operator is successfully applied to industrial visual inspection applications and to image retrieval.",
"title": ""
},
{
"docid": "aa13ec272d10ba36ef0d7e530e5dbb39",
"text": "Markov chain Monte Carlo (MCMC) methods are often deemed far too computationally intensive to be of any practical use for large datasets. This paper describes a methodology that aims to scale up the Metropolis-Hastings (MH) algorithm in this context. We propose an approximate implementation of the accept/reject step of MH that only requires evaluating the likelihood of a random subset of the data, yet is guaranteed to coincide with the accept/reject step based on the full dataset with a probability superior to a user-specified tolerance level. This adaptive subsampling technique is an alternative to the recent approach developed in (Korattikara et al., 2014), and it allows us to establish rigorously that the resulting approximate MH algorithm samples from a perturbed version of the target distribution of interest, whose total variation distance to this very target is controlled explicitly. We explore the benefits and limitations of this scheme on several examples.",
"title": ""
},
{
"docid": "d2086d9c52ca9d4779a2e5070f9f3009",
"text": "Though action recognition based on complete videos has achieved great success recently, action prediction remains a challenging task as the information provided by partial videos is not discriminative enough for classifying actions. In this paper, we propose a Deep Residual Feature Learning (DeepRFL) framework to explore more discriminative information from partial videos, achieving similar representations as those of complete videos. The proposed method is based on residual learning, which captures the salient differences between partial videos and their corresponding full videos. The partial videos can attain the missing information by learning from features of complete videos and thus improve the discriminative power. Moreover, our model can be trained efficiently in an end-to-end fashion. Extensive evaluations on the challenging UCF101 and HMDB51 datasets demonstrate that the proposed method outperforms state-of-the-art results.",
"title": ""
},
{
"docid": "512bd1e06d0ce9c920382e1f0843ea33",
"text": "— Diagnosis of the Parkinson disease through machine learning approache provides better understanding from PD dataset in the present decade. Orange v2.0b and weka v3.4.10 has been used in the present experimentation for the statistical analysis, classification, Evaluation and unsupervised learning methods. Voice dataset for Parkinson disease has been retrieved from UCI Machine learning repository from Center for Machine Learning and Intelligent Systems. The dataset contains name, attributes. The parallel coordinates shows higher variation in Parkinson disease dataset. SVM has shown good accuracy (88.9%) compared to Majority and k-NN algorithms. Classification algorithm like Random Forest has shown good accuracy (90.26) and Naïve Bayes has shown least accuracy (69.23. Higher number of clusters in healthy dataset in Fo and less number in diseased data has been predicted by Hierarchal clustering and SOM.",
"title": ""
},
{
"docid": "e7a6bb8f63e35f3fb0c60bdc26817e03",
"text": "A simple mechanism is presented, based on ant-like agents, for routing and load balancing in telecommunications networks, following the initial works of Appleby and Stewart (1994) and Schoonderwoerd et al. (1997). In the present work, agents are very similar to those proposed by Schoonderwoerd et al. (1997), but are supplemented with a simplified dynamic programming capability, initially experimented by Guérin (1997) with more complex agents, which is shown to significantly improve the network's relaxation and its response to perturbations. Topic area: Intelligent agents and network management",
"title": ""
},
{
"docid": "491f49dd73578b751f8f3e9afe64341e",
"text": "Multitask learning often improves system performance for morphosyntactic and semantic tagging tasks. However, the question of when and why this is the case has yet to be answered satisfactorily. Although previous work has hypothesised that this is linked to the label distributions of the auxiliary task, we argue that this is not sufficient. We show that information-theoretic measures which consider the joint label distributions of the main and auxiliary tasks offer far more explanatory value. Our findings are empirically supported by experiments for morphosyntactic tasks on 39 languages, and are in line with findings in the literature for several semantic tasks.",
"title": ""
},
{
"docid": "1b7f31c73dd99b6957d8b5c85240b060",
"text": "We propose a novel approach to address the Simultaneous Detection and Segmentation problem introduced in [8]. Using the hierarchical structures first presented in [1] we use an efficient and accurate procedure that exploits the hierarchy feature information using Locality Sensitive Hashing. We build on recent work that utilizes convolutional neural networks to detect bounding boxes in an image (Faster R-CNN [11]) and then use the top similar hierarchical region that best fits each bounding box after hashing, we call this approach HashBox. We then refine our final segmentation results by automatic hierarchy pruning. HashBox introduces a train-free alternative to Hypercolumns [7]. We conduct extensive experiments on Pascal VOC 2012 segmentation dataset, showing that HashBox gives competitive state-of-the-art object segmentations.",
"title": ""
},
{
"docid": "b31676e958e8345132780499e5dd968d",
"text": "Following triggered corporate bankruptcies, an increasing number of prediction models have emerged since 1960s. This study provides a critical analysis of methodologies and empirical findings of applications of these models across 10 different countries. The study’s empirical exercise finds that predictive accuracies of different corporate bankruptcy prediction models are, generally, comparable. Artificially Intelligent Expert System (AIES) models perform marginally better than statistical and theoretical models. Overall, use of Multiple Discriminant Analysis (MDA) dominates the research followed by logit models. Study deduces useful observations and recommendations for future research in this field. JEL classification: G33; C49; C88",
"title": ""
},
{
"docid": "d882657765647d9e84b8ad729a079833",
"text": "Multiple treebanks annotated under heterogeneous standards give rise to the research question of best utilizing multiple resources for improving statistical models. Prior research has focused on discrete models, leveraging stacking and multi-view learning to address the problem. In this paper, we empirically investigate heterogeneous annotations using neural network models, building a neural network counterpart to discrete stacking and multiview learning, respectively, finding that neural models have their unique advantages thanks to the freedom from manual feature engineering. Neural model achieves not only better accuracy improvements, but also an order of magnitude faster speed compared to its discrete baseline, adding little time cost compared to a neural model trained on a single treebank.",
"title": ""
}
] |
scidocsrr
|
77710eebf12562ab763ec52a8fbca309
|
Harassment Detection on Twitter using Conversations
|
[
{
"docid": "84d39e615b8b674cee53741f87a733da",
"text": "Cyber Bullying, which often has a deeply negative impact on the victim, has grown as a serious issue among adolescents. To understand the phenomenon of cyber bullying, experts in social science have focused on personality, social relationships and psychological factors involving both the bully and the victim. Recently computer science researchers have also come up with automated methods to identify cyber bullying messages by identifying bullying-related keywords in cyber conversations. However, the accuracy of these textual feature based methods remains limited. In this work, we investigate whether analyzing social network features can improve the accuracy of cyber bullying detection. By analyzing the social network structure between users and deriving features such as number of friends, network embeddedness, and relationship centrality, we find that the detection of cyber bullying can be significantly improved by integrating the textual features with social network features.",
"title": ""
}
] |
[
{
"docid": "6b410b123925efb0dae519ab8455cc75",
"text": "Attributes, or semantic features, have gained popularity in the past few years in domains ranging from activity recognition in video to face verification. Improving the accuracy of attribute classifiers is an important first step in any application which uses these attributes. In most works to date, attributes have been considered to be independent. However, we know this not to be the case. Many attributes are very strongly related, such as heavy makeup and wearing lipstick. We propose to take advantage of attribute relationships in three ways: by using a multi-task deep convolutional neural network (MCNN) sharing the lowest layers amongst all attributes, sharing the higher layers for related attributes, and by building an auxiliary network on top of the MCNN which utilizes the scores from all attributes to improve the final classification of each attribute. We demonstrate the effectiveness of our method by producing results on two challenging publicly available datasets.",
"title": ""
},
{
"docid": "b816908582329f7959bd6918d9077074",
"text": "Deep neural networks (DNNs) have gained remarkable success in speech recognition, partially attributed to the flexibility of DNN models in learning complex patterns of speech signals. This flexibility, however, may lead to serious over-fitting and hence miserable performance degradation in adverse acoustic conditions such as those with high ambient noises. We propose a noisy training approach to tackle this problem: by injecting moderate noises into the training data intentionally and randomly, more generalizable DNN models can be learned. This ‘noise injection’ technique, although known to the neural computation community already, has not been studied with DNNs which involve a highly complex objective function. The experiments presented in this paper confirm that the noisy training approach works well for the DNN model and can provide substantial performance improvement for DNN-based speech recognition.",
"title": ""
},
{
"docid": "c2c056ae22c22e2a87b9eca39d125cc2",
"text": "The web provides an unprecedented opportunity to evaluate ideas quickly using controlled experiments, also called randomized experiments, A/B tests (and their generalizations), split tests, Control/Treatment tests, MultiVariable Tests (MVT) and parallel flights. Controlled experiments embody the best scientific design for establishing a causal relationship between changes and their influence on user-observable behavior. We provide a practical guide to conducting online experiments, where end-users can help guide the development of features. Our experience indicates that significant learning and return-on-investment (ROI) are seen when development teams listen to their customers, not to the Highest Paid Person’s Opinion (HiPPO). We provide several examples of controlled experiments with surprising results. We review the important ingredients of running controlled experiments, and discuss their limitations (both technical and organizational). We focus on several areas that are critical to experimentation, including statistical power, sample size, and techniques for variance reduction. We describe common architectures for experimentation systems and analyze their advantages and disadvantages. We evaluate randomization and hashing techniques, which we show are not as simple in practice as is often assumed. Controlled experiments typically generate large amounts of data, which can be analyzed using data mining techniques to gain deeper understanding of the factors influencing the outcome of interest, leading to new hypotheses and creating a virtuous cycle of improvements. Organizations that embrace controlled experiments with clear evaluation criteria can evolve their systems with automated optimizations and real-time analyses. Based on our extensive practical experience with multiple systems and organizations, we share key lessons that will help practitioners in running trustworthy controlled experiments.",
"title": ""
},
{
"docid": "38d9a18ba942e401c3d0638f88bc948c",
"text": "The question whether preemptive algorithms are better than nonpreemptive ones for scheduling a set of real-time tasks has been debated for a long time in the research community. In fact, especially under fixed priority systems, each approach has advantages and disadvantages, and no one dominates the other when both predictability and efficiency have to be taken into account in the system design. Recently, limited preemption models have been proposed as a viable alternative between the two extreme cases of fully preemptive and nonpreemptive scheduling. This paper presents a survey of the existing approaches for reducing preemptions and compares them under different metrics, providing both qualitative and quantitative performance evaluations.",
"title": ""
},
{
"docid": "9ad145cd939284ed77919b73452236c0",
"text": "While WiFi-based indoor localization is attractive, the need for a significant degree of pre-deployment effort is a key challenge. In this paper, we ask the question: can we perform indoor localization with no pre-deployment effort? Our setting is an indoor space, such as an office building or a mall, with WiFi coverage but where we do not assume knowledge of the physical layout, including the placement of the APs. Users carrying WiFi-enabled devices such as smartphones traverse this space in normal course. The mobile devices record Received Signal Strength (RSS) measurements corresponding to APs in their view at various (unknown) locations and report these to a localization server. Occasionally, a mobile device will also obtain and report a location fix, say by obtaining a GPS lock at the entrance or near a window. The centerpiece of our work is the EZ Localization algorithm, which runs on the localization server. The key intuition is that all of the observations reported to the server, even the many from unknown locations, are constrained by the physics of wireless propagation. EZ models these constraints and then uses a genetic algorithm to solve them. The results from our deployment in two different buildings are promising. Despite the absence of any explicit pre-deployment calibration, EZ yields a median localization error of 2m and 7m, respectively, in a small building and a large building, which is only somewhat worse than the 0.7m and 4m yielded by the best-performing but calibration-intensive Horus scheme [29] from prior work.",
"title": ""
},
{
"docid": "4c1da8d356e4f793d76f79d4270ecbd0",
"text": "As the proportion of the ageing population in industrialized countries continues to increase, the dermatological concerns of the aged grow in medical importance. Intrinsic structural changes occur as a natural consequence of ageing and are genetically determined. The rate of ageing is significantly different among different populations, as well as among different anatomical sites even within a single individual. The intrinsic rate of skin ageing in any individual can also be dramatically influenced by personal and environmental factors, particularly the amount of exposure to ultraviolet light. Photodamage, which considerably accelerates the visible ageing of skin, also greatly increases the risk of cutaneous neoplasms. As the population ages, dermatological focus must shift from ameliorating the cosmetic consequences of skin ageing to decreasing the genuine morbidity associated with problems of the ageing skin. A better understanding of both the intrinsic and extrinsic influences on the ageing of the skin, as well as distinguishing the retractable aspects of cutaneous ageing (primarily hormonal and lifestyle influences) from the irretractable (primarily intrinsic ageing), is crucial to this endeavour.",
"title": ""
},
{
"docid": "b3d8c827ac58e5e385179275a2c73b31",
"text": "It is the purpose of this article to identify and review criteria that rehabilitation technology should meet in order to offer arm-hand training to stroke patients, based on recent principles of motor learning. A literature search was conducted in PubMed, MEDLINE, CINAHL, and EMBASE (1997–2007). One hundred and eighty seven scientific papers/book references were identified as being relevant. Rehabilitation approaches for upper limb training after stroke show to have shifted in the last decade from being analytical towards being focussed on environmentally contextual skill training (task-oriented training). Training programmes for enhancing motor skills use patient and goal-tailored exercise schedules and individual feedback on exercise performance. Therapist criteria for upper limb rehabilitation technology are suggested which are used to evaluate the strengths and weaknesses of a number of current technological systems. This review shows that technology for supporting upper limb training after stroke needs to align with the evolution in rehabilitation training approaches of the last decade. A major challenge for related technological developments is to provide engaging patient-tailored task oriented arm-hand training in natural environments with patient-tailored feedback to support (re) learning of motor skills.",
"title": ""
},
{
"docid": "9c24c2372ffd9526ee5c80c69685d01f",
"text": "This work explores the use of tow steered composite laminates, functionally graded metals (FGM), thickness distributions, and curvilinear rib/spar/stringer topologies for aeroelastic tailoring. Parameterized models of the Common Research Model (CRM) wing box have been developed for passive aeroelastic tailoring trade studies. Metrics of interest include the wing weight, the onset of dynamic flutter, and the static aeroelastic stresses. Compared to a baseline structure, the lowest aggregate static wing stresses could be obtained with tow steered skins (47% improvement), and many of these designs could reduce weight as well (up to 14%). For these structures, the trade-off between flutter speed and weight is generally strong, although one case showed both a 100% flutter improvement and a 3.5% weight reduction. Material grading showed no benefit in the skins, but moderate flutter speed improvements (with no weight or stress increase) could be obtained by grading the spars (4.8%) or ribs (3.2%), where the best flutter results were obtained by grading both thickness and material. For the topology work, large weight reductions were obtained by removing an inner spar, and performance was maintained by shifting stringers forward and/or using curvilinear ribs: 5.6% weight reduction, a 13.9% improvement in flutter speed, but a 3.0% increase in stress levels. Flutter resistance was also maintained using straightrotated ribs although the design had a 4.2% lower flutter speed than the curved ribs of similar weight and stress levels were higher. These results will guide the development of a future design optimization scheme established to exploit and combine the individual attributes of these technologies.",
"title": ""
},
{
"docid": "09fe7cffb7871977c1cd383396c44262",
"text": "We are interested in the automatic interpretation of how-to instructions, such as cooking recipes, into semantic representations that can facilitate sophisticated question answering. Recent work has shown impressive results on semantic parsing of instructions with minimal supervision, but such techniques cannot handle much of the situated and ambiguous language used in instructions found on the web. In this paper, we suggest how to extend such methods using a model of pragmatics, based on a rich representation of world state.",
"title": ""
},
{
"docid": "f463ee2dd3a9243ed7536d88d8c2c568",
"text": "A new silicon controlled rectifier-based power-rail electrostatic discharge (ESD) clamp circuit was proposed with a novel trigger circuit that has very low leakage current in a small layout area for implementation. This circuit was successfully verified in a 40-nm CMOS process by using only low-voltage devices. The novel trigger circuit uses a diode-string based level-sensing ESD detection circuit, but not using MOS capacitor, which has very large leakage current. Moreover, the leakage current on the ESD detection circuit is further reduced, adding a diode in series with the trigger transistor. By combining these two techniques, the total silicon area of the power-rail ESD clamp circuit can be reduced three times, whereas the leakage current is three orders of magnitude smaller than that of the traditional design.",
"title": ""
},
{
"docid": "1b9bcb2ab5bc0b2b2e475066a1f78fbe",
"text": "Fragility curves are becoming increasingly common components of flood risk assessments. This report introduces the concept of the fragility curve and shows how fragility curves are related to more familiar reliability concepts, such as the deterministic factor of safety and the relative reliability index. Examples of fragility curves are identified in the literature on structures and risk assessment to identify what methods have been used to develop fragility curves in practice. Four basic approaches are identified: judgmental, empirical, hybrid, and analytical. Analytical approaches are, by far, the most common method encountered in the literature. This group of methods is further decomposed based on whether the limit state equation is an explicit function or an implicit function and on whether the probability of failure is obtained using analytical solution methods or numerical solution methods. Advantages and disadvantages of the various approaches are considered. DISCLAIMER: The contents of this report are not to be used for advertising, publication, or promotional purposes. Citation of trade names does not constitute an official endorsement or approval of the use of such commercial products. All product names and trademarks cited are the property of their respective owners. The findings of this report are not to be construed as an official Department of the Army position unless so designated by other authorized documents. DESTROY THIS REPORT WHEN NO LONGER NEEDED. DO NOT RETURN IT TO THE ORIGINATOR.",
"title": ""
},
{
"docid": "c2055f8366e983b45d8607c877126797",
"text": "This paper proposes and investigates an offline finite-element-method (FEM)-assisted position and speed observer for brushless dc permanent-magnet (PM) (BLDC-PM) motor drive sensorless control based on the line-to-line PM flux linkage estimation. The zero crossing of the line-to-line PM flux linkage occurs right in the middle of two commutation points (CPs) and is used as a basis for the position and speed observer. The position between CPs is obtained by comparing the estimated line-to-line PM flux with the FEM-calculated line-to-line PM flux. Even if the proposed observer relies on the fundamental model of the machine, a safe starting strategy under heavy load torque, called I-f control, is used, with seamless transition to the proposed sensorless control. The I-f starting method allows low-speed sensorless control, without knowing the initial position and without machine parameter identification. Digital simulations and experimental results are shown, demonstrating the reliability of the FEM-assisted position and speed observer for BLDC-PM motor sensorless control operation.",
"title": ""
},
{
"docid": "46d239e66c1de735f80312d8458b131d",
"text": "Cloud computing is a dynamic, scalable and payper-use distributed computing model empowering designers to convey applications amid job designation and storage distribution. Cloud computing encourages to impart a pool of virtualized computer resource empowering designers to convey applications amid job designation and storage distribution. The cloud computing mainly aims to give proficient access to remote and geographically distributed resources. As cloud technology is evolving day by day and confronts numerous challenges, one of them being uncovered is scheduling. Scheduling is basically a set of constructs constructed to have a controlling hand over the order of work to be performed by a computer system. Algorithms are vital to schedule the jobs for execution. Job scheduling algorithms is one of the most challenging hypothetical problems in the cloud computing domain area. Numerous deep investigations have been carried out in the domain of job scheduling of cloud computing. This paper intends to present the performance comparison analysis of various pre-existing job scheduling algorithms considering various parameters. This paper discusses about cloud computing and its constructs in section (i). In section (ii) job scheduling concept in cloud computing has been elaborated. In section (iii) existing algorithms for job scheduling are discussed, and are compared in a tabulated form with respect to various parameters and lastly section (iv) concludes the paper giving brief summary of the work.",
"title": ""
},
{
"docid": "2d3adb98f6b1b4e161d84314958960e5",
"text": "BACKGROUND\nBright light therapy was shown to be a promising treatment for depression during pregnancy in a recent open-label study. In an extension of this work, we report findings from a double-blind placebo-controlled pilot study.\n\n\nMETHOD\nTen pregnant women with DSM-IV major depressive disorder were randomly assigned from April 2000 to January 2002 to a 5-week clinical trial with either a 7000 lux (active) or 500 lux (placebo) light box. At the end of the randomized controlled trial, subjects had the option of continuing in a 5-week extension phase. The Structured Interview Guide for the Hamilton Depression Scale-Seasonal Affective Disorder Version was administered to assess changes in clinical status. Salivary melatonin was used to index circadian rhythm phase for comparison with antidepressant results.\n\n\nRESULTS\nAlthough there was a small mean group advantage of active treatment throughout the randomized controlled trial, it was not statistically significant. However, in the longer 10-week trial, the presence of active versus placebo light produced a clear treatment effect (p =.001) with an effect size (0.43) similar to that seen in antidepressant drug trials. Successful treatment with bright light was associated with phase advances of the melatonin rhythm.\n\n\nCONCLUSION\nThese findings provide additional evidence for an active effect of bright light therapy for antepartum depression and underscore the need for an expanded randomized clinical trial.",
"title": ""
},
{
"docid": "75a1c22e950ccb135c054353acb8571a",
"text": "We study the problem of building generative models of natural source code (NSC); that is, source code written and understood by humans. Our primary contribution is to describe a family of generative models for NSC that have three key properties: First, they incorporate both sequential and hierarchical structure. Second, we learn a distributed representation of source code elements. Finally, they integrate closely with a compiler, which allows leveraging compiler logic and abstractions when building structure into the model. We also develop an extension that includes more complex structure, refining how the model generates identifier tokens based on what variables are currently in scope. Our models can be learned efficiently, and we show empirically that including appropriate structure greatly improves the models, measured by the probability of generating test programs.",
"title": ""
},
{
"docid": "bf0531b03cc36a69aca1956b21243dc6",
"text": "Sound of their breath fades with the light. I think about the loveless fascination, Under the milky way tonight. Lower the curtain down in memphis, Lower the curtain down all right. I got no time for private consultation, Under the milky way tonight. Wish I knew what you were looking for. Might have known what you would find. And it's something quite peculiar, Something thats shimmering and white. It leads you here despite your destination, Under the milky way tonight (chorus) Preface This Master's Thesis concludes my studies in Human Aspects of Information Technology (HAIT) at Tilburg University. It describes the development, implementation, and analysis of an automatic mood classifier for music. I would like to thank those who have contributed to and supported the contents of the thesis. Special thanks goes to my supervisor Menno van Zaanen for his dedication and support during the entire process of getting started up to the final results. Moreover, I would like to express my appreciation to Fredrik Mjelle for providing the user-tagged instances exported out of the MOODY database, which was used as the dataset for the experiments. Furthermore, I would like to thank Toine Bogers for pointing me out useful website links regarding music mood classification and sending me papers with citations and references. I would also like to thank Michael Voong for sending me his papers on music mood classification research, Jaap van den Herik for his support and structuring of my writing and thinking. I would like to recognise Eric Postma and Marieke van Erp for their time assessing the thesis as members of the examination committee. Finally, I would like to express my gratitude to my family for their enduring support. Abstract This research presents the outcomes of research into using the lingual part of music for building an automatic mood classification system. Using a database consisting of extracted lyrics and user-tagged mood attachments, we built a classifier based on machine learning techniques. By testing the classification system on various mood frameworks (or dimensions) we examined to what extent it is possible to attach mood tags automatically to songs based on lyrics only. Furthermore, we examined to what extent the linguistic part of music revealed adequate information for assigning a mood category and which aspects of mood can be classified best. Our results show that the use of term frequencies and tf*idf values provide a valuable source of …",
"title": ""
},
{
"docid": "c4171bd7b870d26e0b2520fc262e7c88",
"text": "Each year, the treatment decisions for more than 230, 000 breast cancer patients in the U.S. hinge on whether the cancer has metastasized away from the breast. Metastasis detection is currently performed by pathologists reviewing large expanses of biological tissues. This process is labor intensive and error-prone. We present a framework to automatically detect and localize tumors as small as 100×100 pixels in gigapixel microscopy images sized 100, 000×100, 000 pixels. Our method leverages a convolutional neural network (CNN) architecture and obtains state-of-the-art results on the Camelyon16 dataset in the challenging lesion-level tumor detection task. At 8 false positives per image, we detect 92.4% of the tumors, relative to 82.7% by the previous best automated approach. For comparison, a human pathologist attempting exhaustive search achieved 73.2% sensitivity. We achieve image-level AUC scores above 97% on both the Camelyon16 test set and an independent set of 110 slides. In addition, we discover that two slides in the Camelyon16 training set were erroneously labeled normal. Our approach could considerably reduce false negative rates in metastasis detection.",
"title": ""
},
{
"docid": "ef598ba4f9a4df1f42debc0eabd1ead8",
"text": "Software developers interact with the development environments they use by issuing commands that execute various programming tools, from source code formatters to build tools. However, developers often only use a small subset of the commands offered by modern development environments, reducing their overall development fluency. In this paper, we use several existing command recommender algorithms to suggest new commands to developers based on their existing command usage history, and also introduce several new algorithms. By running these algorithms on data submitted by several thousand Eclipse users, we describe two studies that explore the feasibility of automatically recommending commands to software developers. The results suggest that, while recommendation is more difficult in development environments than in other domains, it is still feasible to automatically recommend commands to developers based on their usage history, and that using patterns of past discovery is a useful way to do so.",
"title": ""
},
{
"docid": "32dbbc1b9cc78f2a4db0cffd12cd2467",
"text": "OBJECTIVE\nTo evaluate existing automatic speech-recognition (ASR) systems to measure their performance in interpreting spoken clinical questions and to adapt one ASR system to improve its performance on this task.\n\n\nDESIGN AND MEASUREMENTS\nThe authors evaluated two well-known ASR systems on spoken clinical questions: Nuance Dragon (both generic and medical versions: Nuance Gen and Nuance Med) and the SRI Decipher (the generic version SRI Gen). The authors also explored language model adaptation using more than 4000 clinical questions to improve the SRI system's performance, and profile training to improve the performance of the Nuance Med system. The authors reported the results with the NIST standard word error rate (WER) and further analyzed error patterns at the semantic level.\n\n\nRESULTS\nNuance Gen and Med systems resulted in a WER of 68.1% and 67.4% respectively. The SRI Gen system performed better, attaining a WER of 41.5%. After domain adaptation with a language model, the performance of the SRI system improved 36% to a final WER of 26.7%.\n\n\nCONCLUSION\nWithout modification, two well-known ASR systems do not perform well in interpreting spoken clinical questions. With a simple domain adaptation, one of the ASR systems improved significantly on the clinical question task, indicating the importance of developing domain/genre-specific ASR systems.",
"title": ""
},
{
"docid": "e2b653e1d4faf4067cd791a58f48c9fa",
"text": "Direct visualization of plant tissues by matrix assisted laser desorption ionization-mass spectrometry imaging (MALDI-MSI) has revealed key insights into the localization of metabolites in situ. Recent efforts have determined the spatial distribution of primary and secondary metabolites in plant tissues and cells. Strategies have been applied in many areas of metabolism including isotope flux analyses, plant interactions, and transcriptional regulation of metabolite accumulation. Technological advances have pushed achievable spatial resolution to subcellular levels and increased instrument sensitivity by several orders of magnitude. It is anticipated that MALDI-MSI and other MSI approaches will bring a new level of understanding to metabolomics as scientists will be encouraged to consider spatial heterogeneity of metabolites in descriptions of metabolic pathway regulation.",
"title": ""
}
] |
scidocsrr
|
7e58060dcc5ecad17ce076b4ed098c05
|
Erratum to: FUGE: A joint meta-heuristic approach to cloud job scheduling algorithm using fuzzy theory and a genetic method
|
[
{
"docid": "5bb390a0c9e95e0691ac4ba07b5eeb9d",
"text": "Clearing the clouds away from the true potential and obstacles posed by this computing capability.",
"title": ""
}
] |
[
{
"docid": "8da2450cbcb9b43d07eee187e5bf07f1",
"text": "We propose a unified approach for bottom-up hierarchical image segmentation and object proposal generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object proposals by exploring efficiently their combinatorial space. We also present Single-scale Combinatorial Grouping (SCG), a faster version of MCG that produces competitive proposals in under five seconds per image. We conduct an extensive and comprehensive empirical validation on the BSDS500, SegVOC12, SBD, and COCO datasets, showing that MCG produces state-of-the-art contours, hierarchical regions, and object proposals.",
"title": ""
},
{
"docid": "b426696d7c1764502706696b0d462a34",
"text": "The discriminative model learning for image denoising has been recently attracting considerable attentions due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle with several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.",
"title": ""
},
{
"docid": "fe34fcd09a10c382596cffcd13f17a3c",
"text": "As Granular Computing has gained interest, more research has lead into using different representations for Information Granules, i.e., rough sets, intervals, quotient space, fuzzy sets; where each representation offers different approaches to information granulation. These different representations have given more flexibility to what information granulation can achieve. In this overview paper, the focus is only on journal papers where Granular Computing is studied when fuzzy logic systems are used, covering research done with Type-1 Fuzzy Logic Systems, Interval Type-2 Fuzzy Logic Systems, as well as the usage of general concepts of Fuzzy Systems.",
"title": ""
},
{
"docid": "26c4cded1181ce78cc9b61a668e57939",
"text": "Monitoring crop condition and production estimates at the state and county level is of great interest to the U.S. Department of Agriculture. The National Agricultural Statistical Service (NASS) of the U.S. Department of Agriculture conducts field interviews with sampled farm operators and obtains crop cuttings to make crop yield estimates at regional and state levels. NASS needs supplemental spatial data that provides timely information on crop condition and potential yields. In this research, the crop model EPIC (Erosion Productivity Impact Calculator) was adapted for simulations at regional scales. Satellite remotely sensed data provide a real-time assessment of the magnitude and variation of crop condition parameters, and this study investigates the use of these parameters as an input to a crop growth model. This investigation was conducted in the semi-arid region of North Dakota in the southeastern part of the state. The primary objective was to evaluate a method of integrating parameters retrieved from satellite imagery in a crop growth model to simulate spring wheat yields at the sub-county and county levels. The input parameters derived from remotely sensed data provided spatial integrity, as well as a real-time calibration of model simulated parameters during the season, to ensure that the modeled and observed conditions agree. A radiative transfer model, SAIL (Scattered by Arbitrary Inclined Leaves), provided the link between the satellite data and crop model. The model parameters were simulated in a geographic information system grid, which was the platform for aggregating yields at local and regional scales. A model calibration was performed to initialize the model parameters. This calibration was performed using Landsat data over three southeast counties in North Dakota. The model was then used to simulate crop yields for the state of North Dakota with inputs derived from NOAA AVHRR data. The calibration and the state level simulations are compared with spring wheat yields reported by NASS objective yield surveys. Introduction Monitoring agricultural crop conditions during the growing season and estimating the potential crop yields are both important for the assessment of seasonal production. Accurate and timely assessment of particularly decreased production caused by a natural disaster, such as drought or pest infestation, can be critical for countries where the economy is dependent on the crop harvest. Early assessment of yield reductions could avert a disastrous situation and help in strategic planning to meet the demands. The National Agricultural Statistics Service (NASS) of the U.S. Department of Agriculture (USDA) monitors crop conditions in the U.S. and provides monthly projected estimates of crop yield and production. NASS has developed methods to assess crop growth and development from several sources of information, including several types of surveys of farm operators. Field offices in each state are responsible for monitoring the progress and health of the crop and integrating crop condition with local weather information. This crop information is also distributed in a biweekly report on regional weather conditions. NASS provides monthly information to the Agriculture Statistics Board, which assesses the potential yields of all commodities based on crop condition information acquired from different sources. This research complements efforts to independently assess crop condition at the county, agricultural statistics district, and state levels. 
In the early 1960s, NASS initiated “objective yield” surveys for crops such as corn, soybean, wheat, and cotton in States with the greatest acreages (Allen et al., 1994). These surveys establish small sample units in randomly selected fields which are visited monthly to determine numbers of plants, numbers of fruits (wheat heads, corn ears, soybean pods, etc.), and weight per fruit. Yield forecasting models are based on relationships of samples of the same maturity stage in comparable months during the past four years in each State. Additionally, the Agency implemented a midyear Area Frame that enabled creation of probabilistic based acreage estimates. For major crops, sampling errors are as low as 1 percent at the U.S. level and 2 to 3 percent in the largest producing States. Accurate crop production forecasts require accurate forecasts of acreage at harvest, its geographic distribution, and the associated crop yield determined by local growing conditions. There can be significant year-to-year variability which requires a systematic monitoring capability. To quantify the complex effects of environment, soils, and management practices, both yield and acreage must be assessed at sub-regional levels where a limited range of factors and simple interactions permit modeling and estimation. A yield forecast within homogeneous soil type, land use, crop variety, and climate preclude the necessity for use of a complex forecast model. In 1974, the Large Area Crop Inventory Experiment (LACIE), a joint effort of the National Aeronautics and Space Administration (NASA), the USDA, and the National Oceanic and Atmospheric Administration (NOAA) began to apply satellite remote sensing technology on experimental bases to forecast harvests in important wheat producing areas (MacDonald, 1979). In 1977 LACIE in-season forecasted a 30 percent shortfall in Soviet spring wheat production that came within 10 percent of the official Soviet estimate that came several months after the harvest (Myers, 1983).",
"title": ""
},
{
"docid": "56b58efbeab10fa95e0f16ad5924b9e5",
"text": "This paper investigates (i) preplanned switching events and (ii) fault events that lead to islanding of a distribution subsystem and formation of a micro-grid. The micro-grid includes two distributed generation (DG) units. One unit is a conventional rotating synchronous machine and the other is interfaced through a power electronic converter. The interface converter of the latter unit is equipped with independent real and reactive power control to minimize islanding transients and maintain both angle stability and voltage quality within the micro-grid. The studies are performed based on a digital computer simulation approach using the PSCAD/EMTDC software package. The studies show that an appropriate control strategy for the power electronically interfaced DG unit can ensure stability of the micro-grid and maintain voltage quality at designated buses, even during islanding transients. This paper concludes that presence of an electronically-interfaced DG unit makes the concept of micro-grid a technically viable option for further investigations.",
"title": ""
},
{
"docid": "13fed0d1099638f536c5a950e3d54074",
"text": "Notes: (1) These questions require thought, but do not require long answers. Please be as concise as possible. (2) If you have a question about this homework, we encourage you to post your question on our Piazza forum, at https://piazza.com/stanford/autumn2016/cs229. (3) If you missed the first lecture or are unfamiliar with the collaboration or honor code policy, please read the policy on Handout #1 (available from the course website) before starting work. (4) For problems that require programming, please include in your submission a printout of your code (with comments) and any figures that you are asked to plot. If you are skipping a question, please include it on your PDF/photo, but leave the question blank and tag it appropriately on Gradescope. This includes extra credit problems. If you are scanning your document by cellphone, please check the Piazza forum for recommended cellphone scanning apps and best practices. 1. [23 points] Uniform convergence You are hired by CNN to help design the sampling procedure for making their electoral predictions for the next presidential election in the (fictitious) country of Elbania. The country of Elbania is organized into states, and there are only two candidates running in this election: One from the Elbanian Democratic party, and another from the Labor Party of Elbania. The plan for making our electorial predictions is as follows: We'll sample m voters from each state, and ask whether they're voting democrat. We'll then publish, for each state, the estimated fraction of democrat voters. In this problem, we'll work out how many voters we need to sample in order to ensure that we get good predictions with high probability. One reasonable goal might be to set m large enough that, with high probability, we obtain uniformly accurate estimates of the fraction of democrat voters in every state. But this might require surveying very many people, which would be prohibitively expensive. So, we're instead going to demand only a slightly lower degree of accuracy. Specifically, we'll say that our prediction for a state is \" highly inaccurate \" if the estimated fraction of democrat voters differs from the actual fraction of democrat voters within that state by more than a tolerance factor γ. CNN knows that their viewers will tolerate some small number of states' estimates being highly inaccurate; however, their credibility would be damaged if they reported highly inaccurate estimates for too many states. So, rather than …",
"title": ""
},
{
"docid": "a3227034d28c2f2a0f858e1a233ecbc4",
"text": "With the persistent shift towards multi-sourcing, the complexity of service delivery is continuously increasing. This presents new challenges for clients who now have to integrate interdependent services from multiple providers. As other functions, service integration is subject to make-or-buy decisions: clients can either build the required capabilities themselves or delegate service integration to external functions. To define detailed organizational models, one requires understanding of specific tasks and how to allocate them. Based on a qualitative and quantitative expert study, we analyze generic organizational models, and identify key service integration tasks. The allocation of these tasks to clients or their providers generates a set of granular organizational structures. We analyze drivers for delegating these tasks, and develop typical allocations in practice. Our work contributes to expanding the theoretical foundations of service integration. Moreover, our findings will assist clients to design their service integration organization, and to build more effective multi-sourcing solutions.",
"title": ""
},
{
"docid": "ebb024bbd923d35fd86adc2351073a48",
"text": "Background: Depression is a chronic condition that results in considerable disability, and particularly in later life, severely impacts the life quality of the individual with this condition. The first aim of this review article was to summarize, synthesize, and evaluate the research base concerning the use of dance-based exercises on health status, in general, and secondly, specifically for reducing depressive symptoms, in older adults. A third was to provide directives for professionals who work or are likely to work with this population in the future. Methods: All English language peer reviewed publications detailing the efficacy of dance therapy as an intervention strategy for older people in general, and specifically for minimizing depression and dependence among the elderly were analyzed.",
"title": ""
},
{
"docid": "074de6f0c250f5c811b69598551612e4",
"text": "In this paper we present a novel GPU-friendly real-time voxelization technique for rendering homogeneous media that is defined by particles, e.g. fluids obtained from particle-based simulations such as Smoothed Particle Hydrodynamics (SPH). Our method computes view-adaptive binary voxelizations with on-the-fly compression of a tiled perspective voxel grid, achieving higher resolutions than previous approaches. It allows for interactive generation of realistic images, enabling advanced rendering techniques such as ray casting-based refraction and reflection, light scattering and absorption, and ambient occlusion. In contrast to previous methods, it does not rely on preprocessing such as expensive, and often coarse, scalar field conversion or mesh generation steps. Our method directly takes unsorted particle data as input. It can be further accelerated by identifying fully populated simulation cells during simulation. The extracted surface can be filtered to achieve smooth surface appearance.",
"title": ""
},
{
"docid": "ed9c0cdb74950bf0f1288931707b9d08",
"text": "Introduction This chapter reviews the theoretical and empirical literature on the concept of credibility and its areas of application relevant to information science and technology, encompassing several disciplinary approaches. An information seeker's environment—the Internet, television, newspapers, schools, libraries, bookstores, and social networks—abounds with information resources that need to be evaluated for both their usefulness and their likely level of accuracy. As people gain access to a wider variety of information resources, they face greater uncertainty regarding who and what can be believed and, indeed, who or what is responsible for the information they encounter. Moreover, they have to develop new skills and strategies for determining how to assess the credibility of an information source. Historically, the credibility of information has been maintained largely by professional knowledge workers such as editors, reviewers, publishers, news reporters, and librarians. Today, quality control mechanisms are evolving in such a way that a vast amount of information accessed through a wide variety of systems and resources is out of date, incomplete, poorly organized, or simply inaccurate (Janes & Rosenfeld, 1996). Credibility has been examined across a number of fields ranging from communication, information science, psychology, marketing, and the management sciences to interdisciplinary efforts in human-computer interaction (HCI). Each field has examined the construct and its practical significance using fundamentally different approaches, goals, and presuppositions, all of which results in conflicting views of credibility and its effects. The notion of credibility has been discussed at least since Aristotle's examination of ethos and his observations of speakers' relative abilities to persuade listeners. Disciplinary approaches to investigating credibility systematically developed only in the last century, beginning within the field of communication. A landmark among these efforts was the work of Hovland and colleagues (Hovland, Jannis, & Kelley, 1953; Hovland & Weiss, 1951), who focused on the influence of various characteristics of a source on a recipient's message acceptance. This work was followed by decades of interest in the relative credibility of media involving comparisons between newspapers, radio, television, Communication researchers have tended to focus on sources and media, viewing credibility as a perceived characteristic. Within information science, the focus is on the evaluation of information, most typically instantiated in documents and statements. Here, credibility has been viewed largely as a criterion for relevance judgment, with researchers focusing on how information seekers assess a document's likely level of This brief account highlights an often implicit focus on varying objects …",
"title": ""
},
{
"docid": "0cecb071d4358e60a113a9815272959f",
"text": "Single-cell RNA-Sequencing (scRNA-Seq) has become the most widely used high-throughput method for transcription profiling of individual cells. Systematic errors, including batch effects, have been widely reported as a major challenge in high-throughput technologies. Surprisingly, these issues have received minimal attention in published studies based on scRNA-Seq technology. We examined data from five published studies and found that systematic errors can explain a substantial percentage of observed cell-to-cell expression variability. Specifically, we found that the proportion of genes reported as expressed explains a substantial part of observed variability and that this quantity varies systematically across experimental batches. Furthermore, we found that the implemented experimental designs confounded outcomes of interest with batch effects, a design that can bring into question some of the conclusions of these studies. Finally, we propose a simple experimental design that can ameliorate the effect of theses systematic errors have on downstream results. . CC-BY 4.0 International license peer-reviewed) is the author/funder. It is made available under a The copyright holder for this preprint (which was not . http://dx.doi.org/10.1101/025528 doi: bioRxiv preprint first posted online Aug. 25, 2015; Single-cell RNA-Sequencing (scRNA-Seq) has become the primary tool for profiling the transcriptomes of hundreds or even thousands of individual cells in parallel. Our experience with high-throughput genomic data in general, is that well thought-out data processing pipelines are essential to produce meaningful downstream results. We expect the same to be true for scRNA-seq data. Here we show that while some tools developed for analyzing bulk RNA-Seq can be used for scRNA-Seq data, such as the mapping and alignment software, other steps in the processing, such as normalization, quality control and quantification, require new methods to account for the additional variability that is specific to this technology. One of the most challenging sources of unwanted variability and systematic error in highthroughput data are what are commonly referred to as batch effects. Given the way that scRNASeq experiments are conducted, there is much room for concern regarding batch effects. Specifically, batch effects occur when cells from one biological group or condition are cultured, captured and sequenced separate from cells in a second condition. Although batch information is not always included in the experimental annotations that are publicly available, one can extract surrogate variables from the raw sequencing (FASTQ) files. Namely, the sequencing instrument used, the run number from the instrument and the flow cell lane. Although the sequencing is unlikely to be a major source of unwanted variability, it serves as a surrogate for other experimental procedures that very likely do have an effect, such as starting material, PCR amplification reagents/conditions, and cell cycle stage of the cells. Here we will refer to the resulting differences induced by different groupings of these sources of variability as batch effects. In a completely confounded study, it is not possible to determine if the biological condition or batch effects are driving the observed variation. In contrast, incorporating biological replicates across in the experimental design and processing the replicates across multiple batches permits observed variation to be attributed to biology or batch effects (Figure 1). 
To demonstrate the widespread problem of systematic bias, batch effects, and confounded experimental designs in scRNA-Seq studies, we surveyed several published data sets. We discuss the consequences of failing to consider the presence of this unwanted technical variability, and consider new strategies to minimize its impact on scRNA-Seq data.",
"title": ""
},
{
"docid": "f2d8e0ae632ec9970351aff34f58badc",
"text": "The high potential of superquadrics as modeling elements for image segmentation tasks has been pointed out for years in the computer vision community. In this work, we employ superquadrics as modeling elements for multiple object segmentation in range images. Segmentation is executed in two stages: First, a hypothesis about the values of the segmentation parameters is generated. Second, the hypothesis is refined locally. In both stages, object boundary and region information are considered. Boundary information is derived via model-based edge detection in the input range image. Hypothesis generation uses boundary information to isolate image regions that can be accurately described by superquadrics. Within hypothesis refinement, a game-theoretic framework is used to fuse the two information sources by associating an objective function to each information source. Iterative optimization of the two objective functions in succession, outputs a precise description of all image objects. We demonstrate experimentally that this approach substantially improves the most established method in superquadric segmentation in terms of accuracy and computational efficiency. We demonstrate the applicability of our segmentation framework in real-world applications by constructing a novel robotic system for automatic unloading of jumbled box-like objects from platforms.",
"title": ""
},
{
"docid": "2f2291baa6c8a74744a16f27df7231d2",
"text": "Malicious programs, such as viruses and worms, are frequently related to previous programs through evolutionary relationships. Discovering those relationships and constructing a phylogeny model is expected to be helpful for analyzing new malware and for establishing a principled naming scheme. Matching permutations of code may help build better models in cases where malware evolution does not keep things in the same order. We describe methods for constructing phylogeny models that uses features called n-perms to match possibly permuted codes. An experiment was performed to compare the relative effectiveness of vector similarity measures using n-perms and n-grams when comparing permuted variants of programs. The similarity measures using n-perms maintained a greater separation between the similarity scores of permuted families of specimens versus unrelated specimens. A subsequent study using a tree generated through n-perms suggests that phylogeny models based on n-perms may help forensic analysts investigate new specimens, and assist in reconciling malware naming inconsistencies Škodlivé programy, jako viry a červy (malware), jsou zřídka psány narychlo, jen tak. Obvykle jsou výsledkem svých evolučních vztahů. Zjištěním těchto vztahů a tvorby v přesné fylogenezi se předpokládá užitečná pomoc v analýze nového malware a ve vytvoření zásad pojmenovacího schématu. Porovnávání permutací kódu uvnitř malware mů že nabídnout výhody pro fylogenní generování, protože evoluční kroky implementované autory malware nemohou uchovat posloupnosti ve sdíleném kódu. Popisujeme rodinu fylogenních generátorů, které provádějí clustering pomocí PQ stromově založených extrakčních vlastností. Byl vykonán experiment v němž výstup stromu z těchto generátorů byl vyhodnocen vzhledem k fylogenezím generovaným pomocí vážených n-gramů. Výsledky ukazují výhody přístupu založeného na permutacích ve fylogenním generování malware. Les codes malveillants, tels que les virus et les vers, sont rarement écrits de zéro; en conséquence, il existe des relations de nature évolutive entre ces différents codes. Etablir ces relations et construire une phylogénie précise permet d’espérer une meilleure capacité d’analyse de nouveaux codes malveillants et de disposer d’une méthode de fait de nommage de ces codes. La concordance de permutations de code avec des parties de codes malveillants sont susceptibles d’être très intéressante dans l’établissement d’une phylogénie, dans la mesure où les étapes évolutives réalisées par les auteurs de codes malveillants ne conservent généralement pas l’ordre des instructions présentes dans le code commun. Nous décrivons ici une famille de générateurs phylogénétiques réalisant des regroupements à l’aide de caractéristiques extraites d’arbres PQ. Une expérience a été réalisée, dans laquelle l’arbre produit par ces générateurs est évalué d’une part en le comparant avec les classificiations de références utilisées par les antivirus par scannage, et d’autre part en le comparant aux phylogénies produites à l’aide de polygrammes de taille n (n-grammes), pondérés. Les résultats démontrent l’intérêt de l’approche utilisant les permutations dans la génération phylogénétique des codes malveillants. Haitalliset ohjelmat, kuten tietokonevirukset ja -madot, kirjoitetaan harvoin alusta alkaen. Tämän seurauksena niistä on löydettävissä evoluution kaltaista samankaltaisuutta. 
Samankaltaisuuksien löytämisellä sekä rakentamalla tarkka evoluutioon perustuva malli voidaan helpottaa uusien haitallisten ohjelmien analysointia sekä toteuttaa nimeämiskäytäntöjä. Permutaatioiden etsiminen koodista saattaa antaa etuja evoluutiomallin muodostamiseen, koska haitallisten ohjelmien kirjoittajien evolutionääriset askeleet eivät välttämättä säilytä jaksoittaisuutta ohjelmakoodissa. Kuvaamme joukon evoluutiomallin muodostajia, jotka toteuttavat klusterionnin käyttämällä PQ-puuhun perustuvia ominaisuuksia. Teimme myös kokeen, jossa puun tulosjoukkoa verrattiin virustentorjuntaohjelman muodostamaan viitejoukkoon sekä evoluutiomalleihin, jotka oli muodostettu painotetuilla n-grammeilla. Tulokset viittaavat siihen, että permutaatioon perustuvaa lähestymistapaa voidaan menestyksekkäästi käyttää evoluutiomallien muodostamineen. Maliziöse Programme, wie z.B. Viren und Würmer, werden nur in den seltensten Fällen komplett neu geschrieben; als Ergebnis können zwischen verschiedenen maliziösen Codes Abhängigkeiten gefunden werden. Im Hinblick auf Klassifizierung und wissenschaftlichen Aufarbeitung neuer maliziöser Codes kann es sehr hilfreich erweisen, Abhängigkeiten zu bestehenden maliziösen Codes darzulegen und somit einen Stammbaum zu erstellen. In dem Artikel wird u.a. auf moderne Ansätze innerhalb der Staumbaumgenerierung anhand ausgewählter Win32 Viren eingegangen. I programmi maligni, quali virus e worm, sono raramente scritti da zero; questo significa che vi sono delle relazioni di evoluzione tra di loro. Scoprire queste relazioni e costruire una filogenia accurata puo’aiutare sia nell’analisi di nuovi programmi di questo tipo, sia per stabilire una nomenclatura avente una base solida. Cercare permutazioni di codice tra vari programmi puo’ dare un vantaggio per la generazione delle filogenie, dal momento che i passaggi evolutivi implementati dagli autori possono non aver preservato la sequenzialita’ del codice originario. In questo articolo descriviamo una famiglia di generatori di filogenie che effettuano clustering usando feature basate su alberi PQ. In un esperimento l’albero di output dei generatori viene confrontato con una classificazione di rifetimento ottenuta da un programma anti-virus, e con delle filogenie generate usando n-grammi pesati. I risultati indicano i risultati positivi dell’approccio basato su permutazioni nella generazione delle filogenie del malware. ",
"title": ""
},
{
"docid": "17dfbb112878f4cf4344c5dff195fa18",
"text": "Hybrid vehicle techniques have been widely studied recently because of their potential to significantly improve the fuel economy and drivability of future ground vehicles. Due to the dualpower-source nature of these vehicles, control strategies based on engineering intuition frequently fail to fully explore the potential of these advanced vehicles. In this paper, we will present a procedure for the design of an approximately optimal power management strategy. The design procedure starts by defining a cost function, such as minimizing a combination of fuel consumption and selected emission species over a driving cycle. Dynamic Programming (DP) is then utilized to find the optimal control actions. Through analysis of the behavior of the DP control actions, approximately optimal rules are extracted, which, unlike DP control signals, are implementable. The performance of the power management control strategy is verified by using the hybrid vehicle model HE-VESIM developed at the Automotive Research Center of the University of Michigan. A trade-off study between fuel economy and emissions was performed. It was found that significant emission reduction can be achieved at the expense of a small increase in fuel consumption. Power Management Strategy for a Parallel Hybrid Electric Truck",
"title": ""
},
{
"docid": "0397514e0d4a87bd8b59d9b317f8c660",
"text": "Formula 1 motorsport is a platform for maximum race car driving performance resulting from high-tech developments in the area of lightweight materials and aerodynamic design. In order to ensure the driver’s safety in case of high-speed crashes, special impact structures are designed to absorb the race car’s kinetic energy and limit the decelerations acting on the human body. These energy absorbing structures are made of laminated composite sandwich materials like the whole monocoque chassis and have to meet defined crash test requirements specified by the FIA. This study covers the crash behaviour of the nose cone as the F1 racing car front impact structure. Finite element models for dynamic simulations with the explicit solver LS-DYNA are developed with the emphasis on the composite material modelling. Numerical results are compared to crash test data in terms of deceleration levels, absorbed energy and crushing mechanisms. The validation led to satisfying results and the overall conclusion that dynamic simulations with LS-DYNA can be a helpful tool in the design phase of an F1 racing car front impact structure.",
"title": ""
},
{
"docid": "3ab4b094f3e32a4f467a849347157264",
"text": "Overview of geographically explicit momentary assessment research, applied to the study of mental health and well-being, which allows for cross-validation, extension, and enrichment of research on place and health. Building on the historical foundations of both ecological momentary assessment and geographic momentary assessment research, this review explores their emerging synergy into a more generalized and powerful research framework. Geographically explicit momentary assessment methods are rapidly advancing across a number of complimentary literatures that intersect but have not yet converged. Key contributions from these areas reveal tremendous potential for transdisciplinary and translational science. Mobile communication devices are revolutionizing research on mental health and well-being by physically linking momentary experience sampling to objective measures of socio-ecological context in time and place. Methodological standards are not well-established and will be required for transdisciplinary collaboration and scientific inference moving forward.",
"title": ""
},
{
"docid": "7cd8dee294d751ec6c703d628e0db988",
"text": "A major component of secondary education is learning to write effectively, a skill which is bolstered by repeated practice with formative guidance. However, providing focused feedback to every student on multiple drafts of each essay throughout the school year is a challenge for even the most dedicated of teachers. This paper first establishes a new ordinal essay scoring model and its state of the art performance compared to recent results in the Automated Essay Scoring field. Extending this model, we describe a method for using prediction on realistic essay variants to give rubric-specific formative feedback to writers. This method is used in Revision Assistant, a deployed data-driven educational product that provides immediate, rubric-specific, sentence-level feedback to students to supplement teacher guidance. We present initial evaluations of this feedback generation, both offline and in deployment.",
"title": ""
},
{
"docid": "02b764f5b047e3ed6f014f6df7c1c91a",
"text": "Policy learning for partially observed control tasks requires policies that can remember salient information from past observations. In this paper, we present a method for learning policies with internal memory for high-dimensional, continuous systems, such as robotic manipulators. Our approach consists of augmenting the state and action space of the system with continuous-valued memory states that the policy can read from and write to. Learning general-purpose policies with this type of memory representation directly is difficult, because the policy must automatically figure out the most salient information to memorize at each time step. We show that, by decomposing this policy search problem into a trajectory optimization phase and a supervised learning phase through a method called guided policy search, we can acquire policies with effective memorization and recall strategies. Intuitively, the trajectory optimization phase chooses the values of the memory states that will make it easier for the policy to produce the right action in future states, while the supervised learning phase encourages the policy to use memorization actions to produce those memory states. We evaluate our method on tasks involving continuous control in manipulation and navigation settings, and show that our method can learn complex policies that successfully complete a range of tasks that require memory.",
"title": ""
},
{
"docid": "73abeef146be96d979a56a4794a5e130",
"text": "Regular path queries (RPQs) are a fundamental part of recent graph query languages like SPARQL and PGQL. They allow the definition of recursive path structures through regular expressions in a declarative pattern matching environment. We study the use of the K2-tree graph compression technique to materialize RPQ results with low memory consumption for indexing. Compact index representations enable the efficient storage of multiple indexes for varying RPQs.",
"title": ""
},
{
"docid": "038064c2998a5da8664be1ba493a0326",
"text": "The bandit problem is revisited and considered under the PAC model. Our main contribution in this part is to show that given n arms, it suffices to pull the arms O( n 2 log 1 δ ) times to find an -optimal arm with probability of at least 1 − δ. This is in contrast to the naive bound of O( n 2 log n δ ). We derive another algorithm whose complexity depends on the specific setting of the rewards, rather than the worst case setting. We also provide a matching lower bound. We show how given an algorithm for the PAC model Multi-Armed Bandit problem, one can derive a batch learning algorithm for Markov Decision Processes. This is done essentially by simulating Value Iteration, and in each iteration invoking the multi-armed bandit algorithm. Using our PAC algorithm for the multi-armed bandit problem we improve the dependence on the number of actions.",
"title": ""
}
] |
scidocsrr
|
bad06d4da6237eeb9f836e7be361c431
|
Arabic speech recognition using MFCC feature extraction and ANN classification
|
[
{
"docid": "e7d3fae34553c61827b78e50c2e205ee",
"text": "Speaker Identification (SI) is the process of identifying the speaker from a given utterance by comparing the voice biometrics of the utterance with those utterance models stored beforehand. SI technologies are taken a new direction due to the advances in artificial intelligence and have been used widely in various domains. Feature extraction is one of the most important aspects of SI, which significantly influences the SI process and performance. This systematic review is conducted to identify, compare, and analyze various feature extraction approaches, methods, and algorithms of SI to provide a reference on feature extraction approaches for SI applications and future studies. The review was conducted according to Kitchenham systematic review methodology and guidelines, and provides an in-depth analysis on proposals and implementations of SI feature extraction methods discussed in the literature between year 2011 and 2106. Three research questions were determined and an initial set of 535 publications were identified to answer the questions. After applying exclusion criteria 160 related publications were shortlisted and reviewed in this paper; these papers were considered to answer the research questions. Results indicate that pure Mel-Frequency Cepstral Coefficients (MFCCs) based feature extraction approaches have been used more than any other approach. Furthermore, other MFCC variations, such as MFCC fusion and cleansing approaches, are proven to be very popular as well. This study identified that the current SI research trend is to develop a robust universal SI framework to address the important problems of SI such as adaptability, complexity, multi-lingual recognition, and noise robustness. The results presented in this research are based on past publications, citations, and number of implementations with citations being most relevant. This paper also presents the general process of SI. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "2a7de9a210dd074caebeef62d0a56700",
"text": "We describe a new algorithm to enumerate the k shortest simple (loopless) paths in a directed graph and report on its implementation. Our algorithm is based on a replacement paths algorithm proposed by Hershberger and Suri [2001], and can yield a factor Θ(n) improvement for this problem. But there is a caveat: The fast replacement paths subroutine is known to fail for some directed graphs. However, the failure is easily detected, and so our k shortest paths algorithm optimistically uses the fast subroutine, then switches to a slower but correct algorithm if a failure is detected. Thus, the algorithm achieves its Θ(n) speed advantage only when the optimism is justified. Our empirical results show that the replacement paths failure is a rare phenomenon, and the new algorithm outperforms the current best algorithms; the improvement can be substantial in large graphs. For instance, on GIS map data with about 5,000 nodes and 12,000 edges, our algorithm is 4--8 times faster. In synthetic graphs modeling wireless ad hoc networks, our algorithm is about 20 times faster.",
"title": ""
},
{
"docid": "1f629796e9180c14668e28b83dc30675",
"text": "In this article we tackle the issue of searchable encryption with a generalized query model. Departing from many previous works that focused on queries consisting of a single keyword, we consider the the case of queries consisting of arbitrary boolean expressions on keywords, that is to say conjunctions and disjunctions of keywords and their complement. Our construction of boolean symmetric searchable encryption BSSE is mainly based on the orthogonalization of the keyword field according to the Gram-Schmidt process. Each document stored in an outsourced server is associated with a label which contains all the keywords corresponding to the document, and searches are performed by way of a simple inner product. Furthermore, the queries in the BSSE scheme are randomized. This randomization hides the search pattern of the user since the search results cannot be associated deterministically to queries. We formally define an adaptive security model for the BSSE scheme. In addition, the search complexity is in $O(n)$ where $n$ is the number of documents stored in the outsourced server.",
"title": ""
},
{
"docid": "7360c92ef44058694135338acad6838c",
"text": "Modern chip multiprocessor (CMP) systems employ multiple memory controllers to control access to main memory. The scheduling algorithm employed by these memory controllers has a significant effect on system throughput, so choosing an efficient scheduling algorithm is important. The scheduling algorithm also needs to be scalable — as the number of cores increases, the number of memory controllers shared by the cores should also increase to provide sufficient bandwidth to feed the cores. Unfortunately, previous memory scheduling algorithms are inefficient with respect to system throughput and/or are designed for a single memory controller and do not scale well to multiple memory controllers, requiring significant finegrained coordination among controllers. This paper proposes ATLAS (Adaptive per-Thread Least-Attained-Service memory scheduling), a fundamentally new memory scheduling technique that improves system throughput without requiring significant coordination among memory controllers. The key idea is to periodically order threads based on the service they have attained from the memory controllers so far, and prioritize those threads that have attained the least service over others in each period. The idea of favoring threads with least-attained-service is borrowed from the queueing theory literature, where, in the context of a single-server queue it is known that least-attained-service optimally schedules jobs, assuming a Pareto (or any decreasing hazard rate) workload distribution. After verifying that our workloads have this characteristic, we show that our implementation of least-attained-service thread prioritization reduces the time the cores spend stalling and significantly improves system throughput. Furthermore, since the periods over which we accumulate the attained service are long, the controllers coordinate very infrequently to form the ordering of threads, thereby making ATLAS scalable to many controllers. We evaluate ATLAS on a wide variety of multiprogrammed SPEC 2006 workloads and systems with 4–32 cores and 1–16 memory controllers, and compare its performance to five previously proposed scheduling algorithms. Averaged over 32 workloads on a 24-core system with 4 controllers, ATLAS improves instruction throughput by 10.8%, and system throughput by 8.4%, compared to PAR-BS, the best previous CMP memory scheduling algorithm. ATLAS's performance benefit increases as the number of cores increases.",
"title": ""
},
{
"docid": "154fce165c43c3e90a172ffc6864ba39",
"text": "BACKGROUND CONTEXT\nSeveral studies report a favorable short-term outcome after nonoperatively treated two-column thoracic or lumbar burst fractures in patients without neurological deficits. Few reports have described the long-term clinical and radiological outcome after these fractures, and none have, to our knowledge, specifically evaluated the long-term outcome of the discs adjacent to the fractured vertebra, often damaged at injury and possibly at an increased risk of height reduction and degeneration with subsequent chronic back pain.\n\n\nPURPOSE\nTo evaluate the long-term clinical and radiological outcome after nonoperatively treated thoracic or lumbar burst fractures in adults, with special attention to posttraumatic radiological disc height reduction.\n\n\nSTUDY DESIGN\nCase series.\n\n\nPATIENT SAMPLE\nSixteen men with a mean age of 31 years (range, 19-44) and 11 women with a mean age of 40 years (range, 23-61) had sustained a thoracic or lumbar burst fracture during the years 1965 to 1973. Four had sustained a burst fracture Denis type A, 18 a Denis type B, 1 a Denis type C, and 4 a Denis type E. Seven of these patients had neurological deficits at injury, all retrospectively classified as Frankel D.\n\n\nOUTCOME MEASURES\nThe clinical outcome was evaluated subjectively with Oswestry score and questions regarding work capacity and objectively with the Frankel scale. The radiological outcome was evaluated with measurements of local kyphosis over the fractured segment, ratios of anterior and posterior vertebral body heights, adjacent disc heights, pedicle widths, sagittal width of the spinal canal, and lateral and anteroposterior displacement.\n\n\nMETHODS\nFrom the radiographical archives of an emergency hospital, all patients with a nonoperatively treated thoracic or lumbar burst fracture during the years 1965 to 1973 were registered. The fracture type, localization, primary treatment, and outcome were evaluated from the old radiographs, referrals, and reports. Twenty-seven individuals were clinically and radiologically evaluated a mean of 27 years (range, 23-41) after the injury.\n\n\nRESULTS\nAt follow-up, 21 former patients reported no or minimal back pain or disability (Oswestry Score mean 4; range, 0-16), whereas 6 former patients (of whom 3 were classified as Frankel D at baseline) reported moderate or severe disability (Oswestry Score mean 39; range, 26-54). Six former patients were classified as Frankel D, and the rest as Frankel E. Local kyphosis had increased by a mean of 3 degrees (p<.05), whereas the discs adjacent to the fractured vertebrae remained unchanged in height during the follow-up.\n\n\nCONCLUSIONS\nNonoperatively treated burst fractures of the thoracic or lumbar spine in adults with or without minor neurological deficits have a predominantly favorable long-term outcome, and there seems to be no increased risk for subsequent disc height reduction in the adjacent discs.",
"title": ""
},
{
"docid": "c6d2371a165acc46029eb4ad42df3270",
"text": "Video game playing is a popular activity and its enjoyment among frequent players has been associated with absorption and immersion experiences. This paper examines how immersion in the video game environment can influence the player during the game and afterwards (including fantasies, thoughts, and actions). This is what is described as Game Transfer Phenomena (GTP). GTP occurs when video game elements are associated with real life elements triggering subsequent thoughts, sensations and/or player actions. To investigate this further, a total of 42 frequent video game players aged between 15 and 21 years old were interviewed. Thematic analysis showed that many players experienced GTP, where players appeared to integrate elements of video game playing into their real lives. These GTP were then classified as either intentional or automatic experiences. Results also showed that players used video games for interacting with others as a form of amusement, modeling or mimicking video game content, and daydreaming about video games. Furthermore, the findings demonstrate how video games triggered intrusive thoughts, sensations, impulses, reflexes, visual illusions, and dissociations. DOI: 10.4018/ijcbpl.2011070102 16 International Journal of Cyber Behavior, Psychology and Learning, 1(3), 15-33, July-September 2011 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. 24/7 activity (e.g., Ng & Weimer-Hastings, 2005; Chappell, Eatough, Davies, & Griffiths, 2006; Grüsser, Thalemann, & Griffiths, 2007). Today’s video games have evolved due to technological advance, resulting in high levels of realism and emotional design that include diversity, experimentation, and (perhaps in some cases) sensory overload. Furthermore, video games have been considered as fantasy triggers because they offer ‘what if’ scenarios (Baranowski, Buday, Thompson, & Baranowski, 2008). What if the player could become someone else? What if the player could inhabit an improbable world? What if the player could interact with fantasy characters or situations (Woolley, 1995)? Entertainment media content can be very effective in capturing the minds and eliciting emotions in the individual. Research about novels, films, fairy tales and television programs has shown that entertainment can generate emotions such as joy, awe, compassion, fear and anger (Oatley, 1999; Tan 1996; Valkenburg Cantor & Peeters, 2000, cited in Jansz et al., 2005). Video games also have the capacity to generate such emotions and have the capacity for players to become both immersed in, and dissociated from, the video game. Dissociation and Immersion It is clear that dissociation is a somewhat “fuzzy” concept as there is no clear accepted definition of what it actually constitutes (Griffiths, Wood, Parke, & Parke, 2006). Most would agree that dissociation is a form of altered state of consciousness. However, dissociative behaviours lie on a continuum and range from individuals losing track of time, feeling like they are someone else, blacking out, not recalling how they got somewhere or what they did, and being in a trance like state (Griffiths et al., 2006). Studies have found that dissociation is related to an extensive involvement in fantasizing, and daydreaming (Giesbrecht, Geraerts, & Merckelbach, 2007). 
Dissociative phenomena of the non-pathological type include absorption and imaginative involvement (Griffith et al., 2006) and are psychological phenomena that can occur during video game playing. Anyone can, to some degree, experience dissociative states in their daily lives (Giesbrecht et al., 2007). Furthermore, these states can happen episodically and can be situationally triggered (Griffiths et al., 2006). When people become engaged in games they may experience psychological absorption. More commonly known as ‘immersion’, this refers to when individual logical integration of thoughts, feelings and experiences is suspended (Funk, Chan, Brouwer, & Curtiss, 2006; Wood, Griffiths, & Parke, 2007). This can incur an altered state of consciousness such as altered time perception and change in degree of control over cognitive functioning (Griffiths et al., 2006). Video game enjoyment has been associated with absorption and immersion experiences (IJsselsteijn, Kort, de Poels, Jurgelionis, & Belotti, 2007). How an individual can get immersed in video games has been explained by the phenomenon of ‘flow’ (Csikszentmihalyi, 1988). Flow refers to the optimum experience a person achieves when performing an activity (e.g., video game playing) and may be induced, in part, by the structural characteristics of the activity itself. Structural characteristics of video games (i.e., the game elements that are incorporated into the game by the games designers) are usually based on a balance between skill and challenge (Wood et al., 2004; King, Delfabbro, & Griffiths, 2010), and help make playing video games an intrinsically rewarding activity (Csikszentmihalyi, 1988; King, et al. 2010). Studying Video Game Playing Studying the effects of video game playing requires taking in consideration four independent dimensions suggested by Gentile and Stone (2005); amount, content, form, and mechanism. The amount is understood as the time spent playing and gaming habits. Content refers to the message and topic delivered by the video game. Form focuses on the types of activity necessary to perform in the video game. The mechanism refers to the input-output devices used, which 17 more pages are available in the full version of this document, which may be purchased using the \"Add to Cart\" button on the product's webpage: www.igi-global.com/article/game-transfer-phenomena-videogame/58041?camid=4v1 This title is available in InfoSci-Journals, InfoSci-Journal Disciplines Communications and Social Science, InfoSciCommunications, Online Engagement, and Media eJournal Collection, InfoSci-Educational Leadership, Administration, and Technologies eJournal Collection, InfoSci-Healthcare Administration, Clinical Practice, and Bioinformatics eJournal Collection, InfoSci-Select, InfoSci-Journal Disciplines Library Science, Information Studies, and Education, InfoSci-Journal Disciplines Medicine, Healthcare, and Life Science. Recommend this product to your librarian: www.igi-global.com/e-resources/libraryrecommendation/?id=2",
"title": ""
},
{
"docid": "38bd1d3ef5c314b380ad6459392a7fd8",
"text": "Routing Protocol for Low power and Lossy network (RPL) topology attacks can downgrade the network performance significantly by disrupting the optimal protocol structure. To detect such threats, we propose a RPL-specification, obtained by a semi-auto profiling technique that constructs a high-level abstract of operations through network simulation traces, to use as reference for verifying the node behaviors. This specification, including all the legitimate protocol states and transitions with corresponding statistics, will be implemented as a set of rules in the intrusion detection agents, in the form of the cluster heads propagated to monitor the whole network. In order to save resources, we set the cluster members to report related information about itself and other neighbors to the cluster head instead of making the head overhearing all the communication. As a result, information about a cluster member will be reported by different neighbors, which allow the cluster head to do cross-check. We propose to record the sequence in RPL Information Object (DIO) and Information Solicitation (DIS) messages to eliminate the synchronized issue created by the delay in transmitting the report, in which the cluster head only does cross-check on information that come from sources with the same sequence. Simulation results show that the proposed Intrusion Detection System (IDS) has a high accuracy rate in detecting RPL topology attacks, while only creating insignificant overhead (about 6.3%) that enable its scalability in large-scale network.",
"title": ""
},
{
"docid": "3132db67005f04591f93e77a2855caab",
"text": "Money laundering refers to activities pertaining to hiding the true income, evading taxes, or converting illegally earned money for normal use. These activities are often performed through shell companies that masquerade as real companies but where actual the purpose is to launder money. Shell companies are used in all the three phases of money laundering, namely, placement, layering, and integration, often simultaneously. In this paper, we aim to identify shell companies. We propose to use only bank transactions since that is easily available. In particular, we look at all incoming and outgoing transactions from a particular bank account along with its various attributes, and use anomaly detection techniques to identify the accounts that pertain to shell companies. Our aim is to create an initial list of potential shell company candidates which can be investigated by financial experts later. Due to lack of real data, we propose a banking transactions simulator (BTS) to simulate both honest as well as shell company transactions by studying a host of actual real-world fraud cases. We apply anomaly detection algorithms to detect candidate shell companies. Results indicate that we are able to identify the shell companies with a high degree of precision and recall.1",
"title": ""
},
{
"docid": "410a173b55faaad5a7ab01cf6e4d4b69",
"text": "BACKGROUND\nCommunication skills training (CST) based on the Japanese SHARE model of family-centered truth telling in Asian countries has been adopted in Taiwan. However, its effectiveness in Taiwan has only been preliminarily verified. This study aimed to test the effect of SHARE model-centered CST on Taiwanese healthcare providers' truth-telling preference, to determine the effect size, and to compare the effect of 1-day and 2-day CST programs on participants' truth-telling preference.\n\n\nMETHOD\nFor this one-group, pretest-posttest study, 10 CST programs were conducted from August 2010 to November 2011 under certified facilitators and with standard patients. Participants (257 healthcare personnel from northern, central, southern, and eastern Taiwan) chose the 1-day (n = 94) or 2-day (n = 163) CST program as convenient. Participants' self-reported truth-telling preference was measured before and immediately after CST programs, with CST program assessment afterward.\n\n\nRESULTS\nThe CST programs significantly improved healthcare personnel's truth-telling preference (mean pretest and posttest scores ± standard deviation (SD): 263.8 ± 27.0 vs. 281.8 ± 22.9, p < 0.001). The CST programs effected a significant, large (d = 0.91) improvement in overall truth-telling preference and significantly improved method of disclosure, emotional support, and additional information (p < 0.001). Participation in 1-day or 2-day CST programs did not significantly affect participants' truth-telling preference (p > 0.05) except for the setting subscale. Most participants were satisfied with the CST programs (93.8%) and were willing to recommend them to colleagues (98.5%).\n\n\nCONCLUSIONS\nThe SHARE model-centered CST programs significantly improved Taiwanese healthcare personnel's truth-telling preference. Future studies should objectively assess participants' truth-telling preference, for example, by cancer patients, their families, and other medical team personnel and at longer times after CST programs.",
"title": ""
},
{
"docid": "dea3bce3f636c87fad95f255aceec858",
"text": "In recent work, conditional Markov chain models (CMM) have been used to extract information from semi-structured text (one example is the Conditional Random Field [10]). Applications range from finding the author and title in research papers to finding the phone number and street address in a web page. The CMM framework combines a priori knowledge encoded as features with a set of labeled training data to learn an efficient extraction process. We will show that similar problems can be solved more effectively by learning a discriminative context free grammar from training data. The grammar has several distinct advantages: long range, even global, constraints can be used to disambiguate entity labels; training data is used more efficiently; and a set of new more powerful features can be introduced. The grammar based approach also results in semantic information (encoded in the form of a parse tree) which could be used for IR applications like question answering. The specific problem we consider is of extracting personal contact, or address, information from unstructured sources such as documents and emails. While linear-chain CMMs perform reasonably well on this task, we show that a statistical parsing approach results in a 50% reduction in error rate. This system also has the advantage of being interactive, similar to the system described in [9]. In cases where there are multiple errors, a single user correction can be propagated to correct multiple errors automatically. Using a discriminatively trained grammar, 93.71% of all tokens are labeled correctly (compared to 88.43% for a CMM) and 72.87% of records have all tokens labeled correctly (compared to 45.29% for the CMM).",
"title": ""
},
{
"docid": "8499953a543d16f321c2fd97b1edd7a4",
"text": "The purpose of this phenomenological study was to identify commonly occurring factors in filicide-suicide offenders, to describe this phenomenon better, and ultimately to enhance prevention of child murder. Thirty families' files from a county coroner's office were reviewed for commonly occurring factors in cases of filicide-suicide. Parental motives for filicide-suicide included altruistic and acutely psychotic motives. Twice as many fathers as mothers committed filicide-suicide during the study period, and older children were more often victims than infants. Records indicated that parents frequently showed evidence of depression or psychosis and had prior mental health care. The data support the hypothesis that traditional risk factors for violence appear different from commonly occurring factors in filicide-suicide. This descriptive study represents a step toward understanding filicide-suicide risk.",
"title": ""
},
{
"docid": "30155835ff3e74f0beb3c9b84ce9306f",
"text": "Wireless Sensor Networks (WSNs) are gradually adopted in the industrial world due to their advantages over wired networks. In addition to saving cabling costs, WSNs widen the realm of environments feasible for monitoring. They thus add sensing and acting capabilities to objects in the physical world and allow for communication among these objects or with services in the future Internet. However, the acceptance of WSNs by the industrial automation community is impeded by open issues, such as security guarantees and provision of Quality of Service (QoS). To examine both of these perspectives, we select and survey relevant WSN technologies dedicated to industrial automation. We determine QoS requirements and carry out a threat analysis, which act as basis of our evaluation of the current state-of-the-art. According to the results of this evaluation, we identify and discuss open research issues.",
"title": ""
},
{
"docid": "63198927563faa609e6520a01a56b20c",
"text": "A 1.2 V 4 Gb DDR4 SDRAM is presented in a 30 nm CMOS technology. DDR4 SDRAM is developed to raise memory bandwidth with lower power consumption compared with DDR3 SDRAM. Various functions and circuit techniques are newly adopted to reduce power consumption and secure stable transaction. First, dual error detection scheme is proposed to guarantee the reliability of signals. It is composed of cyclic redundancy check (CRC) for DQ channel and command-address (CA) parity for command and address channel. For stable reception of high speed signals, a gain enhanced buffer and PVT tolerant data fetch scheme are adopted for CA and DQ respectively. To reduce the output jitter, the type of delay line is selected depending on data rate at initial stage. As a result, test measurement shows 3.3 Gb/s DDR operation at 1.14 V.",
"title": ""
},
{
"docid": "da9ad1156191f725b1a55f7b886b7746",
"text": "As the quality of natural language generated by artificial intelligence systems improves, writing interfaces can support interventions beyond grammar-checking and spell-checking, such as suggesting content to spark new ideas. To explore the possibility of machine-in-the-loop creative writing, we performed two case studies using two system prototypes, one for short story writing and one for slogan writing. Participants in our studies were asked to write with a machine in the loop or alone (control condition). They assessed their writing and experience through surveys and an open-ended interview. We collected additional assessments of the writing from Amazon Mechanical Turk crowdworkers. Our findings indicate that participants found the process fun and helpful and could envision use cases for future systems. At the same time, machine suggestions do not necessarily lead to better written artifacts. We therefore suggest novel natural language models and design choices that may better support creative writing.",
"title": ""
},
{
"docid": "d994b23ea551f23215232c0771e7d6b3",
"text": "It is said that there’s nothing so practical as good theory. It may also be said that there’s nothing so theoretically interesting as good practice1. This is particularly true of efforts to relate constructivism as a theory of learning to the practice of instruction. Our goal in this paper is to provide a clear link between the theoretical principles of constructivism, the practice of instructional design, and the practice of teaching. We will begin with a basic characterization of constructivism identifying what we believe to be the central principles in learning and understanding. We will then identify and elaborate on eight instructional principles for the design of a constructivist learning environment. Finally, we will examine what we consider to be one of the best exemplars of a constructivist learning environment -Problem Based Learning as described by Barrows (1985, 1986, 1992).",
"title": ""
},
{
"docid": "b9bf838263410114ec85c783d26d92aa",
"text": "We give a denotational framework (a “meta model”) within which certain properties of models of computation can be compared. It describes concurrent processes in general terms as sets of possible behaviors. A process is determinate if, given the constraints imposed by the inputs, there are exactly one or exactly zero behaviors. Compositions of processes are processes with behaviors in the intersection of the behaviors of the component processes. The interaction between processes is through signals, which are collections of events. Each event is a value-tag pair, where the tags can come from a partially ordered or totally ordered set. Timed models are where the set of tags is totally ordered. Synchronous events share the same tag, and synchronous signals contain events with the same set of tags. Synchronous processes have only synchronous signals as behaviors. Strict causality (in timed tag systems) and continuity (in untimed tag systems) ensure determinacy under certain technical conditions. The framework is used to compare certain essential features of various models of computation, including Kahn process networks, dataflow, sequential processes, concurrent sequential processes with rendezvous, Petri nets, and discrete-event systems.",
"title": ""
},
{
"docid": "151f05c2604c60d3b779a7059ed797e6",
"text": "This study used quantitative volumetric magnetic resonance imaging techniques to explore the neuroanatomic correlates of chronic, combat-related posttraumatic stress disorder (PTSD) in seven Vietnam veterans with PTSD compared with seven nonPTSD combat veterans and eight normal nonveterans. Both left and right hippocampi were significantly smaller in the PTSD subjects compared to the Combat Control and Normal subjects, even after adjusting for age, whole brain volume, and lifetime alcohol consumption. There were no statistically significant group differences in intracranial cavity, whole brain, ventricles, ventricle:brain ratio, or amygdala. Subarachnoidal cerebrospinal fluid was increased in both veteran groups. Our finding of decreased hippocampal volume in PTSD subjects is consistent with results of other investigations which utilized only trauma-unexposed control groups. Hippocampal volume was directly correlated with combat exposure, which suggests that traumatic stress may damage the hippocampus. Alternatively, smaller hippocampi volume may be a pre-existing risk factor for combat exposure and/or the development of PTSD upon combat exposure.",
"title": ""
},
{
"docid": "ce74305a30bd322a78b3827921ae7224",
"text": "While computerised tomography (CT) may have been the first imaging tool to study human brain, it has not yet been implemented into clinical decision making process for diagnosis of Alzheimer's disease (AD). On the other hand, with the nature of being prevalent, inexpensive and non-invasive, CT does present diagnostic features of AD to a great extent. This study explores the significance and impact on the application of the burgeoning deep learning techniques to the task of classification of CT brain images, in particular utilising convolutional neural network (CNN), aiming at providing supplementary information for the early diagnosis of Alzheimer's disease. Towards this end, three categories of CT images (N = 285) are clustered into three groups, which are AD, lesion (e.g. tumour) and normal ageing. In addition, considering the characteristics of this collection with larger thickness along the direction of depth (z) (~3-5 mm), an advanced CNN architecture is established integrating both 2D and 3D CNN networks. The fusion of the two CNN networks is subsequently coordinated based on the average of Softmax scores obtained from both networks consolidating 2D images along spatial axial directions and 3D segmented blocks respectively. As a result, the classification accuracy rates rendered by this elaborated CNN architecture are 85.2%, 80% and 95.3% for classes of AD, lesion and normal respectively with an average of 87.6%. Additionally, this improved CNN network appears to outperform the others when in comparison with 2D version only of CNN network as well as a number of state of the art hand-crafted approaches. As a result, these approaches deliver accuracy rates in percentage of 86.3, 85.6 ± 1.10, 86.3 ± 1.04, 85.2 ± 1.60, 83.1 ± 0.35 for 2D CNN, 2D SIFT, 2D KAZE, 3D SIFT and 3D KAZE respectively. The two major contributions of the paper constitute a new 3-D approach while applying deep learning technique to extract signature information rooted in both 2D slices and 3D blocks of CT images and an elaborated hand-crated approach of 3D KAZE.",
"title": ""
},
{
"docid": "e4f31c3e7da3ad547db5fed522774f0e",
"text": "Surface reconstruction from oriented points can be cast as a spatial Poisson problem. This Poisson formulation considers all the points at once, without resorting to heuristic spatial partitioning or blending, and is therefore highly resilient to data noise. Unlike radial basis function schemes, the Poisson approach allows a hierarchy of locally supported basis functions, and therefore the solution reduces to a well conditioned sparse linear system. To reconstruct detailed models in limited memory, we solve this Poisson formulation efficiently using a streaming framework. Specifically, we introduce a multilevel streaming representation, which enables efficient traversal of a sparse octree by concurrently advancing through multiple streams, one per octree level. Remarkably, for our reconstruction application, a sufficiently accurate solution to the global linear system is obtained using a single iteration of cascadic multigrid, which can be evaluated within a single multi-stream pass. Finally, we explore the application of Poisson reconstruction to the setting of multi-view stereo, to reconstruct detailed 3D models of outdoor scenes from collections of Internet images.\n This is joint work with Michael Kazhdan, Matthew Bolitho, and Randal Burns (Johns Hopkins University), and Michael Goesele, Noah Snavely, Brian Curless, and Steve Seitz (University of Washington).",
"title": ""
},
{
"docid": "83e897a37aca4c349b4a910c9c0787f4",
"text": "Computational imaging methods that can exploit multiple modalities have the potential to enhance the capabilities of traditional sensing systems. In this paper, we propose a new method that reconstructs multimodal images from their linear measurements by exploiting redundancies across different modalities. Our method combines a convolutional group-sparse representation of images with total variation (TV) regularization for high-quality multimodal imaging. We develop an online algorithm that enables the unsupervised learning of convolutional dictionaries on large-scale datasets that are typical in such applications. We illustrate the benefit of our approach in the context of joint intensity-depth imaging.",
"title": ""
},
{
"docid": "de455ce971c40fe49d14415cd8164122",
"text": "Cardiovascular disease remains the most common health problem in developed countries, and residual risk after implementing all current therapies is still high. Permanent changes in lifestyle may be hard to achieve and people may not always be motivated enough to make the recommended modifications. Emerging research has explored the application of natural food-based strategies in disease management. In recent years, much focus has been placed on the beneficial effects of fish consumption. Many of the positive effects of fish consumption on dyslipidemia and heart diseases have been attributed to n-3 polyunsaturated fatty acids (n-3 PUFAs, i.e., EPA and DHA); however, fish is also an excellent source of protein and, recently, fish protein hydrolysates containing bioactive peptides have shown promising activities for the prevention/management of cardiovascular disease and associated health complications. The present review will focus on n-3 PUFAs and bioactive peptides effects on cardiovascular disease risk factors. Moreover, since considerable controversy exists regarding the association between n-3 PUFAs and major cardiovascular endpoints, we have also reviewed the main clinical trials supporting or not this association.",
"title": ""
}
] |
scidocsrr
|
85919d20abd30448a6b7840f8fadcbba
|
Active Learning of Pareto Fronts
|
[
{
"docid": "3228d57f3d74f56444ce7fb9ed18e042",
"text": "Gaussian process (GP) models are widely used to perform Bayesian nonlinear regression and classification — tasks that are central to many machine learning problems. A GP is nonparametric, meaning that the complexity of the model grows as more data points are received. Another attractive feature is the behaviour of the error bars. They naturally grow in regions away from training data where we have high uncertainty about the interpolating function. In their standard form GPs have several limitations, which can be divided into two broad categories: computational difficulties for large data sets, and restrictive modelling assumptions for complex data sets. This thesis addresses various aspects of both of these problems. The training cost for a GP hasO(N3) complexity, whereN is the number of training data points. This is due to an inversion of the N × N covariance matrix. In this thesis we develop several new techniques to reduce this complexity to O(NM2), whereM is a user chosen number much smaller thanN . The sparse approximation we use is based on a set of M ‘pseudo-inputs’ which are optimised together with hyperparameters at training time. We develop a further approximation based on clustering inputs that can be seen as a mixture of local and global approximations. Standard GPs assume a uniform noise variance. We use our sparse approximation described above as a way of relaxing this assumption. By making a modification of the sparse covariance function, we can model input dependent noise. To handle high dimensional data sets we use supervised linear dimensionality reduction. As another extension of the standard GP, we relax the Gaussianity assumption of the process by learning a nonlinear transformation of the output space. All these techniques further increase the applicability of GPs to real complex data sets. We present empirical comparisons of our algorithms with various competing techniques, and suggest problem dependent strategies to follow in practice.",
"title": ""
}
] |
[
{
"docid": "6bab9326dd38f25794525dc852ece818",
"text": "The transformation from high level task speci cation to low level motion control is a fundamental issue in sensorimotor control in animals and robots. This thesis develops a control scheme called virtual model control which addresses this issue. Virtual model control is a motion control language which uses simulations of imagined mechanical components to create forces, which are applied through joint torques, thereby creating the illusion that the components are connected to the robot. Due to the intuitive nature of this technique, designing a virtual model controller requires the same skills as designing the mechanism itself. A high level control system can be cascaded with the low level virtual model controller to modulate the parameters of the virtual mechanisms. Discrete commands from the high level controller would then result in uid motion. An extension of Gardner's Partitioned Actuator Set Control method is developed. This method allows for the speci cation of constraints on the generalized forces which each serial path of a parallel mechanism can apply. Virtual model control has been applied to a bipedal walking robot. A simple algorithm utilizing a simple set of virtual components has successfully compelled the robot to walk eight consecutive steps. Thesis Supervisor: Gill A. Pratt Title: Assistant Professor of Electrical Engineering and Computer Science",
"title": ""
},
{
"docid": "2ce9d2923b6b8be5027e23fb905e8b4d",
"text": "A number of recent advances have been achieved in the study of midbrain dopaminergic neurons. Understanding these advances and how they relate to one another requires a deep understanding of the computational models that serve as an explanatory framework and guide ongoing experimental inquiry. This intertwining of theory and experiment now suggests very clearly that the phasic activity of the midbrain dopamine neurons provides a global mechanism for synaptic modification. These synaptic modifications, in turn, provide the mechanistic underpinning for a specific class of reinforcement learning mechanisms that now seem to underlie much of human and animal behavior. This review describes both the critical empirical findings that are at the root of this conclusion and the fantastic theoretical advances from which this conclusion is drawn.",
"title": ""
},
{
"docid": "26140dbe32672dc138c46e7fd6f39b1a",
"text": "The state of the art in probabilistic demand forecasting [40] minimizes Quantile Loss to predict the future demand quantiles for different horizons. However, since quantiles aren’t additive, in order to predict the total demand for any wider future interval all required intervals are usually appended to the target vector during model training. The separate optimization of these overlapping intervals can lead to inconsistent forecasts, i.e. forecasts which imply an invalid joint distribution between different horizons. As a result, inter-temporal decision making algorithms that depend on the joint or step-wise conditional distribution of future demand cannot utilize these forecasts. In this work, we address the problem by using sample paths to predict future demand quantiles in a consistent manner and propose several novel methodologies to solve this problem. Our work covers the use of covariance shrinkage methods, autoregressive models, generative adversarial networks and also touches on the use of variational autoencoders and Bayesian Dropout.",
"title": ""
},
{
"docid": "f92f0a3d46eaf14e478a41f87b8ad369",
"text": "The agricultural productivity of India is gradually declining due to destruction of crops by various natural calamities and the crop rotation process being affected by irregular climate patterns. Also, the interest and efforts put by farmers lessen as they grow old which forces them to sell their agricultural lands, which automatically affects the production of agricultural crops and dairy products. This paper mainly focuses on the ways by which we can protect the crops during an unavoidable natural disaster and implement technology induced smart agro-environment, which can help the farmer manage large fields with less effort. Three common issues faced during agricultural practice are shearing furrows in case of excess rain or flood, manual watering of plants and security against animal grazing. This paper provides a solution for these problems by helping farmer monitor and control various activities through his mobile via GSM and DTMF technology in which data is transmitted from various sensors placed in the agricultural field to the controller and the status of the agricultural parameters are notified to the farmer using which he can take decisions accordingly. The main advantage of this system is that it is semi-automated i.e. the decision is made by the farmer instead of fully automated decision that results in precision agriculture. It also overcomes the existing traditional practices that require high money investment, energy, labour and time.",
"title": ""
},
{
"docid": "67da4c8ba04d3911118147b829ba9c50",
"text": "A methodology for the development of a fuzzy expert system (FES) with application to earthquake prediction is presented. The idea is to reproduce the performance of a human expert in earthquake prediction. To do this, at the first step, rules provided by the human expert are used to generate a fuzzy rule base. These rules are then fed into an inference engine to produce a fuzzy inference system (FIS) and to infer the results. In this paper, we have used a Sugeno type fuzzy inference system to build the FES. At the next step, the adaptive network-based fuzzy inference system (ANFIS) is used to refine the FES parameters and improve its performance. The proposed framework is then employed to attain the performance of a human expert used to predict earthquakes in the Zagros area based on the idea of coupled earthquakes. While the prediction results are promising in parts of the testing set, the general performance indicates that prediction methodology based on coupled earthquakes needs more investigation and more complicated reasoning procedure to yield satisfactory predictions.",
"title": ""
},
{
"docid": "d579ed125d3a051069b69f634fffe488",
"text": "Culture can be thought of as a set of everyday practices and a core theme-individualism, collectivism, or honor-as well as the capacity to understand each of these themes. In one's own culture, it is easy to fail to see that a cultural lens exists and instead to think that there is no lens at all, only reality. Hence, studying culture requires stepping out of it. There are two main methods to do so: The first involves using between-group comparisons to highlight differences and the second involves using experimental methods to test the consequences of disruption to implicit cultural frames. These methods highlight three ways that culture organizes experience: (a) It shields reflexive processing by making everyday life feel predictable, (b) it scaffolds which cognitive procedure (connect, separate, or order) will be the default in ambiguous situations, and (c) it facilitates situation-specific accessibility of alternate cognitive procedures. Modern societal social-demographic trends reduce predictability and increase collectivism and honor-based go-to cognitive procedures.",
"title": ""
},
{
"docid": "1971cb1d7876256ecf0342d0a51fe7e7",
"text": "Senescent cells accumulate with aging and at sites of pathology in multiple chronic diseases. Senolytics are drugs that selectively promote apoptosis of senescent cells by temporarily disabling the pro-survival pathways that enable senescent cells to resist the pro-apoptotic, pro-inflammatory factors that they themselves secrete. Reducing senescent cell burden by genetic approaches or by administering senolytics delays or alleviates multiple age- and disease-related adverse phenotypes in preclinical models. Reported senolytics include dasatinib, quercetin, navitoclax (ABT263), and piperlongumine. Here we report that fisetin, a naturally-occurring flavone with low toxicity, and A1331852 and A1155463, selective BCL-XL inhibitors that may have less hematological toxicity than the less specific BCL-2 family inhibitor navitoclax, are senolytic. Fisetin selectively induces apoptosis in senescent but not proliferating human umbilical vein endothelial cells (HUVECs). It is not senolytic in senescent IMR90 cells, a human lung fibroblast strain, or primary human preadipocytes. A1331852 and A1155463 are senolytic in HUVECs and IMR90 cells, but not preadipocytes. These agents may be better candidates for eventual translation into clinical interventions than some existing senolytics, such as navitoclax, which is associated with hematological toxicity.",
"title": ""
},
{
"docid": "941dc605dab6cf9bfe89bedb2b4f00a3",
"text": "Word boundary detection in continuous speech is very common and important problem in speech synthesis and recognition. Several researches are open on this field. Since there is no sign of start of the word, end of the word and number of words in the spoken utterance of any natural language, one must study the intonation pattern of a particular language. In this paper an algorithm is proposed to detect word boundaries in continuous speech of Hindi language. A careful study of the intonation pattern of Hindi language has been done. Based on this study it is observed that, there are several suprasegmental parameters of speech signal such as pitch, F0 fundamental frequency, duration, intensity, and pause, which can play important role in finding some clues to detect the start and the end of the word from the spoken utterance of Hindi Language. The proposed algorithm is based mainly on two prosodic parameters, pitch and intensity.",
"title": ""
},
{
"docid": "c10ac9c3117627b2abb87e268f5de6b1",
"text": "Now days, the number of crime over children is increasing day by day. the implementation of School Security System(SSS) via RFID to avoid crime, illegal activates by students and reduce worries among parents. The project is the combination of latest Technology using RFID, GPS/GSM, image processing, WSN and web based development using Php,VB.net language apache web server and SQL. By using RFID technology it is easy track the student thus enhances the security and safety in selected zone. The information about student such as in time and out time from Bus and campus will be recorded to web based system and the GPS/GSM system automatically sends information (SMS / Phone Call) toothier parents. That the student arrived to Bus/Campus safely.",
"title": ""
},
{
"docid": "b07ea7995bb865b226f5834a54c70aa4",
"text": "The explosive growth in the usage of IEEE 802.11 network has resulted in dense deployments in diverse environments. Most recently, the IEEE working group has triggered the IEEE 802.11ax project, which aims to amend the current IEEE 802.11 standard to improve efficiency of dense WLANs. In this paper, we evaluate the Dynamic Sensitivity Control (DSC) Algorithm proposed for IEEE 802.11ax. This algorithm dynamically adjusts the Carrier Sense Threshold (CST) based on the average received signal strength. We show that the aggregate throughput of a dense network utilizing DSC is considerably improved (i.e. up to 20%) when compared with the IEEE 802.11 legacy network.",
"title": ""
},
{
"docid": "f20c0ace77f7b325d2ae4862d300d440",
"text": "http://dx.doi.org/10.1016/j.knosys.2014.02.003 0950-7051/ 2014 Elsevier B.V. All rights reserved. ⇑ Corresponding author. Address: Zhejiang University, Hangzhou 310027, China. Tel.: +86 571 87951453. E-mail addresses: xlzheng@zju.edu.cn (X. Zheng), nblin@zju.edu.cn (Z. Lin), alexwang@zju.edu.cn (X. Wang), klin@ece.uci.edu (K.-J. Lin), mnsong@bupt.edu.cn (M. Song). 1 http://www.yelp.com/. Xiaolin Zheng a,b,⇑, Zhen Lin , Xiaowei Wang , Kwei-Jay Lin , Meina Song e",
"title": ""
},
{
"docid": "37ceb75634c9801e3f83c36a15dc879b",
"text": "Semantic visualization integrates topic modeling and visualization, such that every document is associated with a topic distribution as well as visualization coordinates on a low-dimensional Euclidean space. We address the problem of semantic visualization for short texts. Such documents are increasingly common, including tweets, search snippets, news headlines, or status updates. Due to their short lengths, it is difficult to model semantics as the word co-occurrences in such a corpus are very sparse. Our approach is to incorporate auxiliary information, such as word embeddings from a larger corpus, to supplement the lack of co-occurrences. This requires the development of a novel semantic visualization model that seamlessly integrates visualization coordinates, topic distributions, and word vectors. We propose a model called GaussianSV, which outperforms pipelined baselines that derive topic models and visualization coordinates as disjoint steps, as well as semantic visualization baselines that do not consider word embeddings.",
"title": ""
},
{
"docid": "09c209f1e36dc97458a8edc4a08e5351",
"text": "We proposed neural network architecture based on Convolution Neural Network(CNN) for temporal relation classification in sentence. First, we transformed word into vector by using word embedding. In Feature Extraction, we extracted two type of features. Lexical level feature considered meaning of marked entity and Sentence level feature considered context of the sentence. Window processing was used to reflect local context and Convolution and Max-pooling operation were used for global context. We concatenated both feature vectors and used softmax operation to compute confidence score. Because experiment results didn't outperform the state-of-the-art methods, we suggested some future works to do.",
"title": ""
},
{
"docid": "e23cebac640a47643b3a3249eae62f89",
"text": "Objective: To assess the factors that contribute to impaired quinine clearance in acute falciparum malaria. Patients: Sixteen adult Thai patients with severe or moderately severe falciparum malaria were studied, and 12 were re-studied during convalescence. Methods: The clearance of quinine, dihydroquinine (an impurity comprising up to 10% of commercial quinine formulations), antipyrine (a measure of hepatic mixed-function oxidase activity), indocyanine green (ICG) (a measure of liver blood flow), and iothalamate (a measure of glomerular filtration rate) were measured simultaneously, and the relationship of these values to the␣biotransformation of quinine to the active metabolite 3-hydroxyquinine was assessed. Results: During acute malaria infection, the systemic clearance of quinine, antipyrine and ICG and the biotransformation of quinine to 3-hydroxyquinine were all reduced significantly when compared with values during convalescence. Iothalamate clearance was not affected significantly and did not correlate with the clearance of any of the other compounds. The clearance of total and free quinine correlated significantly with antipyrine clearance (r s = 0.70, P = 0.005 and r s = 0.67, P = 0.013, respectively), but not with ICG clearance (r s = 0.39 and 0.43 respectively, P > 0.15). In a multiple regression model, antipyrine clearance and plasma protein binding accounted for 71% of the variance in total quinine clearance in acute malaria. The pharmacokinetic properties of dihydroquinine were generally similar to those of quinine, although dihydroquinine clearance was less affected by acute malaria. The mean ratio of quinine to 3-hydroxyquinine area under the plasma concentration-time curve (AUC) values in acute malaria was 12.03 compared with 6.92 during convalescence P=0.01. The mean plasma protein binding of 3-hydroxyquinine was 46%, which was significantly lower than that of quinine (90.5%) or dihydroquinine (90.5%). Conclusion: The reduction in quinine clearance in acute malaria results predominantly from a disease-induced dysfunction in hepatic mixed-function oxidase activity (principally CYP 3A) which impairs the conversion of quinine to its major metabolite, 3-hydroxyquinine. The metabolite contributes approximately 5% of the antimalarial activity of the parent compound in malaria, but up to 10% during convalescence.",
"title": ""
},
{
"docid": "48126a601f93eea84b157040c83f8861",
"text": "Citation counts and intra-conference citations are one useful measure of the impact of prior research in a field. We have developed CiteVis, a visualization system for portraying citation data about the IEEE InfoVis Conference and its papers. Rather than use a node-link network visualization, we employ an attribute-based layout along with interaction to foster exploration and knowledge discovery.",
"title": ""
},
{
"docid": "af7803b0061e75659f718d56ba9715b3",
"text": "An emerging body of multidisciplinary literature has documented the beneficial influence of physical activity engendered through aerobic exercise on selective aspects of brain function. Human and non-human animal studies have shown that aerobic exercise can improve a number of aspects of cognition and performance. Lack of physical activity, particularly among children in the developed world, is one of the major causes of obesity. Exercise might not only help to improve their physical health, but might also improve their academic performance. This article examines the positive effects of aerobic physical activity on cognition and brain function, at the molecular, cellular, systems and behavioural levels. A growing number of studies support the idea that physical exercise is a lifestyle factor that might lead to increased physical and mental health throughout life.",
"title": ""
},
{
"docid": "d40aa76e76c44da4c6237f654dcdab45",
"text": "The flipped classroom pedagogy has achieved significant mention in academic circles in recent years. \"Flipping\" involves the reinvention of a traditional course so that students engage with learning materials via recorded lectures and interactive exercises prior to attending class and then use class time for more interactive activities. Proper implementation of a flipped classroom is difficult to gauge, but combines successful techniques for distance education with constructivist learning theory in the classroom. While flipped classrooms are not a novel concept, technological advances and increased comfort with distance learning have made the tools to produce and consume course materials more pervasive. Flipped classroom experiments have had both positive and less-positive results and are generally measured by a significant improvement in learning outcomes. This study, however, analyzes the opinions of students in a flipped sophomore-level information technology course by using a combination of surveys and reflective statements. The author demonstrates that at the outset students are new - and somewhat receptive - to the concept of the flipped classroom. By the conclusion of the course satisfaction with the pedagogy is significant. Finally, student feedback is provided in an effort to inform instructors in the development of their own flipped classrooms.",
"title": ""
},
{
"docid": "e13d6cd043ea958e9731c99a83b6de18",
"text": "In this article, an overview and an in-depth analysis of the most discussed 5G waveform candidates are presented. In addition to general requirements, the nature of each waveform is revealed including the motivation, the underlying methodology, and the associated advantages and disadvantages. Furthermore, these waveform candidates are categorized and compared both qualitatively and quantitatively. By doing all these, the study in this work offers not only design guidelines but also operational suggestions for the 5G waveform.",
"title": ""
},
{
"docid": "6ab8b5bd7ce3582df99d5601225c1779",
"text": "Nowadays, the number of users, speed of internet and processing power of devices are increasing at a tremendous rate. For maintaining the balance between users and company networks with product or service, our system must evolve and modify to handle the future load of data. Currently, we are using file systems, Database servers and some semi-structured file systems. But all these systems are mostly independent, differ from each other in many except and never on the single roof for easy, effective use. So, to minimize the problems for developing apps, website, game development easier, Google came with the solution as their product Firebase. Firebase is implementing a real-time database, crash reporting, authentication, cloud functions, cloud storage, hosting, test-lab, performance monitoring and analytics on a single system platform for speed, security as well as efficiency. Systems like these are also developed by some big companies like Facebook, IBM, Linkedin, etc for their personal use. So we can say that Firebase will have the power to handle the future requirement.",
"title": ""
},
{
"docid": "6476066913e37c88e94cc83c15b05f43",
"text": "The Aduio-visual Speech Recognition (AVSR) which employs both the video and audio information to do Automatic Speech Recognition (ASR) is one of the application of multimodal leaning making ASR system more robust and accuracy. The traditional models usually treated AVSR as inference or projection but strict prior limits its ability. As the revival of deep learning, Deep Neural Networks (DNN) becomes an important toolkit in many traditional classification tasks including ASR, image classification, natural language processing. Some DNN models were used in AVSR like Multimodal Deep Autoencoders (MDAEs), Multimodal Deep Belief Network (MDBN) and Multimodal Deep Boltzmann Machine (MDBM) that actually work better than traditional methods. However, such DNN models have several shortcomings: (1) They don’t balance the modal fusion and temporal fusion, or even haven’t temporal fusion; (2)The architecture of these models isn’t end-to-end, the training and testing getting cumbersome. We propose a DNN model, Auxiliary Multimodal LSTM (am-LSTM), to overcome such weakness. The am-LSTM could be trained and tested in one time, alternatively easy to train and preventing overfitting automatically. The extensibility and flexibility are also take into consideration. The experiments shows that am-LSTM is much better than traditional methods and other DNN models in three datasets: AVLetters, AVLetters2, AVDigits.",
"title": ""
}
] |
scidocsrr
|
e690711cb18766db09e76ccc5c36c03c
|
VisReduce: Fast and responsive incremental information visualization of large datasets
|
[
{
"docid": "98e170b4beb59720e49916835572d1b0",
"text": "Scatterplot matrices (SPLOMs), parallel coordinates, and glyphs can all be used to visualize the multiple continuous variables (i.e., dependent variables or measures) in multidimensional multivariate data. However, these techniques are not well suited to visualizing many categorical variables (i.e., independent variables or dimensions). To visualize multiple categorical variables, 'hierarchical axes' that 'stack dimensions' have been used in systems like Polaris and Tableau. However, this approach does not scale well beyond a small number of categorical variables. Emerson et al. [8] extend the matrix paradigm of the SPLOM to simultaneously visualize several categorical and continuous variables, displaying many kinds of charts in the matrix depending on the kinds of variables involved. We propose a variant of their technique, called the Generalized Plot Matrix (GPLOM). The GPLOM restricts Emerson et al.'s technique to only three kinds of charts (scatterplots for pairs of continuous variables, heatmaps for pairs of categorical variables, and barcharts for pairings of categorical and continuous variable), in an effort to make it easier to understand. At the same time, the GPLOM extends Emerson et al.'s work by demonstrating interactive techniques suited to the matrix of charts. We discuss the visual design and interactive features of our GPLOM prototype, including a textual search feature allowing users to quickly locate values or variables by name. We also present a user study that compared performance with Tableau and our GPLOM prototype, that found that GPLOM is significantly faster in certain cases, and not significantly slower in other cases.",
"title": ""
}
] |
[
{
"docid": "40b18b69a3a4011f163d06ef476d9954",
"text": "Potential benefits of using online social network data for clinical studies on depression are tremendous. In this paper, we present a preliminary result on building a research framework that utilizes real-time moods of users captured in the Twitter social network and explore the use of language in describing depressive moods. First, we analyzed a random sample of tweets posted by the general Twitter population during a two-month period to explore how depression is talked about in Twitter. A large number of tweets contained detailed information about depressed feelings, status, as well as treatment history. Going forward, we conducted a study on 69 participants to determine whether the use of sentiment words of depressed users differed from a typical user. We found that the use of words related to negative emotions and anger significantly increased among Twitter users with major depressive symptoms compared to those otherwise. However, no difference was found in the use of words related to positive emotions between the two groups. Our work provides several evidences that online social networks provide meaningful data for capturing depressive moods of users.",
"title": ""
},
{
"docid": "db6e0dff6ba7bd5a0041ef4affe50e9b",
"text": "The flipped voltage follower (FVF), a variant of the common-drain transistor amplifier, comprising local feedback, finds application in circuits such as voltage buffers, current mirrors, class AB amplifiers, frequency compensation circuits and low dropout voltage regulators (LDOs). One of the most important characteristics of the FVF, is its low output impedance. In this tutorial-flavored paper, we perform a theoretical analysis of the transfer function, poles and zeros of the output impedance of the FVF and correlate it with transistor-level simulation results. Utilization of the FVF and its variants has wide application in the analog, mixed-signal and power management circuit design space.",
"title": ""
},
{
"docid": "482ff6c78f7b203125781f5947990845",
"text": "TH1 and TH17 cells mediate neuroinflammation in experimental autoimmune encephalomyelitis (EAE), a mouse model of multiple sclerosis. Pathogenic TH cells in EAE must produce the pro-inflammatory cytokine granulocyte-macrophage colony stimulating factor (GM-CSF). TH cell pathogenicity in EAE is also regulated by cell-intrinsic production of the immunosuppressive cytokine interleukin 10 (IL-10). Here we demonstrate that mice deficient for the basic helix-loop-helix (bHLH) transcription factor Bhlhe40 (Bhlhe40(-/-)) are resistant to the induction of EAE. Bhlhe40 is required in vivo in a T cell-intrinsic manner, where it positively regulates the production of GM-CSF and negatively regulates the production of IL-10. In vitro, GM-CSF secretion is selectively abrogated in polarized Bhlhe40(-/-) TH1 and TH17 cells, and these cells show increased production of IL-10. Blockade of IL-10 receptor in Bhlhe40(-/-) mice renders them susceptible to EAE. These findings identify Bhlhe40 as a critical regulator of autoreactive T-cell pathogenicity.",
"title": ""
},
{
"docid": "2e89bc59f85b14cf40a868399a3ce351",
"text": "CONTEXT\nYouth worldwide play violent video games many hours per week. Previous research suggests that such exposure can increase physical aggression.\n\n\nOBJECTIVE\nWe tested whether high exposure to violent video games increases physical aggression over time in both high- (United States) and low- (Japan) violence cultures. We hypothesized that the amount of exposure to violent video games early in a school year would predict changes in physical aggressiveness assessed later in the school year, even after statistically controlling for gender and previous physical aggressiveness.\n\n\nDESIGN\nIn 3 independent samples, participants' video game habits and physically aggressive behavior tendencies were assessed at 2 points in time, separated by 3 to 6 months.\n\n\nPARTICIPANTS\nOne sample consisted of 181 Japanese junior high students ranging in age from 12 to 15 years. A second Japanese sample consisted of 1050 students ranging in age from 13 to 18 years. The third sample consisted of 364 United States 3rd-, 4th-, and 5th-graders ranging in age from 9 to 12 years. RESULTS. Habitual violent video game play early in the school year predicted later aggression, even after controlling for gender and previous aggressiveness in each sample. Those who played a lot of violent video games became relatively more physically aggressive. Multisample structure equation modeling revealed that this longitudinal effect was of a similar magnitude in the United States and Japan for similar-aged youth and was smaller (but still significant) in the sample that included older youth.\n\n\nCONCLUSIONS\nThese longitudinal results confirm earlier experimental and cross-sectional studies that had suggested that playing violent video games is a significant risk factor for later physically aggressive behavior and that this violent video game effect on youth generalizes across very different cultures. As a whole, the research strongly suggests reducing the exposure of youth to this risk factor.",
"title": ""
},
{
"docid": "58c357c0edd0dfe07ec699d4fba0514b",
"text": "There exist a multitude of execution models available today for a developer to target. The choices vary from general purpose processors to fixed-function hardware accelerators with a large number of variations in-between. There is a growing demand to assess the potential benefits of porting or rewriting an application to a target architecture in order to fully exploit the benefits of performance and/or energy efficiency offered by such targets. However, as a first step of this process, it is necessary to determine whether the application has characteristics suitable for acceleration.\n In this paper, we present Peruse, a tool to characterize the features of loops in an application and to help the programmer understand the amenability of loops for acceleration. We consider a diverse set of features ranging from loop characteristics (e.g., loop exit points) and operation mixes (e.g., control vs data operations) to wider code region characteristics (e.g., idempotency, vectorizability). Peruse is language, architecture, and input independent and uses the intermediate representation of compilers to do the characterization. Using static analyses makes Peruse scalable and enables analysis of large applications to identify and extract interesting loops suitable for acceleration. We show analysis results for unmodified applications from the SPEC CPU benchmark suite, Polybench, and HPC workloads.\n For an end-user it is more desirable to get an estimate of the potential speedup due to acceleration. We use the workload characterization results of Peruse as features and develop a machine-learning based model to predict the potential speedup of a loop when off-loaded to a fixed function hardware accelerator. We use the model to predict the speedup of loops selected by Peruse and achieve an accuracy of 79%.",
"title": ""
},
{
"docid": "acbdb3f3abf3e56807a4e7f60869a2ee",
"text": "In this paper we present a new approach to high quality 3D object reconstruction. Starting from a calibrated sequence of color images, the algorithm is able to reconstruct both the 3D geometry and the texture. The core of the method is based on a deformable model, which defines the framework where texture and silhouette information can be fused. This is achieved by defining two external forces based on the images: a texture driven force and a silhouette driven force. The texture force is computed in two steps: a multi-stereo correlation voting approach and a gradient vector flow diffusion. Due to the high resolution of the voting approach, a multi-grid version of the gradient vector flow has been developed. Concerning the silhouette force, a new formulation of the silhouette constraint is derived. It provides a robust way to integrate the silhouettes in the evolution algorithm. As a consequence, we are able to recover the apparent contours of the model at the end of the iteration process. Finally, a texture map is computed from the original images for the reconstructed 3D model.",
"title": ""
},
{
"docid": "1cb47f75cde728f7ba7c75b54516bc46",
"text": "This paper considers the electrical actuation of aircraft wing surfaces, with particular emphasis on flap systems. It discusses existing hydraulic and electrohydraulic systems and proposes an electrical alternative, examining the potential system benefits in terms of increased functionality, maintenance, and life-cycle costs. This paper then progresses to describe a full-scale actuation demonstrator of the flap system, including the high-speed electrical drive, step-down gearbox, and flaps. Detailed descriptions of the fault-tolerant motor, power electronics, control architecture, and position sensor systems are given, along with a range of test results, demonstrating the system in operation.",
"title": ""
},
{
"docid": "d931f6f9960e8688c2339a27148efe74",
"text": "Most knowledge on the Web is encoded as natural language text, which is convenient for human users but very difficult for software agents to understand. Even with increased use of XML-encoded information, software agents still need to process the tags and literal symbols using application dependent semantics. The Semantic Web offers an approach in which knowledge can be published by and shared among agents using symbols with a well defined, machine-interpretable semantics. The Semantic Web is a “web of data” in that (i) both ontologies and instance data are published in a distributed fashion; (ii) symbols are either ‘literals’ or universally addressable ‘resources’ (URI references) each of which comes with unique semantics; and (iii) information is semi-structured. The Friend-of-a-Friend (FOAF) project (http://www.foafproject.org/) is a good application of the Semantic Web in which users publish their personal profiles by instantiating the foaf:Personclass and adding various properties drawn from any number of ontologies. The Semantic Web’s distributed nature raises significant data access problems – how can an agent discover, index, search and navigate knowledge on the Semantic Web? Swoogle (Dinget al. 2004) was developed to facilitate webscale semantic web data access by providing these services to both human and software agents. It focuses on two levels of knowledge granularity: URI based semantic web vocabulary andsemantic web documents (SWDs), i.e., RDF and OWL documents encoded in XML, NTriples or N3. Figure 1 shows Swoogle’s architecture. The discovery component automatically discovers and revisits SWDs using a set of integrated web crawlers. The digest component computes metadata for SWDs and semantic web terms (SWTs) as well as identifies relations among them, e.g., “an SWD instantiates an SWT class”, and “an SWT class is the domain of an SWT property”. The analysiscomponent uses cached SWDs and their metadata to derive analytical reports, such as classifying ontologies among SWDs and ranking SWDs by their importance. The s rvicecomponent sup-",
"title": ""
},
{
"docid": "a20a03fcb848c310cb966f6e6bc37c86",
"text": "A broad class of problems at the core of computational imaging, sensing, and low-level computer vision reduces to the inverse problem of extracting latent images that follow a prior distribution, from measurements taken under a known physical image formation model. Traditionally, hand-crafted priors along with iterative optimization methods have been used to solve such problems. In this paper we present unrolled optimization with deep priors, a principled framework for infusing knowledge of the image formation into deep networks that solve inverse problems in imaging, inspired by classical iterative methods. We show that instances of the framework outperform the state-of-the-art by a substantial margin for a wide variety of imaging problems, such as denoising, deblurring, and compressed sensing magnetic resonance imaging (MRI). Moreover, we conduct experiments that explain how the framework is best used and why it outperforms previous methods.",
"title": ""
},
{
"docid": "45c3d3a765e565ad3b870b95f934592a",
"text": "This paper describes a fully automated framework to generate realistic head motion, eye gaze, and eyelid motion simultaneously based on live (or recorded) speech input. Its central idea is to learn separate yet interrelated statistical models for each component (head motion, gaze, or eyelid motion) from a prerecorded facial motion data set: 1) Gaussian Mixture Models and gradient descent optimization algorithm are employed to generate head motion from speech features; 2) Nonlinear Dynamic Canonical Correlation Analysis model is used to synthesize eye gaze from head motion and speech features, and 3) nonnegative linear regression is used to model voluntary eye lid motion and log-normal distribution is used to describe involuntary eye blinks. Several user studies are conducted to evaluate the effectiveness of the proposed speech-driven head and eye motion generator using the well-established paired comparison methodology. Our evaluation results clearly show that this approach can significantly outperform the state-of-the-art head and eye motion generation algorithms. In addition, a novel mocap+video hybrid data acquisition technique is introduced to record high-fidelity head movement, eye gaze, and eyelid motion simultaneously.",
"title": ""
},
{
"docid": "a7c9d58c49f1802b94395c6f12c2d6dd",
"text": "Signature-based network intrusion detection systems (NIDSs) have been widely deployed in current network security infrastructure. However, these detection systems suffer from some limitations such as network packet overload, expensive signature matching and massive false alarms in a large-scale network environment. In this paper, we aim to develop an enhanced filter mechanism (named EFM) to comprehensively mitigate these issues, which consists of three major components: a context-aware blacklist-based packet filter, an exclusive signature matching component and a KNN-based false alarm filter. The experiments, which were conducted with two data sets and in a network environment, demonstrate that our proposed EFM can overall enhance the performance of a signaturebased NIDS such as Snort in the aspects of packet filtration, signature matching improvement and false alarm reduction without affecting network security. a 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7c097c95fb50750c082877ab7e277cd9",
"text": "40BAbstract: Disease Intelligence (DI) is based on the acquisition and aggregation of fragmented knowledge of diseases at multiple sources all over the world to provide valuable information to doctors, researchers and information seeking community. Some diseases have their own characteristics changed rapidly at different places of the world and are reported on documents as unrelated and heterogeneous information which may be going unnoticed and may not be quickly available. This research presents an Ontology based theoretical framework in the context of medical intelligence and country/region. Ontology is designed for storing information about rapidly spreading and changing diseases with incorporating existing disease taxonomies to genetic information of both humans and infectious organisms. It further maps disease symptoms to diseases and drug effects to disease symptoms. The machine understandable disease ontology represented as a website thus allows the drug effects to be evaluated on disease symptoms and exposes genetic involvements in the human diseases. Infectious agents which have no known place in an existing classification but have data on genetics would still be identified as organisms through the intelligence of this system. It will further facilitate researchers on the subject to try out different solutions for curing diseases.",
"title": ""
},
{
"docid": "c5f0155b2f6ce35a9cbfa38773042833",
"text": "Leishmaniasis is caused by protozoa of the genus Leishmania, with the presentation restricted to the mucosa being infrequent. Although the nasal mucosa is the main site affected in this form of the disease, it is also possible the involvement of the lips, mouth, pharynx and larynx. The lesions are characteristically ulcerative-vegetative, with granulation tissue formation. Patients usually complain of pain, dysphagia and odynophagia. Differential diagnosis should include cancer, infectious diseases and granulomatous diseases. We present a case of a 64-year-old male patient, coming from an endemic area for American Tegumentary Leishmaniasis (ATL), with a chief complaint of persistent dysphagia and nasal obstruction for 6 months. The lesion was ulcerative with a purulent infiltration into the soft palate and uvula. After excluding other diseases, ATL was suggested as a hypothesis, having been requested serology and biopsy of the lesions. Was started the treatment with pentavalent antimony and the patient presented regression of the lesions in 30 days, with no other complications.",
"title": ""
},
{
"docid": "ce22073b8dbc3a910fa8811a2a8e5c87",
"text": "Ethernet is going to play a major role in automotive communications, thus representing a significant paradigm shift in automotive networking. Ethernet technology will allow for multiple in-vehicle systems (such as, multimedia/infotainment, camera-based advanced driver assistance and on-board diagnostics) to simultaneously access information over a single unshielded twisted pair cable. The leading technology for automotive applications is the IEEE Audio Video Bridging (AVB), which offers several advantages, such as open specification, multiple sources of electronic components, high bandwidth, the compliance with the challenging EMC/EMI automotive requirements, and significant savings on cabling costs, thickness and weight. This paper surveys the state of the art on Ethernet-based automotive communications and especially on the IEEE AVB, with a particular focus on the way to provide support to the so-called scheduled traffic, that is a class of time-sensitive traffic (e.g., control traffic) that is transmitted according to a time schedule.",
"title": ""
},
{
"docid": "d558f980b85bf970a7b57c00df361591",
"text": "URL shortener services today have come to play an important role in our social media landscape. They direct user attention and disseminate information in online social media such as Twitter or Facebook. Shortener services typically provide short URLs in exchange for long URLs. These short URLs can then be shared and diffused by users via online social media, e-mail or other forms of electronic communication. When another user clicks on the shortened URL, she will be redirected to the underlying long URL. Shortened URLs can serve many legitimate purposes, such as click tracking, but can also serve illicit behavior such as fraud, deceit and spam. Although usage of URL shortener services today is ubiquituous, our research community knows little about how exactly these services are used and what purposes they serve. In this paper, we study usage logs of a URL shortener service that has been operated by our group for more than a year. We expose the extent of spamming taking place in our logs, and provide first insights into the planetary-scale of this problem. Our results are relevant for researchers and engineers interested in understanding the emerging phenomenon and dangers of spamming via URL shortener services.",
"title": ""
},
{
"docid": "0d11c687fbf4a0834e753145fec7d7d2",
"text": "A single line feed stacked microstrip antenna for 4G system is presented. The proposed antenna with two properly square patches are stacked. The top patch can perform as a driven element is design on 2.44 GHz and lower patch is also design on 2.44 GHz. The performance of proposed antenna for 4G band frequency (2400-2500 MHz). Also gating the improvement of bandwidth (15%) and antenna efficiency (95%) are very high compared to conventional antenna. Key word — Microstrip patch antenna; stacked, 4G, Antenna efficiency.",
"title": ""
},
{
"docid": "3fe3d1f8b5e141b9044686491fffe12f",
"text": "Data stream is a potentially massive, continuous, rapid sequence of data information. It has aroused great concern and research upsurge in the field of data mining. Clustering is an effective tool of data mining, so data stream clustering will undoubtedly become the focus of the study in data stream mining. In view of the characteristic of the high dimension, dynamic, real-time, many effective data stream clustering algorithms have been proposed. In addition, data stream information are not deterministic and always exist outliers and contain noises, so developing effective data stream clustering algorithm is crucial. This paper reviews the development and trend of data stream clustering and analyzes typical data stream clustering algorithms proposed in recent years, such as Birch algorithm, Local Search algorithm, Stream algorithm and CluStream algorithm. We also summarize the latest research achievements in this field and introduce some new strategies to deal with outliers and noise data. At last, we put forward the focal points and difficulties of future research for data stream clustering.",
"title": ""
},
{
"docid": "133af3ba5310a05ac3bfdaf6178feb6f",
"text": "A new gate drive for high-voltage, high-power IGBT has been developed for the SLAC NLC (Next Linear Collider) Solid State Induction Modulator. This paper describes the design and implementation of a driver that allows an IGBT module rated at 800 A/3300 V to switch up to 3000 A at 2200 V in 3 /spl mu/s with a rate of current rise of more than 10000 A//spl mu/s, while still being short circuit protected. Issues regarding fast turn on, high de-saturation voltage detection, and low short circuit peak current are presented. A novel approach is also used to counter the effect of unequal current sharing between parallel chips inside most high-power IGBT modules. It effectively reduces the collector-emitter peak currents and thus protects the IGBT from being destroyed during soft short circuit conditions at high di/dt.",
"title": ""
},
{
"docid": "1830c839960f8ce9b26c906cc21e2a39",
"text": "This comparative review highlights the relationships between the disciplines of bloodstain pattern analysis (BPA) in forensics and that of fluid dynamics (FD) in the physical sciences. In both the BPA and FD communities, scientists study the motion and phase change of a liquid in contact with air, or with other liquids or solids. Five aspects of BPA related to FD are discussed: the physical forces driving the motion of blood as a fluid; the generation of the drops; their flight in the air; their impact on solid or liquid surfaces; and the production of stains. For each of these topics, the relevant literature from the BPA community and from the FD community is reviewed. Comments are provided on opportunities for joint BPA and FD research, and on the development of novel FD-based tools and methods for BPA. Also, the use of dimensionless numbers is proposed to inform BPA analyses.",
"title": ""
},
{
"docid": "a208f2a2720313479773c00a74b1cbc6",
"text": "I present a web service for querying an embedding of entities in the Wikidata knowledge graph. The embedding is trained on the Wikidata dump using Gensim’s Word2Vec implementation and a simple graph walk. A REST API is implemented. Together with the Wikidata API the web service exposes a multilingual resource for over 600’000 Wikidata items and properties.",
"title": ""
}
] |
scidocsrr
|
47049efc46eda3078c30357036fa2ddf
|
Multiple object identification with passive RFID tags
|
[
{
"docid": "1c7251c55cf0daea9891c8a522bbd3ec",
"text": "The role of computers in the modern office has divided ouractivities between virtual interactions in the realm of thecomputer and physical interactions with real objects within thetraditional office infrastructure. This paper extends previous workthat has attempted to bridge this gap, to connect physical objectswith virtual representations or computational functionality, viavarious types of tags. We discuss a variety of scenarios we haveimplemented using a novel combination of inexpensive, unobtrusiveand easy to use RFID tags, tag readers, portable computers andwireless networking. This novel combination demonstrates theutility of invisibly, seamlessly and portably linking physicalobjects to networked electronic services and actions that arenaturally associated with their form.",
"title": ""
},
{
"docid": "9c751a7f274827e3d8687ea520c6e9a9",
"text": "Radio frequency identification systems with passive tags are powerful tools for object identification. However, if multiple tags are to be identified simultaneously, messages from the tags can collide and cancel each other out. Therefore, multiple read cycles have to be performed in order to achieve a high recognition rate. For a typical stochastic anti-collision scheme, we show how to determine the optimal number of read cycles to perform under a given assurance level determining the acceptable rate of missed tags. This yields an efficient procedure for object identification. We also present results on the performance of an implementation.",
"title": ""
}
] |
[
{
"docid": "2944000757568f330b495ba2a446b0a0",
"text": "In this paper, we propose Deep Alignment Network (DAN), a robust face alignment method based on a deep neural network architecture. DAN consists of multiple stages, where each stage improves the locations of the facial landmarks estimated by the previous stage. Our method uses entire face images at all stages, contrary to the recently proposed face alignment methods that rely on local patches. This is possible thanks to the use of landmark heatmaps which provide visual information about landmark locations estimated at the previous stages of the algorithm. The use of entire face images rather than patches allows DAN to handle face images with large variation in head pose and difficult initializations. An extensive evaluation on two publicly available datasets shows that DAN reduces the state-of-the-art failure rate by up to 70%. Our method has also been submitted for evaluation as part of the Menpo challenge.",
"title": ""
},
{
"docid": "2891ce3327617e9e957488ea21e9a20c",
"text": "Recently, remote healthcare systems have received increasing attention in the last decade, explaining why intelligent systems with physiology signal monitoring for e-health care are an emerging area of development. Therefore, this study adopts a system which includes continuous collection and evaluation of multiple vital signs, long-term healthcare, and a cellular connection to a medical center in emergency case and it transfers all acquired raw data by the internet in normal case. The proposed system can continuously acquire four different physiological signs, for example, ECG, SpO2, temperature, and blood pressure and further relayed them to an intelligent data analysis scheme to diagnose abnormal pulses for exploring potential chronic diseases. The proposed system also has a friendly web-based interface for medical staff to observe immediate pulse signals for remote treatment. Once abnormal event happened or the request to real-time display vital signs is confirmed, all physiological signs will be immediately transmitted to remote medical server through both cellular networks and internet. Also data can be transmitted to a family member's mobile phone or doctor's phone through GPRS. A prototype of such system has been successfully developed and implemented, which will offer high standard of healthcare with a major reduction in cost for our society.",
"title": ""
},
{
"docid": "457f10c4c5d5b748a4f35abd89feb519",
"text": "Document image binarization is an important step in the document image analysis and recognition pipeline. H-DIBCO 2014 is the International Document Image Binarization Competition which is dedicated to handwritten document images organized in conjunction with ICFHR 2014 conference. The objective of the contest is to identify current advances in handwritten document image binarization using meaningful evaluation performance measures. This paper reports on the contest details including the evaluation measures used as well as the performance of the 7 submitted methods along with a short description of each method.",
"title": ""
},
{
"docid": "144bb8e869671843cb5d8053e2ee861d",
"text": "We investigate whether physicians' financial incentives influence health care supply, technology diffusion, and resulting patient outcomes. In 1997, Medicare consolidated the geographic regions across which it adjusts physician payments, generating area-specific price shocks. Areas with higher payment shocks experience significant increases in health care supply. On average, a 2 percent increase in payment rates leads to a 3 percent increase in care provision. Elective procedures such as cataract surgery respond much more strongly than less discretionary services. Non-radiologists expand their provision of MRIs, suggesting effects on technology adoption. We estimate economically small health impacts, albeit with limited precision.",
"title": ""
},
{
"docid": "39a59eac80c6f4621971399dde2fbb7f",
"text": "Social media sites such as Flickr, YouTube, and Facebook host substantial amounts of user-contributed materials (e.g., photographs, videos, and textual content) for a wide variety of real-world events. These range from widely known events, such as the presidential inauguration, to smaller, community-specific events, such as annual conventions and local gatherings. By identifying these events and their associated user-contributed social media documents, which is the focus of this paper, we can greatly improve local event browsing and search in state-of-the-art search engines. To address our problem of focus, we exploit the rich “context” associated with social media content, including user-provided annotations (e.g., title, tags) and automatically generated information (e.g., content creation time). We form a variety of representations of social media documents using different context dimensions, and combine these dimensions in a principled way into a single clustering solution—where each document cluster ideally corresponds to one event—using a weighted ensemble approach. We evaluate our approach on a large-scale, real-world dataset of event images, and report promising performance with respect to several baseline approaches. Our preliminary experiments suggest that our ensemble approach identifies events, and their associated images, more effectively than the state-of-the-art strategies on which we build.",
"title": ""
},
{
"docid": "47bae1df7bc512e8a458122892e145f8",
"text": "This paper presents an inertial-measurement-unit-based pen (IMUPEN) and its associated trajectory reconstruction algorithm for motion trajectory reconstruction and handwritten digit recognition applications. The IMUPEN is composed of a triaxial accelerometer, two gyroscopes, a microcontroller, and an RF wireless transmission module. Users can hold the IMUPEN to write numerals or draw simple symbols at normal speed. During writing or drawing movements, the inertial signals generated for the movements are transmitted to a computer via the wireless module. A trajectory reconstruction algorithm composed of the procedures of data collection, signal preprocessing, and trajectory reconstruction has been developed for reconstructing the trajectories of movements. In order to minimize the cumulative errors caused by the intrinsic noise/drift of sensors, we have developed an orientation error compensation method and a multiaxis dynamic switch. The advantages of the IMUPEN include the following: 1) It is portable and can be used anywhere without any external reference device or writing ambit limitations, and 2) its trajectory reconstruction algorithm can reduce orientation and integral errors effectively and thus can reconstruct the trajectories of movements accurately. Our experimental results on motion trajectory reconstruction and handwritten digit recognition have successfully validated the effectiveness of the IMUPEN and its trajectory reconstruction algorithm.",
"title": ""
},
{
"docid": "992d71459b616bfe72845493a6f8f910",
"text": "Finding patterns and trends in spatial and temporal datasets has been a long studied problem in statistics and different domains of science. This paper presents a visual analytics approach for the interactive exploration and analysis of spatiotemporal correlations among multivariate datasets. Our approach enables users to discover correlations and explore potentially causal or predictive links at different spatiotemporal aggregation levels among the datasets, and allows them to understand the underlying statistical foundations that precede the analysis. Our technique utilizes the Pearson's product-moment correlation coefficient and factors in the lead or lag between different datasets to detect trends and periodic patterns amongst them.",
"title": ""
},
{
"docid": "c2a2c29b03ee90558325df7461124092",
"text": "Effective thermal conductivity of mixtures of uids and nanometer-size particles is measured by a steady-state parallel-plate method. The tested uids contain two types of nanoparticles, Al2O3 and CuO, dispersed in water, vacuum pump uid, engine oil, and ethylene glycol. Experimental results show that the thermal conductivities of nanoparticle– uid mixtures are higher than those of the base uids. Using theoretical models of effective thermal conductivity of a mixture, we have demonstrated that the predicted thermal conductivities of nanoparticle– uid mixtures are much lower than our measured data, indicating the de ciency in the existing models when used for nanoparticle– uid mixtures. Possible mechanisms contributing to enhancement of the thermal conductivity of the mixtures are discussed. A more comprehensive theory is needed to fully explain the behavior of nanoparticle– uid mixtures.",
"title": ""
},
{
"docid": "1278d0b3ea3f06f52b2ec6b20205f8d0",
"text": "The future global Internet is going to have to cater to users that will be largely mobile. Mobility is one of the main factors affecting the design and performance of wireless networks. Mobility modeling has been an active field for the past decade, mostly focusing on matching a specific mobility or encounter metric with little focus on matching protocol performance. This study investigates the adequacy of existing mobility models in capturing various aspects of human mobility behavior (including communal behavior), as well as network protocol performance. This is achieved systematically through the introduction of a framework that includes a multi-dimensional mobility metric space. We then introduce COBRA, a new mobility model capable of spanning the mobility metric space to match realistic traces. A methodical analysis using a range of protocol (epidemic, spraywait, Prophet, and Bubble Rap) dependent and independent metrics (modularity) of various mobility models (SMOOTH and TVC) and traces (university campuses, and theme parks) is done. Our results indicate significant gaps in several metric dimensions between real traces and existing mobility models. Our findings show that COBRA matches communal aspect and realistic protocol performance, reducing the overhead gap (w.r.t existing models) from 80% to less than 12%, showing the efficacy of our framework.",
"title": ""
},
{
"docid": "e28c2662f3948d346a00298976d9b37c",
"text": "Analysts engaged in real-time monitoring of cybersecurity incidents must quickly and accurately respond to alerts generated by intrusion detection systems. We investigated two complementary approaches to improving analyst performance on this vigilance task: a graph-based visualization of correlated IDS output and defensible recommendations based on machine learning from historical analyst behavior. We tested our approach with 18 professional cybersecurity analysts using a prototype environment in which we compared the visualization with a conventional tabular display, and the defensible recommendations with limited or no recommendations. Quantitative results showed improved analyst accuracy with the visual display and the defensible recommendations. Additional qualitative data from a \"talk aloud\" protocol illustrated the role of displays and recommendations in analysts' decision-making process. Implications for the design of future online analysis environments are discussed.",
"title": ""
},
{
"docid": "50c762b9e01347df5be904c311e42548",
"text": "This paper introduces redundant spin-transfer-torque (STT) magnetic tunnel junction (MTJ) based nonvolatile flip-flops (NVFFs) for low write-error rate (WER) operations. STT-MTJ NVFFs are key components for ultra-low power VLSI systems thanks to zero standby current, but suffers from write errors due to probabilistic switching, causing a failure backup/restore operation. To reduce the WER, redundant STT-MTJ devices are exploited in the proposed NVFFs. As one-bit information is redundantly represented, it is correctly stored upon a few bit write errors, lowering WERs compared to a conventional NVFF at the same write time. Three different redundant structures are presented and discussed in terms of WER and write energy dissipation. For performance comparisons, the proposed redundant STT-MTJ NVFFs are designed using hybrid 90nm CMOS and MTJ technologies and evaluated using NSSPICE that handles both transistors and MTJs. The simulation results show that the proposed NVFF reduces the write time to 36.2% and the write energy to 70.7% at a WER of 10-12 compared to the conventional NVFF.",
"title": ""
},
{
"docid": "4a9ad387ad16727d9ac15ac667d2b1c3",
"text": "In recent years face recognition has received substantial attention from both research communities and the market, but still remained very challenging in real applications. A lot of face recognition algorithms, along with their modifications, have been developed during the past decades. A number of typical algorithms are presented, being categorized into appearancebased and model-based schemes. For appearance-based methods, three linear subspace analysis schemes are presented, and several non-linear manifold analysis approaches for face recognition are briefly described. The model-based approaches are introduced, including Elastic Bunch Graph matching, Active Appearance Model and 3D Morphable Model methods. A number of face databases available in the public domain and several published performance evaluation results are digested. Future research directions based on the current recognition results are pointed out.",
"title": ""
},
{
"docid": "31fb6df8d386f28b63140ee2ad8d11ea",
"text": "The problem and the solution.The majority of the literature on creativity has focused on the individual, yet the social environment can influence both the level and frequency of creative behavior. This article reviews the literature for factors related to organizational culture and climate that act as supports and impediments to organizational creativity and innovation. The work of Amabile, Kanter, Van de Ven, Angle, and others is reviewed and synthesized to provide an integrative understanding of the existing literature. Implications for human resource development research and practice are discussed.",
"title": ""
},
{
"docid": "3d911d6eeefefd16f898200da0e1a3ef",
"text": "We introduce Reality-based User Interface System (RUIS), a virtual reality (VR) toolkit aimed for students and hobbyists, which we have used in an annually organized VR course for the past four years. RUIS toolkit provides 3D user interface building blocks for creating immersive VR applications with spatial interaction and stereo 3D graphics, while supporting affordable VR peripherals like Kinect, PlayStation Move, Razer Hydra, and Oculus Rift. We describe a novel spatial interaction scheme that combines freeform, full-body interaction with traditional video game locomotion, which can be easily implemented with RUIS. We also discuss the specific challenges associated with developing VR applications, and how they relate to the design principles behind RUIS. Finally, we validate our toolkit by comparing development difficulties experienced by users of different software toolkits, and by presenting several VR applications created with RUIS, demonstrating a variety of spatial user interfaces that it can produce.",
"title": ""
},
{
"docid": "1c117c63455c2b674798af0e25e3947c",
"text": "We are studying the manufacturing performance of semiconductor wafer fabrication plants in the US, Asia, and Europe. There are great similarities in production equipment, manufacturing processes, and products produced at semiconductor fabs around the world. However, detailed comparisons over multi-year intervals show that important quantitative indicators of productivity, including defect density (yield), major equipment production rates, wafer throughput time, and effective new process introduction to manufacturing, vary by factors of 3 to as much as 5 across an international sample of 28 fabs. We conduct on-site observations, and interviews with manufacturing personnel at all levels from operator to general manager, to better understand reasons for the observed wide variations in performance. We have identified important factors in the areas of information systems, organizational practices, process and technology improvements, and production control that correlate strongly with high productivity. Optimum manufacturing strategy is different for commodity products, high-value proprietary products, and foundry business.",
"title": ""
},
{
"docid": "df2bc3dce076e3736a195384ae6c9902",
"text": "In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it.",
"title": ""
},
{
"docid": "83ee7b71813ead9656e2972e700ade24",
"text": "In many visual domains (like fashion, furniture, etc.) the search for products on online platforms requires matching textual queries to image content. For example, the user provides a search query in natural language (e.g.,pink floral top) and the results obtained are of a different modality (e.g., the set of images of pink floral tops). Recent work on multimodal representation learning enables such cross-modal matching by learning a common representation space for text and image. While such representations ensure that the n-dimensional representation of pink floral top is very close to representation of corresponding images, they do not ensure that the first k1 (< n) dimensions correspond to color, the next k2 (< n) correspond to style and so on. In other words, they learn entangled representations where each dimension does not correspond to a specific attribute. We propose two simple variants which can learn disentangled common representations for the fashion domain wherein each dimension would correspond to a specific attribute (color, style, silhoutte, etc.). Our proposed variants can be integrated with any existing multimodal representation learning method. We use a large fashion dataset of over 700K fashion items crawled from multiple fashion e-commerce portals to evaluate the learned representations on four different applications from the fashion domain, namely, cross-modal image retrieval, visual search, image tagging, and query expansion. Our experimental results show that the proposed variants lead to better performance for each of these applications while learning disentangled representations.",
"title": ""
},
{
"docid": "cea9c1bab28363fc6f225b7843b8df99",
"text": "Published in Agron. J. 104:1336–1347 (2012) Posted online 29 June 2012 doi:10.2134/agronj2012.0065 Copyright © 2012 by the American Society of Agronomy, 5585 Guilford Road, Madison, WI 53711. All rights reserved. No part of this periodical may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. T leaf area index (LAI), the ratio of leaf area to ground area, typically reported as square meters per square meter, is a commonly used biophysical characteristic of vegetation (Watson, 1947). The LAI can be subdivided into photosynthetically active and photosynthetically inactive components. The former, the gLAI, is a metric commonly used in climate (e.g., Buermann et al., 2001), ecological (e.g., Bulcock and Jewitt, 2010), and crop yield (e.g., Fang et al., 2011) models. Because of its wide use and applicability to modeling, there is a need for a nondestructive remote estimation of gLAI across large geographic areas. Various techniques based on remotely sensed data have been utilized for assessing gLAI (see reviews by Pinter et al., 2003; Hatfield et al., 2004, 2008; Doraiswamy et al., 2003; le Maire et al., 2008, and references therein). Vegetation indices, particularly the NDVI (Rouse et al., 1974) and SR (Jordan, 1969), are the most widely used. The NDVI, however, is prone to saturation at moderate to high gLAI values (Kanemasu, 1974; Curran and Steven, 1983; Asrar et al., 1984; Huete et al., 2002; Gitelson, 2004; Wu et al., 2007; González-Sanpedro et al., 2008) and requires reparameterization for different crops and species. The saturation of NDVI has been attributed to insensitivity of reflectance in the red region at moderate to high gLAI values due to the high absorption coefficient of chlorophyll. For gLAI below 3 m2/m2, total absorption by a canopy in the red range reaches 90 to 95%, and further increases in gLAI do not bring additional changes in absorption and reflectance (Hatfield et al., 2008; Gitelson, 2011). Another reason for the decrease in the sensitivity of NDVI to moderate to high gLAI values is the mathematical formulation of that index. At moderate to high gLAI, the NDVI is dominated by nearinfrared (NIR) reflectance. Because scattering by the cellular or leaf structure causes the NIR reflectance to be high and the absorption by chlorophyll causes the red reflectance to be low, NIR reflectance is considerably greater than red reflectance: e.g., for gLAI >3 m2/m2, NIR reflectance is >40% while red reflectance is <5%. Thus, NDVI becomes insensitive to changes in both red and NIR reflectance. Other commonly used VIs include the Enhanced Vegetation Index, EVI (Liu and Huete, 1995; Huete et al., 1997, 2002), its ABStrAct",
"title": ""
},
{
"docid": "fba7801d0b187a9a5fbb00c9d4690944",
"text": "Acute pulmonary embolism (PE) poses a significant burden on health and survival. Its severity ranges from asymptomatic, incidentally discovered subsegmental thrombi to massive, pressor-dependent PE complicated by cardiogenic shock and multisystem organ failure. Rapid and accurate risk stratification is therefore of paramount importance to ensure the highest quality of care. This article critically reviews currently available and emerging tools for risk-stratifying acute PE, and particularly for distinguishing between elevated (intermediate) and low risk among normotensive patients. We focus on the potential value of risk assessment strategies for optimizing severity-adjusted management. Apart from reviewing the current evidence on advanced early therapy of acute PE (thrombolysis, surgery, catheter interventions, vena cava filters), we discuss recent advances in oral anticoagulation with vitamin K antagonists, and with new direct inhibitors of factor Xa and thrombin, which may contribute to profound changes in the treatment and secondary prophylaxis of venous thrombo-embolism in the near future.",
"title": ""
},
{
"docid": "63063c0a2b08f068c11da6d80236fa87",
"text": "This paper addresses the problem of hallucinating the missing high-resolution (HR) details of a low-resolution (LR) video while maintaining the temporal coherence of the hallucinated HR details by using dynamic texture synthesis (DTS). Most existing multi-frame-based video super-resolution (SR) methods suffer from the problem of limited reconstructed visual quality due to inaccurate sub-pixel motion estimation between frames in a LR video. To achieve high-quality reconstruction of HR details for a LR video, we propose a texture-synthesis-based video super-resolution method, in which a novel DTS scheme is proposed to render the reconstructed HR details in a time coherent way, so as to effectively address the temporal incoherence problem caused by traditional texture synthesis based image SR methods. To further reduce the complexity of the proposed method, our method only performs the DTS-based SR on a selected set of key-frames, while the HR details of the remaining non-key-frames are simply predicted using the bi-directional overlapped block motion compensation. Experimental results demonstrate that the proposed method achieves significant subjective and objective quality improvement over state-of-the-art video SR methods.",
"title": ""
}
] |
scidocsrr
|
d2c5e7e28483513056efb2c69fc35df9
|
SQL-IDS: a specification-based approach for SQL-injection detection
|
[
{
"docid": "d1f771fd1b0f8e5d91bbf65bc19aeb54",
"text": "Web-based systems are often a composition of infrastructure components, such as web servers and databases, and of applicationspecific code, such as HTML-embedded scripts and server-side applications. While the infrastructure components are usually developed by experienced programmers with solid security skills, the application-specific code is often developed under strict time constraints by programmers with little security training. As a result, vulnerable web-applications are deployed and made available to the Internet at large, creating easilyexploitable entry points for the compromise of entire networks. Web-based applications often rely on back-end database servers to manage application-specific persistent state. The data is usually extracted by performing queries that are assembled using input provided by the users of the applications. If user input is not sanitized correctly, it is possible to mount a variety of attacks that leverage web-based applications to compromise the security of back-end databases. Unfortunately, it is not always possible to identify these attacks using signature-based intrusion detection systems, because of the ad hoc nature of many web-based applications. Signatures are rarely written for this class of applications due to the substantial investment of time and expertise this would require. We have developed an anomaly-based system that learns the profiles of the normal database access performed by web-based applications using a number of different models. These models allow for the detection of unknown attacks with reduced false positives and limited overhead. In addition, our solution represents an improvement with respect to previous approaches because it reduces the possibility of executing SQL-based mimicry attacks.",
"title": ""
},
{
"docid": "5025766e66589289ccc31e60ca363842",
"text": "The use of web applications has become increasingly popular in our routine activities, such as reading the news, paying bills, and shopping on-line. As the availability of these services grows, we are witnessing an increase in the number and sophistication of attacks that target them. In particular, SQL injection, a class of code-injection attacks in which specially crafted input strings result in illegal queries to a database, has become one of the most serious threats to web applications. In this paper we present and evaluate a new technique for detecting and preventing SQL injection attacks. Our technique uses a model-based approach to detect illegal queries before they are executed on the database. In its static part, the technique uses program analysis to automatically build a model of the legitimate queries that could be generated by the application. In its dynamic part, the technique uses runtime monitoring to inspect the dynamically-generated queries and check them against the statically-built model. We developed a tool, AMNESIA, that implements our technique and used the tool to evaluate the technique on seven web applications. In the evaluation we targeted the subject applications with a large number of both legitimate and malicious inputs and measured how many attacks our technique detected and prevented. The results of the study show that our technique was able to stop all of the attempted attacks without generating any false positives.",
"title": ""
}
] |
[
{
"docid": "e58036f93195603cb7dc7265b9adeb25",
"text": "Pseudomonas aeruginosa thrives in many aqueous environments and is an opportunistic pathogen that can cause both acute and chronic infections. Environmental conditions and host defenses cause differing stresses on the bacteria, and to survive in vastly different environments, P. aeruginosa must be able to adapt to its surroundings. One strategy for bacterial adaptation is to self-encapsulate with matrix material, primarily composed of secreted extracellular polysaccharides. P. aeruginosa has the genetic capacity to produce at least three secreted polysaccharides; alginate, Psl, and Pel. These polysaccharides differ in chemical structure and in their biosynthetic mechanisms. Since alginate is often associated with chronic pulmonary infections, its biosynthetic pathway is the best characterized. However, alginate is only produced by a subset of P. aeruginosa strains. Most environmental and other clinical isolates secrete either Pel or Psl. Little information is available on the biosynthesis of these polysaccharides. Here, we review the literature on the alginate biosynthetic pathway, with emphasis on recent findings describing the structure of alginate biosynthetic proteins. This information combined with the characterization of the domain architecture of proteins encoded on the Psl and Pel operons allowed us to make predictive models for the biosynthesis of these two polysaccharides. The results indicate that alginate and Pel share certain features, including some biosynthetic proteins with structurally or functionally similar properties. In contrast, Psl biosynthesis resembles the EPS/CPS capsular biosynthesis pathway of Escherichia coli, where the Psl pentameric subunits are assembled in association with an isoprenoid lipid carrier. These models and the environmental cues that cause the cells to produce predominantly one polysaccharide over the others are subjects of current investigation.",
"title": ""
},
{
"docid": "188ab32548b91fd1bf1edf34ff3d39d9",
"text": "With the marvelous development of wireless techniques and ubiquitous deployment of wireless systems indoors, myriad indoor location-based services (ILBSs) have permeated into numerous aspects of modern life. The most fundamental functionality is to pinpoint the location of the target via wireless devices. According to how wireless devices interact with the target, wireless indoor localization schemes roughly fall into two categories: device based and device free. In device-based localization, a wireless device (e.g., a smartphone) is attached to the target and computes its location through cooperation with other deployed wireless devices. In device-free localization, the target carries no wireless devices, while the wireless infrastructure deployed in the environment determines the target’s location by analyzing its impact on wireless signals.\n This article is intended to offer a comprehensive state-of-the-art survey on wireless indoor localization from the device perspective. In this survey, we review the recent advances in both modes by elaborating on the underlying wireless modalities, basic localization principles, and data fusion techniques, with special emphasis on emerging trends in (1) leveraging smartphones to integrate wireless and sensor capabilities and extend to the social context for device-based localization, and (2) extracting specific wireless features to trigger novel human-centric device-free localization. We comprehensively compare each scheme in terms of accuracy, cost, scalability, and energy efficiency. Furthermore, we take a first look at intrinsic technical challenges in both categories and identify several open research issues associated with these new challenges.",
"title": ""
},
{
"docid": "bbd64fe2f05e53ca14ad1623fe51cd1c",
"text": "Virtual assistants are the cutting edge of end user interaction, thanks to endless set of capabilities across multiple services. The natural language techniques thus need to be evolved to match the level of power and sophistication that users expect from virtual assistants. In this report we investigate an existing deep learning model for semantic parsing, and we apply it to the problem of converting natural language to trigger-action programs for the Almond virtual assistant. We implement a one layer seq2seq model with attention layer, and experiment with grammar constraints and different RNN cells. We take advantage of its existing dataset and we experiment with different ways to extend the training set. Our parser shows mixed results on the different Almond test sets, performing better than the state of the art on synthetic benchmarks by about 10% but poorer on realistic user data by about 15%. Furthermore, our parser is shown to be extensible to generalization, as well as or better than the current system employed by Almond.",
"title": ""
},
{
"docid": "38935c773fb3163a1841fcec62b3e15a",
"text": "We investigate how neural networks can learn and process languages with hierarchical, compositional semantics. To this end, we define the artificial task of processing nested arithmetic expressions, and study whether different types of neural networks can learn to compute their meaning. We find that recursive neural networks can implement a generalising solution to this problem, and we visualise this solution by breaking it up in three steps: project, sum and squash. As a next step, we investigate recurrent neural networks, and show that a gated recurrent unit, that processes its input incrementally, also performs very well on this task: the network learns to predict the outcome of the arithmetic expressions with high accuracy, although performance deteriorates somewhat with increasing length. To develop an understanding of what the recurrent network encodes, visualisation techniques alone do not suffice. Therefore, we develop an approach where we formulate and test multiple hypotheses on the information encoded and processed by the network. For each hypothesis, we derive predictions about features of the hidden state representations at each time step, and train ‘diagnostic classifiers’ to test those predictions. Our results indicate that the networks follow a strategy similar to our hypothesised ‘cumulative strategy’, which explains the high accuracy of the network on novel expressions, the generalisation to longer expressions than seen in training, and the mild deterioration with increasing length. This is turn shows that diagnostic classifiers can be a useful technique for opening up the black box of neural networks. We argue that diagnostic classification, unlike most visualisation techniques, does scale up from small networks in a toy domain, to larger and deeper recurrent networks dealing with real-life data, and may therefore contribute to a better understanding of the internal dynamics of current state-of-the-art models in natural language processing.",
"title": ""
},
{
"docid": "bd8b0a2b060594d8513f43fbfe488443",
"text": "Part 1 of the paper presents the detection and sizing capability based on image display of sectorial scan. Examples are given for different types of weld defects: toe cracks, internal porosity, side-wall lack of fusion, underbead crack, inner-surface breaking cracks, slag inclusions, incomplete root penetration and internal cracks. Based on combination of S-scan and B-scan plotted into 3-D isometric part, the defect features could be reconstructed and measured into a draft package. Comparison between plotted data and actual defect sizes are also presented.",
"title": ""
},
{
"docid": "a0a73cc2b884828eb97ff8045bfe50a6",
"text": "A variety of antennas have been engineered with metamaterials (MTMs) and metamaterial-inspired constructs to improve their performance characteristics. Examples include electrically small, near-field resonant parasitic (NFRP) antennas that require no matching network and have high radiation efficiencies. Experimental verification of their predicted behaviors has been obtained. Recent developments with this NFRP electrically small paradigm will be reviewed. They include considerations of increased bandwidths, as well as multiband and multifunctional extensions.",
"title": ""
},
{
"docid": "64a345ae00db3b84fb254725bf14edb7",
"text": "The research interest in unmanned aerial vehicles (UAV) has grown rapidly over the past decade. UAV applications range from purely scientific over civil to military. Technical advances in sensor and signal processing technologies enable the design of light weight and economic airborne platforms. This paper presents a complete mechatronic design process of a quadrotor UAV, including mechanical design, modeling of quadrotor and actuator dynamics and attitude stabilization control. Robust attitude estimation is achieved by fusion of low-cost MEMS accelerometer and gyroscope signals with a Kalman filter. Experiments with a gimbal mounted quadrotor testbed allow a quantitative analysis and comparision of the PID and Integral-Backstepping (IB) controller design for attitude stabilization with respect to reference signal tracking, disturbance rejection and robustness.",
"title": ""
},
{
"docid": "6097315ac2e4475e8afd8919d390babf",
"text": "This paper presents an origami-inspired technique which allows the application of 2-D fabrication methods to build 3-D robotic systems. The ability to design robots as origami structures introduces a fast and low-cost fabrication method to modern, real-world robotic applications. We employ laser-machined origami patterns to build a new class of robotic systems for mobility and manipulation. Origami robots use only a flat sheet as the base structure for building complicated bodies. An arbitrarily complex folding pattern can be used to yield an array of functionalities, in the form of actuated hinges or active spring elements. For actuation, we use compact NiTi coil actuators placed on the body to move parts of the structure on-demand. We demonstrate, as a proof-of-concept case study, the end-to-end fabrication and assembly of a simple mobile robot that can undergo worm-like peristaltic locomotion.",
"title": ""
},
{
"docid": "dc71729ebd3c2a66c73b16685c8d12af",
"text": "A list of related materials, with annotations to guide further exploration of the article's ideas and applications 11 Further Reading A company's bid to rally an industry ecosystem around a new competitive view is an uncertain gambit. But the right strategic approaches and the availability of modern digital infrastructures improve the odds for success.",
"title": ""
},
{
"docid": "6384a691d3b50e252ab76a61e28f012e",
"text": "We study the algorithmics of information structure design --- a.k.a. persuasion or signaling --- in a fundamental special case introduced by Arieli and Babichenko: multiple agents, binary actions, and no inter-agent externalities. Unlike prior work on this model, we allow many states of nature. We assume that the principal's objective is a monotone set function, and study the problem both in the public signal and private signal models, drawing a sharp contrast between the two in terms of both efficacy and computational complexity.\n When private signals are allowed, our results are largely positive and quite general. First, we use linear programming duality and the equivalence of separation and optimization to show polynomial-time equivalence between (exactly) optimal signaling and the problem of maximizing the objective function plus an additive function. This yields an efficient implementation of the optimal scheme when the objective is supermodular or anonymous. Second, we exhibit a (1-1/e)-approximation of the optimal private signaling scheme, modulo an additive loss of ε, when the objective function is submodular. These two results simplify, unify, and generalize results of [Arieli and Babichenko, 2016] and [Babichenko and Barman, 2016], extending them from a binary state of nature to many states (modulo the additive loss in the latter result). Third, we consider the binary-state case with a submodular objective, and simplify and slightly strengthen the result of [Babichenko and Barman, 2016] to obtain a (1-1/e)-approximation via a scheme which (i) signals independently to each receiver and (ii) is \"oblivious\" in that it does not depend on the objective function so long as it is monotone submodular.\n When only a public signal is allowed, our results are negative. First, we show that it is NP-hard to approximate the optimal public scheme, within any constant factor, even when the objective is additive. Second, we show that the optimal private scheme can outperform the optimal public scheme, in terms of maximizing the sender's objective, by a polynomial factor.",
"title": ""
},
{
"docid": "104c9ef558234250d56ef941f09d6a7c",
"text": "The first of these questions is in the province of sensory physiology, and is the only one for which appreciable understanding has been achieved. This article will be concerned primarily with the second and third questions, which are still subject to a vast amount of speculation, and where the few relevant facts currently supplied by neurophysiology have not yet been integrated into an acceptable theory. With regard to the second question, two alternative positions have been maintained. The first suggests that storage of sensory information is in the form of coded representations or images, with some sort of one-to-one mapping between the sensory stimulus",
"title": ""
},
{
"docid": "ca94b1bb1f4102ed6b4506441b2431fc",
"text": "It is often a difficult task to accurately segment images with intensity inhomogeneity, because most of representative algorithms are region-based that depend on intensity homogeneity of the interested object. In this paper, we present a novel level set method for image segmentation in the presence of intensity inhomogeneity. The inhomogeneous objects are modeled as Gaussian distributions of different means and variances in which a sliding window is used to map the original image into another domain, where the intensity distribution of each object is still Gaussian but better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying a bias field with the original signal within the window. A maximum likelihood energy functional is then defined on the whole image region, which combines the bias field, the level set function, and the piecewise constant function approximating the true image signal. The proposed level set method can be directly applied to simultaneous segmentation and bias correction for 3 and 7T magnetic resonance images. Extensive evaluation on synthetic and real-images demonstrate the superiority of the proposed method over other representative algorithms.",
"title": ""
},
{
"docid": "322f6321bc34750344064d474206fddb",
"text": "BACKGROUND AND PURPOSE\nThis study was undertaken to elucidate whether and how age influences stroke outcome.\n\n\nMETHODS\nThis prospective and community-based study comprised 515 consecutive acute stroke patients. Computed tomographic scan was performed in 79% of patients. Activities of daily living (ADL) and neurological status were assessed weekly during hospital stay using the Barthel Index (BI) and the Scandinavian Stroke Scale (SSS), respectively. Information regarding social condition and comorbidity before stroke was also registered. A multiple regression model was used to analyze the independent influence of age on stroke outcome.\n\n\nRESULTS\nAge was not related to the type of stroke lesion or infarct size. However, age independently influenced initial BI (-4 points per 10 years, P < .01), initial SSS (-2 points per 10 years, P = .01), and discharge BI (-3 points per 10 years, P < .01). No independent influence of age was found regarding mortality within 3 months, discharge SSS, length of hospital stay, and discharge placement. ADL improvement was influenced independently by age (-3 points per 10 years, P < .01), whereas age had no influence on neurological improvement or on speed of recovery.\n\n\nCONCLUSIONS\nAge independently influences stroke outcome selectively in ADL-related aspects (BI) but not in neurological aspects (SSS), suggesting a poorer compensatory ability in elderly stroke patients. Therefore, rehabilitation of elderly stroke patients should be focused more on ADL and compensation rather than on the recovery of neurological status, and age itself should not be a selection criterion for rehabilitation.",
"title": ""
},
{
"docid": "7448b45dd5809618c3b6bb667cb1004f",
"text": "We first provide criteria for assessing informed consent online. Then we examine how cookie technology and Web browser designs have responded to concerns about informed consent. Specifically, we document relevant design changes in Netscape Navigator and Internet Explorer over a 5-year period, starting in 1995. Our retrospective analyses leads us to conclude that while cookie technology has improved over time regarding informed consent, some startling problems remain. We specify six of these problems and offer design remedies. This work fits within the emerging field of Value-Sensitive Design.",
"title": ""
},
{
"docid": "4e8c39eaa7444158a79573481b80a77f",
"text": "Image patch classification is an important task in many different medical imaging applications. In this work, we have designed a customized Convolutional Neural Networks (CNN) with shallow convolution layer to classify lung image patches with interstitial lung disease (ILD). While many feature descriptors have been proposed over the past years, they can be quite complicated and domain-specific. Our customized CNN framework can, on the other hand, automatically and efficiently learn the intrinsic image features from lung image patches that are most suitable for the classification purpose. The same architecture can be generalized to perform other medical image or texture classification tasks.",
"title": ""
},
{
"docid": "5fd2d67291f7957eee20495c5baeb1ef",
"text": "Many interesting real-world textures are inhomogeneous and/or anisotropic. An inhomogeneous texture is one where various visual properties exhibit significant changes across the texture’s spatial domain. Examples include perceptible changes in surface color, lighting, local texture pattern and/or its apparent scale, and weathering effects, which may vary abruptly, or in a continuous fashion. An anisotropic texture is one where the local patterns exhibit a preferred orientation, which also may vary across the spatial domain. While many example-based texture synthesis methods can be highly effective when synthesizing uniform (stationary) isotropic textures, synthesizing highly non-uniform textures, or ones with spatially varying orientation, is a considerably more challenging task, which so far has remained underexplored. In this paper, we propose a new method for automatic analysis and controlled synthesis of such textures. Given an input texture exemplar, our method generates a source guidance map comprising: (i) a scalar progression channel that attempts to capture the low frequency spatial changes in color, lighting, and local pattern combined, and (ii) a direction field that captures the local dominant orientation of the texture. Having augmented the texture exemplar with this guidance map, users can exercise better control over the synthesized result by providing easily specified target guidance maps, which are used to constrain the synthesis process.",
"title": ""
},
{
"docid": "763372dc4ebc2cd972a5b851be014bba",
"text": "Parametric piecewise-cubic functions are used throughout the computer graphics industry to represent curved shapes. For many applications, it would be useful to be able to reliably derive this representation from a closely spaced set of points that approximate the desired curve, such as the input from a digitizing tablet or a scanner. This paper presents a solution to the problem of automatically generating efficient piecewise parametric cubic polynomial approximations to shapes from sampled data. We have developed an algorithm that takes a set of sample points, plus optional endpoint and tangent vector specifications, and iteratively derives a single parametric cubic polynomial that lies close to the data points as defined by an error metric based on least-squares. Combining this algorithm with dynamic programming techniques to determine the knot placement gives good results over a range of shapes and applications.",
"title": ""
},
{
"docid": "221541e0ef8cf6cd493843fd53257a62",
"text": "Content-based shape retrieval techniques can facilitate 3D model resource reuse, 3D model modeling, object recognition, and 3D content classification. Recently more and more researchers have attempted to solve the problems of partial retrieval in the domain of computer graphics, vision, CAD, and multimedia. Unfortunately, in the literature, there is little comprehensive discussion on the state-of-the-art methods of partial shape retrieval. In this article we focus on reviewing the partial shape retrieval methods over the last decade, and help novices to grasp latest developments in this field. We first give the definition of partial retrieval and discuss its desirable capabilities. Secondly, we classify the existing methods on partial shape retrieval into three classes by several criteria, describe the main ideas and techniques for each class, and detailedly compare their advantages and limits. We also present several relevant 3D datasets and corresponding evaluation metrics, which are necessary for evaluating partial retrieval performance. Finally, we discuss possible research directions to address partial shape retrieval.",
"title": ""
},
{
"docid": "bf14f996f9013351aca1e9935157c0e3",
"text": "Attributed graphs are becoming important tools for modeling information networks, such as the Web and various social networks (e.g. Facebook, LinkedIn, Twitter). However, it is computationally challenging to manage and analyze attributed graphs to support effective decision making. In this paper, we propose, Pagrol, a parallel graph OLAP (Online Analytical Processing) system over attributed graphs. In particular, Pagrol introduces a new conceptual Hyper Graph Cube model (which is an attributed-graph analogue of the data cube model for relational DBMS) to aggregate attributed graphs at different granularities and levels. The proposed model supports different queries as well as a new set of graph OLAP Roll-Up/Drill-Down operations. Furthermore, on the basis of Hyper Graph Cube, Pagrol provides an efficient MapReduce-based parallel graph cubing algorithm, MRGraph-Cubing, to compute the graph cube for an attributed graph. Pagrol employs numerous optimization techniques: (a) a self-contained join strategy to minimize I/O cost; (b) a scheme that groups cuboids into batches so as to minimize redundant computations; (c) a cost-based scheme to allocate the batches into bags (each with a small number of batches); and (d) an efficient scheme to process a bag using a single MapReduce job. Results of extensive experimental studies using both real Facebook and synthetic datasets on a 128-node cluster show that Pagrol is effective, efficient and scalable.",
"title": ""
},
{
"docid": "f37d9a57fd9100323c70876cf7a1d7ad",
"text": "Neural networks encounter serious catastrophic forgetting when information is learned sequentially, which is unacceptable for both a model of human memory and practical engineering applications. In this study, we propose a novel biologically inspired dual-network memory model that can significantly reduce catastrophic forgetting. The proposed model consists of two distinct neural networks: hippocampal and neocortical networks. Information is first stored in the hippocampal network, and thereafter, it is transferred to the neocortical network. In the hippocampal network, chaotic behavior of neurons in the CA3 region of the hippocampus and neuronal turnover in the dentate gyrus region are introduced. Chaotic recall by CA3 enables retrieval of stored information in the hippocampal network. Thereafter, information retrieved from the hippocampal network is interleaved with previously stored information and consolidated by using pseudopatterns in the neocortical network. The computer simulation results show the effectiveness of the proposed dual-network memory model. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
32262952ce4d4b250f0be1985e087814
|
Runtime Prediction for Scale-Out Data Analytics
|
[
{
"docid": "66f684ba92fe735fecfbfb53571bad5f",
"text": "Some empirical learning tasks are concerned with predicting values rather than the more familiar categories. This paper describes a new system, m5, that constructs tree-based piecewise linear models. Four case studies are presented in which m5 is compared to other methods.",
"title": ""
},
{
"docid": "a50ec2ab9d5d313253c6656049d608b3",
"text": "A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process de ned on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight) and directed. Let G be such a graph. The MCL algorithm simulates ow in G by rst identifying G in a canonical way with a Markov graph G1. Flow is then alternatingly expanded and contracted, leading to a row of Markov Graphs G(i). Flow expansion corresponds with taking the k power of a stochastic matrix, where k 2 IN . Flow contraction corresponds with a parametrized operator r, r 0, which maps the set of (column) stochastic matrices onto itself. The image rM is obtained by raising each entry in M to the r th power and rescaling each column to have sum 1 again. The heuristic underlying this approach is the expectation that ow between dense regions which are sparsely connected will evaporate. The invariant limits of the process are easily derived and in practice the process converges very fast to such a limit, the structure of which has a generic interpretation as an overlapping clustering of the graph G. Overlap is limited to cases where the input graph has a symmetric structure inducing it. The contraction and expansion parameters of the MCL process in uence the granularity of the output. The algorithm is space and time e cient and lends itself to drastic scaling. This report describes the MCL algorithm and process, convergence towards equilibrium states, interpretation of the states as clusterings, and implementation and scalability. The algorithm is introduced by rst considering several related proposals towards graph clustering, of both combinatorial and probabilistic nature. 2000 Mathematics Subject Classi cation: 05B20, 15A48, 15A51, 62H30, 68R10, 68T10, 90C35.",
"title": ""
},
{
"docid": "6c2a0afc5a93fe4d73661a3f50fab126",
"text": "As massive data acquisition and storage becomes increasingly a↵ordable, a wide variety of enterprises are employing statisticians to engage in sophisticated data analysis. In this paper we highlight the emerging practice of Magnetic, Agile, Deep (MAD) data analysis as a radical departure from traditional Enterprise Data Warehouses and Business Intelligence. We present our design philosophy, techniques and experience providing MAD analytics for one of the world’s largest advertising networks at Fox Audience Network, using the Greenplum parallel database system. We describe database design methodologies that support the agile working style of analysts in these settings. We present dataparallel algorithms for sophisticated statistical techniques, with a focus on density methods. Finally, we reflect on database system features that enable agile design and flexible algorithm development using both SQL and MapReduce interfaces over a variety of storage mechanisms.",
"title": ""
}
] |
[
{
"docid": "df7a68ebb9bc03d8a73a54ab3474373f",
"text": "We report on the implementation of a color-capable sub-pixel resolving optofluidic microscope based on the pixel super-resolution algorithm and sequential RGB illumination, for low-cost on-chip color imaging of biological samples with sub-cellular resolution.",
"title": ""
},
{
"docid": "2f9ebb8992542b8d342642b6ea361b54",
"text": "Falsifying Financial Statements involves the manipulation of financial accounts by overstating assets, sales and profit, or understating liabilities, expenses, or losses. This paper explores the effectiveness of an innovative classification methodology in detecting firms that issue falsified financial statements (FFS) and the identification of the factors associated to FFS. The methodology is based on the concepts of multicriteria decision aid (MCDA) and the application of the UTADIS classification method (UTilités Additives DIScriminantes). A sample of 76 Greek firms (38 with FFS and 38 non-FFS) described over ten financial ratios is used for detecting factors associated with FFS. A Jackknife procedure approach is employed for model validation and comparison with multivariate statistical techniques, namely discriminant and logit analysis. The results indicate that the proposed MCDA methodology outperforms traditional statistical techniques which are widely used for FFS detection purposes. Furthermore, the results indicate that the investigation of financial information can be helpful towards the identification of FFS and highlight the importance of financial ratios such as the total debt to total assets ratio, the inventories to sales ratio, the net profit to sales ratio and the sales to total assets ratio.",
"title": ""
},
{
"docid": "e96b49a1ee9dd65bb920507d65810501",
"text": "The objective of this paper is to compare the time specification performance between conventional controller PID and modern controller SMC for an inverted pendulum system. The goal is to determine which control strategy delivers better performance with respect to pendulum’s angle and cart’s position. The inverted pendulum represents a challenging control problem, which continually moves toward an uncontrolled state. Two controllers are presented such as Sliding Mode Control (SMC) and ProportionalIntegral-Derivatives (PID) controllers for controlling the highly nonlinear system of inverted pendulum model. Simulation study has been done in Matlab Mfile and simulink environment shows that both controllers are capable to control multi output inverted pendulum system successfully. The result shows that Sliding Mode Control (SMC) produced better response compared to PID control strategies and the responses are presented in time domain with the details analysis. Keywords—SMC, PID, Inverted Pendulum System.",
"title": ""
},
{
"docid": "9f362249c508abe7f0146158d9370395",
"text": "A shadow appears on an area when the light from a source cannot reach the area due to obstruction by an object. The shadows are sometimes helpful for providing useful information about objects. However, they cause problems in computer vision applications, such as segmentation, object detection and object counting. Thus shadow detection and removal is a pre-processing task in many computer vision applications. This paper proposes a simple method to detect and remove shadows from a single RGB image. A shadow detection method is selected on the basis of the mean value of RGB image in A and B planes of LAB equivalent of the image. The shadow removal is done by multiplying the shadow region by a constant. Shadow edge correction is done to reduce the errors due to diffusion in the shadow boundary.",
"title": ""
},
{
"docid": "719b4c5352d94d5ae52172b3c8a2512d",
"text": "Acts of violence account for an estimated 1.43 million deaths worldwide annually. While violence can occur in many contexts, individual acts of aggression account for the majority of instances. In some individuals, repetitive acts of aggression are grounded in an underlying neurobiological susceptibility that is just beginning to be understood. The failure of \"top-down\" control systems in the prefrontal cortex to modulate aggressive acts that are triggered by anger provoking stimuli appears to play an important role. An imbalance between prefrontal regulatory influences and hyper-responsivity of the amygdala and other limbic regions involved in affective evaluation are implicated. Insufficient serotonergic facilitation of \"top-down\" control, excessive catecholaminergic stimulation, and subcortical imbalances of glutamatergic/gabaminergic systems as well as pathology in neuropeptide systems involved in the regulation of affiliative behavior may contribute to abnormalities in this circuitry. Thus, pharmacological interventions such as mood stabilizers, which dampen limbic irritability, or selective serotonin reuptake inhibitors (SSRIs), which may enhance \"top-down\" control, as well as psychosocial interventions to develop alternative coping skills and reinforce reflective delays may be therapeutic.",
"title": ""
},
{
"docid": "b57006686160241bf118c2c638971764",
"text": "Reproducibility is the hallmark of good science. Maintaining a high degree of transparency in scientific reporting is essential not just for gaining trust and credibility within the scientific community but also for facilitating the development of new ideas. Sharing data and computer code associated with publications is becoming increasingly common, motivated partly in response to data deposition requirements from journals and mandates from funders. Despite this increase in transparency, it is still difficult to reproduce or build upon the findings of most scientific publications without access to a more complete workflow. Version control systems (VCS), which have long been used to maintain code repositories in the software industry, are now finding new applications in science. One such open source VCS, Git, provides a lightweight yet robust framework that is ideal for managing the full suite of research outputs such as datasets, statistical code, figures, lab notes, and manuscripts. For individual researchers, Git provides a powerful way to track and compare versions, retrace errors, explore new approaches in a structured manner, while maintaining a full audit trail. For larger collaborative efforts, Git and Git hosting services make it possible for everyone to work asynchronously and merge their contributions at any time, all the while maintaining a complete authorship trail. In this paper I provide an overview of Git along with use-cases that highlight how this tool can be leveraged to make science more reproducible and transparent, foster new collaborations, and support novel uses.",
"title": ""
},
{
"docid": "55aa10937266b6f24157b87a9ecc6e34",
"text": "For thousands of years, honey has been used for medicinal applications. The beneficial effects of honey, particularly its anti-microbial activity represent it as a useful option for management of various wounds. Honey contains major amounts of carbohydrates, lipids, amino acids, proteins, vitamin and minerals that have important roles in wound healing with minimum trauma during redressing. Because bees have different nutritional behavior and collect the nourishments from different and various plants, the produced honeys have different compositions. Thus different types of honey have different medicinal value leading to different effects on wound healing. This review clarifies the mechanisms and therapeutic properties of honey on wound healing. The mechanisms of action of honey in wound healing are majorly due to its hydrogen peroxide, high osmolality, acidity, non-peroxide factors, nitric oxide and phenols. Laboratory studies and clinical trials have shown that honey promotes autolytic debridement, stimulates growth of wound tissues and stimulates anti-inflammatory activities thus accelerates the wound healing processes. Compared with topical agents such as hydrofiber silver or silver sulfadiazine, honey is more effective in elimination of microbial contamination, reduction of wound area, promotion of re-epithelialization. In addition, honey improves the outcome of the wound healing by reducing the incidence and excessive scar formation. Therefore, application of honey can be an effective and economical approach in managing large and complicated wounds.",
"title": ""
},
{
"docid": "673bf6ecf9ae6fb61f7b01ff284c0a5f",
"text": "We describe a method for visual question answering which is capable of reasoning about contents of an image on the basis of information extracted from a large-scale knowledge base. The method not only answers natural language questions using concepts not contained in the image, but can provide an explanation of the reasoning by which it developed its answer. The method is capable of answering far more complex questions than the predominant long short-term memory-based approach, and outperforms it significantly in the testing. We also provide a dataset and a protocol by which to evaluate such methods, thus addressing one of the key issues in general visual question answering.",
"title": ""
},
{
"docid": "d72f47ad136ebb9c74abe484980b212f",
"text": "This paper introduces a novel architecture for reinforcement learning with deep neural networks designed to handle state and action spaces characterized by natural language, as found in text-based games. Termed a deep reinforcement relevance network (DRRN), the architecture represents action and state spaces with separate embedding vectors, which are combined with an interaction function to approximate the Q-function in reinforcement learning. We evaluate the DRRN on two popular text games, showing superior performance over other deep Qlearning architectures. Experiments with paraphrased action descriptions show that the model is extracting meaning rather than simply memorizing strings of text.",
"title": ""
},
{
"docid": "3fb8519ca0de4871b105df5c5d8e489f",
"text": "Intra-Body Communication (IBC), which modulates ionic currents over the human body as the communication medium, offers a low power and reliable signal transmission method for information exchange across the body. This paper first briefly reviews the quasi-static electromagnetic (EM) field modeling for a galvanic-type IBC human limb operating below 1 MHz and obtains the corresponding transfer function with correction factor using minimum mean square error (MMSE) technique. Then, the IBC channel characteristics are studied through the comparison between theoretical calculations via this transfer function and experimental measurements in both frequency domain and time domain. High pass characteristics are obtained in the channel gain analysis versus different transmission distances. In addition, harmonic distortions are analyzed in both baseband and passband transmissions for square input waves. The experimental results are consistent with the calculation results from the transfer function with correction factor. Furthermore, we also explore both theoretical and simulation results for the bit-error-rate (BER) performance of several common modulation schemes in the IBC system with a carrier frequency of 500 kHz. It is found that the theoretical results are in good agreement with the simulation results.",
"title": ""
},
{
"docid": "5717a94b8dd53e42bc96c4e1444d5903",
"text": "A spoken dialogue system (SDS) is a specialised form of computer system that operates as an interface between users and the application, using spoken natural language as the primary means of communication. The motivation for spoken interaction with such systems is that it allows for a natural and efficient means of communication. It is for this reason that the use of an SDS has been considered as a means for furthering development of DST Group’s Consensus project by providing an engaging spoken interface to high-level information fusion software. This document provides a general overview of the key issues surrounding the development of such interfaces.",
"title": ""
},
{
"docid": "0870519536e7229f861323bd4a44c4d2",
"text": "It has become increasingly common for websites and computer media to provide computer generated visual images, called avatars, to represent users and bots during online interactions. In this study, participants (N=255) evaluated a series of avatars in a static context in terms of their androgyny, anthropomorphism, credibility, homophily, attraction, and the likelihood they would choose them during an interaction. The responses to the images were consistent with what would be predicted by uncertainty reduction theory. The results show that the masculinity or femininity (lack of androgyny) of an avatar, as well as anthropomorphism, significantly influence perceptions of avatars. Further, more anthropomorphic avatars were perceived to be more attractive and credible, and people were more likely to choose to be represented by them. Participants reported masculine avatars as less attractive than feminine avatars, and most people reported a preference for human avatars that matched their gender. Practical and theoretical implications of these results for users, designers, and researchers of avatars are discussed.",
"title": ""
},
{
"docid": "b30af7c9565effd44f433abc62e1ff14",
"text": "Feedback on designs is critical for helping users iterate toward effective solutions. This paper presents Voyant, a novel system giving users access to a non-expert crowd to receive perception-oriented feedback on their designs from a selected audience. Based on a formative study, the system generates the elements seen in a design, the order in which elements are noticed, impressions formed when the design is first viewed, and interpretation of the design relative to guidelines in the domain and the user's stated goals. An evaluation of the system was conducted with users and their designs. Users reported the feedback about impressions and interpretation of their goals was most helpful, though the other feedback types were also valued. Users found the coordinated views in Voyant useful for analyzing relations between the crowd's perception of a design and the visual elements within it. The cost of generating the feedback was considered a reasonable tradeoff for not having to organize critiques or interrupt peers.",
"title": ""
},
{
"docid": "96f4f77f114fec7eca22d0721c5efcbe",
"text": "Aggregation structures with explicit information, such as image attributes and scene semantics, are effective and popular for intelligent systems for assessing aesthetics of visual data. However, useful information may not be available due to the high cost of manual annotation and expert design. In this paper, we present a novel multi-patch (MP) aggregation method for image aesthetic assessment. Different from state-of-the-art methods, which augment an MP aggregation network with various visual attributes, we train the model in an end-to-end manner with aesthetic labels only (i.e., aesthetically positive or negative). We achieve the goal by resorting to an attention-based mechanism that adaptively adjusts the weight of each patch during the training process to improve learning efficiency. In addition, we propose a set of objectives with three typical attention mechanisms (i.e., average, minimum, and adaptive) and evaluate their effectiveness on the Aesthetic Visual Analysis (AVA) benchmark. Numerical results show that our approach outperforms existing methods by a large margin. We further verify the effectiveness of the proposed attention-based objectives via ablation studies and shed light on the design of aesthetic assessment systems.",
"title": ""
},
{
"docid": "a88b5c0c627643e0d7b17649ac391859",
"text": "Abduction is a useful decision problem that is related to diagnostics. Given some observation in form of a set of axioms, that is not entailed by a knowledge base, we are looking for explanations, sets of axioms, that can be added to the knowledge base in order to entail the observation. ABox abduction limits both observations and explanations to ABox assertions. In this work we focus on direct tableau-based approach to answer ABox abduction. We develop an ABox abduction algorithm for the ALCHO DL, that is based on Reiter’s minimal hitting set algorithm. We focus on the class of explanations allowing atomic and negated atomic concept assertions, role assertions, and negated role assertions. The algorithm is sound and complete for this class. The algorithm was also implemented, on top of the Pellet reasoner.",
"title": ""
},
{
"docid": "f783860e569d9f179466977db544bd01",
"text": "In medical research, continuous variables are often converted into categorical variables by grouping values into two or more categories. We consider in detail issues pertaining to creating just two groups, a common approach in clinical research. We argue that the simplicity achieved is gained at a cost; dichotomization may create rather than avoid problems, notably a considerable loss of power and residual confounding. In addition, the use of a data-derived 'optimal' cutpoint leads to serious bias. We illustrate the impact of dichotomization of continuous predictor variables using as a detailed case study a randomized trial in primary biliary cirrhosis. Dichotomization of continuous data is unnecessary for statistical analysis and in particular should not be applied to explanatory variables in regression models.",
"title": ""
},
{
"docid": "f14757e2e1d893b5cc0c7498f531d0e0",
"text": "A new irradiation facility has been developed in the RA-3 reactor in order to perform trials for the treatment of liver metastases using boron neutron capture therapy (BNCT). RA-3 is a production research reactor that works continuously five days a week. It had a thermal column with a small cross section access tunnel that was not accessible during operation. The objective of the work was to perform the necessary modifications to obtain a facility for irradiating a portion of the human liver. This irradiation facility must be operated without disrupting the normal reactor schedule and requires a highly thermalized neutron spectrum, a thermal flux of around 10(10) n cm(-2)s(-1) that is as isotropic and uniform as possible, as well as on-line instrumentation. The main modifications consist of enlarging the access tunnel inside the thermal column to the suitable dimensions, reducing the gamma dose rate at the irradiation position, and constructing properly shielded entrance gates enabled by logical control to safely irradiate and withdraw samples with the reactor at full power. Activation foils and a neutron shielded graphite ionization chamber were used for a preliminary in-air characterization of the irradiation site. The constructed facility is very practical and easy to use. Operational authorization was obtained from radioprotection personnel after confirming radiation levels did not significantly increase after the modification. A highly thermalized and homogenous irradiation field was obtained. Measurements in the empty cavity showed a thermal flux near 10(10) n cm(-2)s(-1), a cadmium ratio of 4100 for gold foils and a gamma dose rate of approximately 5 Gy h(-1).",
"title": ""
},
{
"docid": "799904b20f1174f01c0d2dd87c57e097",
"text": "ix",
"title": ""
},
{
"docid": "90c8deec8869977ac5e3feb9a6037569",
"text": "Want to get experience? Want to get any ideas to create new things in your life? Read memory a contribution to experimental psychology now! By reading this book as soon as possible, you can renew the situation to get the inspirations. Yeah, this way will lead you to always think more and more. In this case, this book will be always right for you. When you can observe more about the book, you will know why you need this.",
"title": ""
},
{
"docid": "723bfb5acef53d78a05660e5d9710228",
"text": "Cheap micro-controllers, such as the Arduino or other controllers based on the Atmel AVR CPUs are being deployed in a wide variety of projects, ranging from sensors networks to robotic submarines. In this paper, we investigate the feasibility of using the Arduino as a true random number generator (TRNG). The Arduino Reference Manual recommends using it to seed a pseudo random number generator (PRNG) due to its ability to read random atmospheric noise from its analog pins. This is an enticing application since true bits of entropy are hard to come by. Unfortunately, we show by statistical methods that the atmospheric noise of an Arduino is largely predictable in a variety of settings, and is thus a weak source of entropy. We explore various methods to extract true randomness from the micro-controller and conclude that it should not be used to produce randomness from its analog pins.",
"title": ""
}
] |
scidocsrr
|
40c24a69387dd3269018b94f2ee88032
|
University of Mannheim @ CLSciSumm-17: Citation-Based Summarization of Scientific Articles Using Semantic Textual Similarity
|
[
{
"docid": "16de36d6bf6db7c294287355a44d0f61",
"text": "The Computational Linguistics (CL) Summarization Pilot Task was created to encourage a community effort to address the research problem of summarizing research articles as “faceted summaries” in the domain of computational linguistics. In this pilot stage, a handannotated set of citing papers was provided for ten reference papers to help in automating the citation span and discourse facet identification problems. This paper details the corpus construction efforts by the organizers and the participating teams, who also participated in the task-based evaluation. The annotated development corpus used for this pilot task is publicly available at: https://github.com/WING-",
"title": ""
},
{
"docid": "ce2ef27f032d30ce2bc6aa5509a58e49",
"text": "Bibliometric measures are commonly used to estimate the popularity and the impact of published research. Existing bibliometric measures provide “quantitative” indicators of how good a published paper is. This does not necessarily reflect the “quality” of the work presented in the paper. For example, when hindex is computed for a researcher, all incoming citations are treated equally, ignoring the fact that some of these citations might be negative. In this paper, we propose using NLP to add a “qualitative” aspect to biblometrics. We analyze the text that accompanies citations in scientific articles (which we term citation context). We propose supervised methods for identifying citation text and analyzing it to determine the purpose (i.e. author intention) and the polarity (i.e. author sentiment) of citation.",
"title": ""
}
] |
[
{
"docid": "4dcdb2520ec5f9fc9c32f2cbb343808c",
"text": "Shannon’s mathematical theory of communication defines fundamental limits on how much information can be transmitted between the different components of any man-made or biological system. This paper is an informal but rigorous introduction to the main ideas implicit in Shannon’s theory. An annotated reading list is provided for further reading.",
"title": ""
},
{
"docid": "6dbf49c714f6e176273317d4274b93de",
"text": "Categorical compositional distributional model of [9] sug gests a way to combine grammatical composition of the formal, type logi cal models with the corpus based, empirical word representations of distribut ional semantics. This paper contributes to the project by expanding the model to al so capture entailment relations. This is achieved by extending the representatio s of words from points in meaning space to density operators, which are probabilit y d stributions on the subspaces of the space. A symmetric measure of similarity an d an asymmetric measure of entailment is defined, where lexical entailment i s measured using von Neumann entropy, the quantum variant of Kullback-Leibl er divergence. Lexical entailment, combined with the composition map on wo rd representations, provides a method to obtain entailment relations on the leve l of sentences. Truth theoretic and corpus-based examples are provided.",
"title": ""
},
{
"docid": "d350335bab7278f5c8c0d9ceb0e6b50b",
"text": "New remote sensing sensors now acquire high spatial and spectral Satellite Image Time Series (SITS) of the world. These series of images are a key component of classification systems that aim at obtaining up-to-date and accurate land cover maps of the Earth’s surfaces. More specifically, the combination of the temporal, spectral and spatial resolutions of new SITS makes possible to monitor vegetation dynamics. Although traditional classification algorithms, such as Random Forest (RF), have been successfully applied for SITS classification, these algorithms do not make the most of the temporal domain. Conversely, some approaches that take into account the temporal dimension have recently been tested, especially Recurrent Neural Networks (RNNs). This paper proposes an exhaustive study of another deep learning approaches, namely Temporal Convolutional Neural Networks (TempCNNs) where convolutions are applied in the temporal dimension. The goal is to quantitatively and qualitatively evaluate the contribution of TempCNNs for SITS classification. This paper proposes a set of experiments performed on one million time series extracted from 46 Formosat-2 images. The experimental results show that TempCNNs are more accurate than RF and RNNs, that are the current state of the art for SITS classification. We also highlight some differences with results obtained in computer vision, e.g. about pooling layers. Moreover, we provide some general guidelines on the network architecture, common regularization mechanisms, and hyper-parameter values such as batch size. Finally, we assess the visual quality of the land cover maps produced by TempCNNs.",
"title": ""
},
{
"docid": "4db9cf56991edae0f5ca34546a8052c4",
"text": "This chapter presents a survey of interpolation and resampling techniques in the context of exact, separable interpolation of regularly sampled data. In this context, the traditional view of interpolation is to represent an arbitrary continuous function as a discrete sum of weighted and shifted synthesis functions—in other words, a mixed convolution equation. An important issue is the choice of adequate synthesis functions that satisfy interpolation properties. Examples of finite-support ones are the square pulse (nearest-neighbor interpolation), the hat function (linear interpolation), the cubic Keys' function, and various truncated or windowed versions of the sinc function. On the other hand, splines provide examples of infinite-support interpolation functions that can be realized exactly at a finite, surprisingly small computational cost. We discuss implementation issues and illustrate the performance of each synthesis function. We also highlight several artifacts that may arise when performing interpolation, such as ringing, aliasing, blocking and blurring. We explain why the approximation order inherent in the synthesis function is important to limit these interpolation artifacts, which motivates the use of splines as a tunable way to keep them in check without any significant cost penalty. I. I NTRODUCTION Interpolation is a technique that pervades many an application. Interpolation is almost never the goal in itself, yet it affects both the desired results and the ways to obtain them. Notwithstanding its nearly universal relevance, some authors give it less importance than it deserves, perhaps because considerations on interpolation are felt as being paltry when compared to the description of a more inspiring grand scheme of things of some algorithm or method. Due to this indifference, it appears as if the basic principles that underlie interpolation might be sometimes cast aside, or even misunderstood. The goal of this chapter is to refresh the notions encountered in classical interpolation, as well as to introduce the reader to more general approaches. 1.1. Definition What is interpolation? Several answers coexist. One of them defines interpolation as an informed estimate of the unknown [1]. We prefer the following—admittedly less concise—definition: modelbased recovery of continuous data from discrete data within a known range of abscissa. The reason for this preference is to allow for a clearer distinction between interpolation and extrapolation. The former postulates the existence of a known range where the model applies, and asserts that the deterministicallyrecovered continuous data is entirely described by the discrete data, while the latter authorizes the use of the model outside of the known range, with the implicit assumption that the model is \"good\" near data samples, and possibly less good elsewhere. Finally, the three most important hypothesis for interpolation are:",
"title": ""
},
{
"docid": "25bd9169c68ff39ee3a7edbdb65f1aa2",
"text": "Social networks such as Twitter and Facebook are important and widely used communication environments that exhibit scale, complexity, node interaction, and emergent behavior. In this paper, we analyze emergent behavior in Twitter and propose a definition of emergent behavior focused on the pervasiveness of a topic within a community. We extend an existing stochastic model for user behavior, focusing on advocate-follower relationships. The new user posting model includes retweets, replies, and mentions as user responses. To capture emergence, we propose a RPBS (Rising, Plateau, Burst and Stabilization) topic pervasiveness model with a new metric that captures how frequent and in what form the community is talking about a particular topic. Our initial validation compares our model with four Twitter datasets. Our extensive experimental analysis allows us to explore several “what-if” scenarios with respect to topic and knowledge sharing, showing how a pervasive topic evolves given various popularity scenarios.",
"title": ""
},
{
"docid": "e9f9d022007833ab7ae928619641e1b1",
"text": "BACKGROUND\nDissemination and implementation of health care interventions are currently hampered by the variable quality of reporting of implementation research. Reporting of other study types has been improved by the introduction of reporting standards (e.g. CONSORT). We are therefore developing guidelines for reporting implementation studies (StaRI).\n\n\nMETHODS\nUsing established methodology for developing health research reporting guidelines, we systematically reviewed the literature to generate items for a checklist of reporting standards. We then recruited an international, multidisciplinary panel for an e-Delphi consensus-building exercise which comprised an initial open round to revise/suggest a list of potential items for scoring in the subsequent two scoring rounds (scale 1 to 9). Consensus was defined a priori as 80% agreement with the priority scores of 7, 8, or 9.\n\n\nRESULTS\nWe identified eight papers from the literature review from which we derived 36 potential items. We recruited 23 experts to the e-Delphi panel. Open round comments resulted in revisions, and 47 items went forward to the scoring rounds. Thirty-five items achieved consensus: 19 achieved 100% agreement. Prioritised items addressed the need to: provide an evidence-based justification for implementation; describe the setting, professional/service requirements, eligible population and intervention in detail; measure process and clinical outcomes at population level (using routine data); report impact on health care resources; describe local adaptations to the implementation strategy and describe barriers/facilitators. Over-arching themes from the free-text comments included balancing the need for detailed descriptions of interventions with publishing constraints, addressing the dual aims of reporting on the process of implementation and effectiveness of the intervention and monitoring fidelity to an intervention whilst encouraging adaptation to suit diverse local contexts.\n\n\nCONCLUSIONS\nWe have identified priority items for reporting implementation studies and key issues for further discussion. An international, multidisciplinary workshop, where participants will debate the issues raised, clarify specific items and develop StaRI standards that fit within the suite of EQUATOR reporting guidelines, is planned.\n\n\nREGISTRATION\nThe protocol is registered with Equator: http://www.equator-network.org/library/reporting-guidelines-under-development/#17 .",
"title": ""
},
{
"docid": "e2cf52f0625af866c8842fb3d5c49d04",
"text": "Human immunodeficiency virus type 1 (HIV-1) can infect nondividing cells via passing through the nuclear pore complex. The nuclear membrane-imbedded protein SUN2 was recently reported to be involved in the nuclear import of HIV-1. Whether SUN1, which shares many functional similarities with SUN2, is involved in this process remained to be explored. Here we report that overexpression of SUN1 specifically inhibited infection by HIV-1 but not that by simian immunodeficiency virus (SIV) or murine leukemia virus (MLV). Overexpression of SUN1 did not affect reverse transcription but led to reduced accumulation of the 2-long-terminal-repeat (2-LTR) circular DNA and integrated viral DNA, suggesting a block in the process of nuclear import. HIV-1 CA was mapped as a determinant for viral sensitivity to SUN1. Treatment of SUN1-expressing cells with cyclosporine (CsA) significantly reduced the sensitivity of the virus to SUN1, and an HIV-1 mutant containing CA-G89A, which does not interact with cyclophilin A (CypA), was resistant to SUN1 overexpression. Downregulation of endogenous SUN1 inhibited the nuclear entry of the wild-type virus but not that of the G89A mutant. These results indicate that SUN1 participates in the HIV-1 nuclear entry process in a manner dependent on the interaction of CA with CypA.IMPORTANCE HIV-1 infects both dividing and nondividing cells. The viral preintegration complex (PIC) can enter the nucleus through the nuclear pore complex. It has been well known that the viral protein CA plays an important role in determining the pathways by which the PIC enters the nucleus. In addition, the interaction between CA and the cellular protein CypA has been reported to be important in the selection of nuclear entry pathways, though the underlying mechanisms are not very clear. Here we show that both SUN1 overexpression and downregulation inhibited HIV-1 nuclear entry. CA played an important role in determining the sensitivity of the virus to SUN1: the regulatory activity of SUN1 toward HIV-1 relied on the interaction between CA and CypA. These results help to explain how SUN1 is involved in the HIV-1 nuclear entry process.",
"title": ""
},
{
"docid": "345e46da9fc01a100f10165e82d9ca65",
"text": "We present a new theoretical framework for analyzing and learning artificial neural networks. Our approach simultaneously and adaptively learns both the structure of the network as well as its weights. The methodology is based upon and accompanied by strong data-dependent theoretical learning guarantees, so that the final network architecture provably adapts to the complexity of any given problem.",
"title": ""
},
{
"docid": "fceb43462f77cf858ef9747c1c5f0728",
"text": "MapReduce has become a dominant parallel computing paradigm for big data, i.e., colossal datasets at the scale of tera-bytes or higher. Ideally, a MapReduce system should achieve a high degree of load balancing among the participating machines, and minimize the space usage, CPU and I/O time, and network transfer at each machine. Although these principles have guided the development of MapReduce algorithms, limited emphasis has been placed on enforcing serious constraints on the aforementioned metrics simultaneously. This paper presents the notion of minimal algorithm, that is, an algorithm that guarantees the best parallelization in multiple aspects at the same time, up to a small constant factor. We show the existence of elegant minimal algorithms for a set of fundamental database problems, and demonstrate their excellent performance with extensive experiments.",
"title": ""
},
{
"docid": "3bf37b20679ca6abd022571e3356e95d",
"text": "OBJECTIVE\nOur goal is to create an ontology that will allow data integration and reasoning with subject data to classify subjects, and based on this classification, to infer new knowledge on Autism Spectrum Disorder (ASD) and related neurodevelopmental disorders (NDD). We take a first step toward this goal by extending an existing autism ontology to allow automatic inference of ASD phenotypes and Diagnostic & Statistical Manual of Mental Disorders (DSM) criteria based on subjects' Autism Diagnostic Interview-Revised (ADI-R) assessment data.\n\n\nMATERIALS AND METHODS\nKnowledge regarding diagnostic instruments, ASD phenotypes and risk factors was added to augment an existing autism ontology via Ontology Web Language class definitions and semantic web rules. We developed a custom Protégé plugin for enumerating combinatorial OWL axioms to support the many-to-many relations of ADI-R items to diagnostic categories in the DSM. We utilized a reasoner to infer whether 2642 subjects, whose data was obtained from the Simons Foundation Autism Research Initiative, meet DSM-IV-TR (DSM-IV) and DSM-5 diagnostic criteria based on their ADI-R data.\n\n\nRESULTS\nWe extended the ontology by adding 443 classes and 632 rules that represent phenotypes, along with their synonyms, environmental risk factors, and frequency of comorbidities. Applying the rules on the data set showed that the method produced accurate results: the true positive and true negative rates for inferring autistic disorder diagnosis according to DSM-IV criteria were 1 and 0.065, respectively; the true positive rate for inferring ASD based on DSM-5 criteria was 0.94.\n\n\nDISCUSSION\nThe ontology allows automatic inference of subjects' disease phenotypes and diagnosis with high accuracy.\n\n\nCONCLUSION\nThe ontology may benefit future studies by serving as a knowledge base for ASD. In addition, by adding knowledge of related NDDs, commonalities and differences in manifestations and risk factors could be automatically inferred, contributing to the understanding of ASD pathophysiology.",
"title": ""
},
{
"docid": "7e264804d56cab24454c59fe73b51884",
"text": "General Douglas MacArthur remarked that \"old soldiers never die; they just fade away.\" For decades, researchers have concluded that visual working memories, like old soldiers, fade away gradually, becoming progressively less precise as they are retained for longer periods of time. However, these conclusions were based on threshold-estimation procedures in which the complete termination of a memory could artifactually produce the appearance of lower precision. Here, we use a recall-based visual working memory paradigm that provides separate measures of the probability that a memory is available and the precision of the memory when it is available. Using this paradigm, we demonstrate that visual working memory representations may be retained for several seconds with little or no loss of precision, but that they may terminate suddenly and completely during this period.",
"title": ""
},
{
"docid": "d19503f965e637089d9fa200329f1349",
"text": "Almost a half century ago, regular endurance exercise was shown to improve the capacity of skeletal muscle to oxidize substrates to produce ATP for muscle work. Since then, adaptations in skeletal muscle mRNA level were shown to happen with a single bout of exercise. Protein changes occur within days if daily endurance exercise continues. Some of the mRNA and protein changes cause increases in mitochondrial concentrations. One mitochondrial adaptation that occurs is an increase in fatty acid oxidation at a given absolute, submaximal workload. Mechanisms have been described as to how endurance training increases mitochondria. Importantly, Pgc-1α is a master regulator of mitochondrial biogenesis by increasing many mitochondrial proteins. However, not all adaptations to endurance training are associated with increased mitochondrial concentrations. Recent evidence suggests that the energetic demands of muscle contraction are by themselves stronger controllers of body weight and glucose control than is muscle mitochondrial content. Endurance exercise has also been shown to regulate the processes of mitochondrial fusion and fission. Mitophagy removes damaged mitochondria, a process that maintains mitochondrial quality. Skeletal muscle fibers are composed of different phenotypes, which are based on concentrations of mitochondria and various myosin heavy chain protein isoforms. Endurance training at physiological levels increases type IIa fiber type with increased mitochondria and type IIa myosin heavy chain. Endurance training also improves capacity of skeletal muscle blood flow. Endurance athletes possess enlarged arteries, which may also exhibit decreased wall thickness. VEGF is required for endurance training-induced increases in capillary-muscle fiber ratio and capillary density.",
"title": ""
},
{
"docid": "58b957db2e72d76e5ee1fc5102df7dc1",
"text": "This paper presents a novel inverse kinematics solution for robotic arm based on artificial neural network (ANN) architecture. The motion of robotic arm is controlled by the kinematics of ANN. A new artificial neural network approach for inverse kinematics is proposed. The novelty of the proposed ANN is the inclusion of the feedback of current joint angles configuration of robotic arm as well as the desired position and orientation in the input pattern of neural network, while the traditional ANN has only the desired position and orientation of the end effector in the input pattern of neural network. In this paper, a six DOF Denso robotic arm with a gripper is controlled by ANN. The comprehensive experimental results proved the applicability and the efficiency of the proposed approach in robotic motion control. The inclusion of current configuration of joint angles in ANN significantly increased the accuracy of ANN estimation of the joint angles output. The new controller design has advantages over the existing techniques for minimizing the position error in unconventional tasks and increasing the accuracy of ANN in estimation of robot's joint angles.",
"title": ""
},
{
"docid": "ba966c2fc67b88d26a3030763d56ed1a",
"text": "Design of a long read-range, reconfigurable operating frequency radio frequency identification (RFID) metal tag is proposed in this paper. The antenna structure consists of two nonconnected load bars and two bowtie patches electrically connected through four pairs of vias to a conducting backplane to form a looped-bowtie RFID tag antenna that is suitable for mounting on metallic objects. The design offers more degrees of freedom to tune the input impedance of the proposed antenna. The load bars, which have a cutoff point on each bar, can be used to reconfigure the operating frequency of the tag by exciting any one of the three possible frequency modes; hence, this tag can be used worldwide for the UHF RFID frequency band. Experimental tests show that the maximum read range of the prototype, placed on a metallic object, are found to be 3.0, 3.2, and 3.3 m, respectively, for the three operating modes, which has been tested for an RFID reader with only 0.4 W error interrupt pending register (EIPR). The paper shows that the simulated and measured results are in good agreement with each other.",
"title": ""
},
{
"docid": "84963fdc37a3beb8eebc8d5626b53428",
"text": "A fundamental assumption in software security is that memory contents do not change unless there is a legitimate deliberate modification. Classical fault attacks show that this assumption does not hold if the attacker has physical access. Rowhammer attacks showed that local code execution is already sufficient to break this assumption. Rowhammer exploits parasitic effects in DRAM tomodify the content of a memory cell without accessing it. Instead, other memory locations are accessed at a high frequency. All Rowhammer attacks so far were local attacks, running either in a scripted language or native code. In this paper, we present Nethammer. Nethammer is the first truly remote Rowhammer attack, without a single attacker-controlled line of code on the targeted system. Systems that use uncached memory or flush instructions while handling network requests, e.g., for interaction with the network device, can be attacked using Nethammer. Other systems can still be attacked if they are protected with quality-of-service techniques like Intel CAT. We demonstrate that the frequency of the cache misses is in all three cases high enough to induce bit flips. We evaluated different bit flip scenarios. Depending on the location, the bit flip compromises either the security and integrity of the system and the data of its users, or it can leave persistent damage on the system, i.e., persistent denial of service. We investigated Nethammer on personal computers, servers, and mobile phones. Nethammer is a security landslide, making the formerly local attack a remote attack. With this work we invalidate all defenses and mitigation strategies against Rowhammer build upon the assumption of a local attacker. Consequently, this paradigm shift impacts the security of millions of devices where the attacker is not able to execute attacker-controlled code. Nethammer requires threat models to be re-evaluated for most network-connected systems. We discuss state-of-the-art countermeasures and show that most of them have no effect on our attack, including the targetrow-refresh (TRR) countermeasure of modern hardware. Disclaimer: This work on Rowhammer attacks over the network was conducted independently and unaware of other research groups working on truly remote Rowhammer attacks. Experiments and observations presented in this paper, predate the publication of the Throwhammer attack by Tatar et al. [81]. We will thoroughly study the differences between both papers and compare the advantages and disadvantages in a future version of this paper.",
"title": ""
},
{
"docid": "7d7c596d334153f11098d9562753a1ee",
"text": "The design of systems for intelligent control of urban traffic is important in providing a safe environment for pedestrians and motorists. Artificial neural networks (ANNs) (learning systems) and expert systems (knowledge-based systems) have been extensively explored as approaches for decision making. While the ANNs compute decisions by learning from successfully solved examples, the expert systems rely on a knowledge base developed by human reasoning for decision making. It is possible to integrate the learning abilities of an ANN and the knowledge-based decision-making ability of the expert system. This paper presents a real-time intelligent decision making system, IDUTC, for urban traffic control applications. The system integrates a backpropagation-based ANN that can learn and adapt to the dynamically changing environment and a fuzzy expert system for decision making. The performance of the proposed intelligent decision-making system is evaluated by mapping the the adaptable traffic light control problem. The application is implemented using the ANN approach, the FES approach, and the proposed integrated system approach. The results of extensive simulations using the three approaches indicate that the integrated system provides better performance and leads to a more efficient implementation than the other two approaches.",
"title": ""
},
{
"docid": "8914e1a38db6b47f4705f0c684350d38",
"text": "Style transfer is the task of rephrasing the text to contain specific stylistic properties without changing the intent or affect within the context. This paper introduces a new method for automatic style transfer. We first learn a latent representation of the input sentence which is grounded in a language translation model in order to better preserve the meaning of the sentence while reducing stylistic properties. Then adversarial generation techniques are used to make the output match the desired style. We evaluate this technique on three different style transformations: sentiment, gender and political slant. Compared to two state-of-the-art style transfer modeling techniques we show improvements both in automatic evaluation of style transfer and in manual evaluation of meaning preservation and fluency.",
"title": ""
},
{
"docid": "62d63357923c5a7b1ea21b8448e3cba3",
"text": "This paper presents a monocular and purely vision based pedestrian trajectory tracking and prediction framework with integrated map-based hazard inference. In Advanced Driver Assistance systems research, a lot of effort has been put into pedestrian detection over the last decade, and several pedestrian detection systems are indeed showing impressive results. Considerably less effort has been put into processing the detections further. We present a tracking system for pedestrians, which based on detection bounding boxes tracks pedestrians and is able to predict their positions in the near future. The tracking system is combined with a module which, based on the car's GPS position acquires a map and uses the road information in the map to know where the car can drive. Then the system warns the driver about pedestrians at risk, by combining the information about hazardous areas for pedestrians with a probabilistic position prediction for all observed pedestrians.",
"title": ""
},
{
"docid": "21822a9c37a315e6282200fe605debfe",
"text": "This paper provides a survey on speech recognition and discusses the techniques and system that enables computers to accept speech as input. This paper shows the major developments in the field of speech recognition. This paper highlights the speech recognition techniques and provides a brief description about the four stages in which the speech recognition techniques are classified. In addition, this paper gives a description of four feature extraction techniques: Linear Predictive Coding (LPC), Mel-frequency cepstrum (MFFCs), RASTA filtering and Probabilistic Linear Discriminate Analysis (PLDA). The objective of this paper is to summarize the feature extraction techniques used in speech recognition system.",
"title": ""
},
{
"docid": "732fd5463462d11451d78d97dc821d78",
"text": "Since sensors have limited range and coverage, mobile robots often have to make decisions on where to point their sensors. A good sensing strategy allows a robot to collect information that is useful for its tasks. Most existing solutions to this active sensing problem choose the direction that maximally reduces the uncertainty in a single state variable. In more complex problem domains, however, uncertainties exist in multiple state variables, and they affect the performance of the robot in different ways. The robot thus needs to have more sophisticated sensing strategies in order to decide which uncertainties to reduce, and to make the correct trade-offs. In this work, we apply a least squares reinforcement learning method to solve this problem. We implemented and tested the learning approach in the RoboCup domain, where the robot attempts to reach a ball and accurately kick it into the goal. We present experimental results that suggest our approach is able to learn highly effective sensing strategies.",
"title": ""
}
] |
scidocsrr
|
ee92ea3d8841fa379ff3ff4b3bf68fcb
|
Puberty suppression in gender identity disorder: the Amsterdam experience
|
[
{
"docid": "fe2b8921623f3bcf7b8789853b45e912",
"text": "OBJECTIVE\nTo establish the psychosexual outcome of gender-dysphoric children at 16 years or older and to examine childhood characteristics related to psychosexual outcome.\n\n\nMETHOD\nWe studied 77 children who had been referred in childhood to our clinic because of gender dysphoria (59 boys, 18 girls; mean age 8.4 years, age range 5-12 years). In childhood, we measured the children's cross-gender identification and discomfort with their own sex and gender roles. At follow-up 10.4 +/- 3.4 years later, 54 children (mean age 18.9 years, age range 16-28 years) agreed to participate. In this group, we assessed gender dysphoria and sexual orientation.\n\n\nRESULTS\nAt follow-up, 30% of the 77 participants (19 boys and 4 girls) did not respond to our recruiting letter or were not traceable; 27% (12 boys and 9 girls) were still gender dysphoric (persistence group), and 43% (desistance group: 28 boys and 5 girls) were no longer gender dysphoric. Both boys and girls in the persistence group were more extremely cross-gendered in behavior and feelings and were more likely to fulfill gender identity disorder (GID) criteria in childhood than the children in the other two groups. At follow-up, nearly all male and female participants in the persistence group reported having a homosexual or bisexual sexual orientation. In the desistance group, all of the girls and half of the boys reported having a heterosexual orientation. The other half of the boys in the desistance group had a homosexual or bisexual sexual orientation.\n\n\nCONCLUSIONS\nMost children with gender dysphoria will not remain gender dysphoric after puberty. Children with persistent GID are characterized by more extreme gender dysphoria in childhood than children with desisting gender dysphoria. With regard to sexual orientation, the most likely outcome of childhood GID is homosexuality or bisexuality.",
"title": ""
},
{
"docid": "3f292307824ed0b4d7fd59824ff9dd2b",
"text": "The aim of this qualitative study was to obtain a better understanding of the developmental trajectories of persistence and desistence of childhood gender dysphoria and the psychosexual outcome of gender dysphoric children. Twenty five adolescents (M age 15.88, range 14-18), diagnosed with a Gender Identity Disorder (DSM-IV or DSM-IV-TR) in childhood, participated in this study. Data were collected by means of biographical interviews. Adolescents with persisting gender dysphoria (persisters) and those in whom the gender dysphoria remitted (desisters) indicated that they considered the period between 10 and 13 years of age to be crucial. They reported that in this period they became increasingly aware of the persistence or desistence of their childhood gender dysphoria. Both persisters and desisters stated that the changes in their social environment, the anticipated and actual feminization or masculinization of their bodies, and the first experiences of falling in love and sexual attraction had influenced their gender related interests and behaviour, feelings of gender discomfort and gender identification. Although, both persisters and desisters reported a desire to be the other gender during childhood years, the underlying motives of their desire seemed to be different.",
"title": ""
}
] |
[
{
"docid": "51e78c504a3977ea7e706da7e3a06c25",
"text": "This work introduces an affordance characterization employing mechanical wrenches as a metric for predicting and planning with workspace affordances. Although affordances are a commonly used high-level paradigm for robotic task-level planning and learning, the literature has been sparse regarding how to characterize the agent in this object-agent-environment framework. In this work, we propose decomposing a behavior into a vocabulary of characteristic requirements and capabilities that are suitable to predict the affordances of various parts of the workspace. Specifically, we investigate mechanical wrenches as a viable representation of these affordance requirements and capabilities. We then use this vocabulary in a planning system to compose complex motions from simple behavior types in continuous space. The utility of the framework for complex planning is demonstrated on example scenarios both in simulation and with real-world industrial manipulators.",
"title": ""
},
{
"docid": "0eb659fd66ad677f90019f7214aae7e8",
"text": "In this article a relational database schema for a bibliometric database is developed. After the introduction explaining the motivation to use relational databases in bibliometrics, an overview of the related literature is given. A review of typical bibliometric questions serves as an informal requirement analysis. The database schema is developed as an entity-relationship diagram using the structural information typically found in scientific articles. Several SQL queries for the tasks presented in the requirement analysis show the usefulness of the developed database schema.",
"title": ""
},
{
"docid": "d74df8673db783ff80d01f2ccc0fe5bf",
"text": "The search for strategies to mitigate undesirable economic, ecological, and social effects of harmful resource consumption has become an important, socially relevant topic. An obvious starting point for businesses that wish to make value creation more sustainable is to increase the utilization rates of existing resources. Modern social Internet technology is an effective means by which to achieve IT-enabled sharing services, which make idle resource capacity owned by one entity accessible to others who need them but do not want to own them. Successful sharing services require synchronized participation of providers and users of resources. The antecedents of the participation behavior of providers and users has not been systematically addressed by the extant literature. This article therefore proposes a model that explains and predicts the participation behavior in sharing services. Our search for a theoretical foundation revealed the Theory of Planned Behavior as most appropriate lens, because this theory enables us to integrate provider behavior and user behavior as constituents of participation behavior. The model is novel for that it is the first attempt to study the interdependencies between the behavior types in sharing service participation and for that it includes both general and specific determinants of the participation behavior.",
"title": ""
},
{
"docid": "90a1fc43ee44634bce3658463503994e",
"text": "Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure. The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections. In this paper, we find 99.9% of the gradient exchange in distributed SGD are redundant, and propose Deep Gradient Compression (DGC) to greatly reduce the communication bandwidth. To preserve accuracy during this compression, DGC employs four methods: momentum correction, local gradient clipping, momentum factor masking, and warm-up training. We have applied Deep Gradient Compression to image classification, speech recognition, and language modeling with multiple datasets including Cifar10, ImageNet, Penn Treebank, and Librispeech Corpus. On these scenarios, Deep Gradient Compression achieves a gradient compression ratio from 270× to 600× without losing accuracy, cutting the gradient size of ResNet-50 from 97MB to 0.35MB, and for DeepSpeech from 488MB to 0.74MB. Deep gradient compression enables large-scale distributed training on inexpensive commodity 1Gbps Ethernet and facilitates distributed training on mobile.",
"title": ""
},
{
"docid": "b4c5ddab0cb3e850273275843d1f264f",
"text": "The increase of malware that are exploiting the Internet daily has become a serious threat. The manual heuristic inspection of malware analysis is no longer considered effective and efficient compared against the high spreading rate of malware. Hence, automated behavior-based malware detection using machine learning techniques is considered a profound solution. The behavior of each malware on an emulated (sandbox) environment will be automatically analyzed and will generate behavior reports. These reports will be preprocessed into sparse vector models for further machine learning (classification). The classifiers used in this research are k-Nearest Neighbors (kNN), Naïve Bayes, J48 Decision Tree, Support Vector Machine (SVM), and Multilayer Perceptron Neural Network (MlP). Based on the analysis of the tests and experimental results of all the 5 classifiers, the overall best performance was achieved by J48 decision tree with a recall of 95.9%, a false positive rate of 2.4%, a precision of 97.3%, and an accuracy of 96.8%. In summary, it can be concluded that a proof-of-concept based on automatic behavior-based malware analysis and the use of machine learning techniques could detect malware quite effectively and efficiently.",
"title": ""
},
{
"docid": "e96fddd8058e3dc98eb9f73aa387c9f9",
"text": "There is often the need to perform sentiment classification in a particular domain where no labeled document is available. Although we could make use of a general-purpose off-the-shelf sentiment classifier or a pre-built one for a different domain, the effectiveness would be inferior. In this paper, we explore the possibility of building domain-specific sentiment classifiers with unlabeled documents only. Our investigation indicates that in the word embeddings learned from the unlabeled corpus of a given domain, the distributed word representations (vectors) for opposite sentiments form distinct clusters, though those clusters are not transferable across domains. Exploiting such a clustering structure, we are able to utilize machine learning algorithms to induce a quality domain-specific sentiment lexicon from just a few typical sentiment words (“seeds”). An important finding is that simple linear model based supervised learning algorithms (such as linear SVM) can actually work better than more sophisticated semi-supervised/transductive learning algorithms which represent the state-of-the-art technique for sentiment lexicon induction. The induced lexicon could be applied directly in a lexicon-based method for sentiment classification, but a higher performance could be achieved through a two-phase bootstrapping method which uses the induced lexicon to assign positive/negative sentiment scores to unlabeled documents first, a nd t hen u ses those documents found to have clear sentiment signals as pseudo-labeled examples to train a document sentiment classifier v ia supervised learning algorithms (such as LSTM). On several benchmark datasets for document sentiment classification, our end-to-end pipelined approach which is overall unsupervised (except for a tiny set of seed words) outperforms existing unsupervised approaches and achieves an accuracy comparable to that of fully supervised approaches.",
"title": ""
},
{
"docid": "5a07f2e8b28d788673800ff22a6b99b4",
"text": "Recently , we introduced a software linearization technique for frequency-modulated continuous-wave (FMCW) radar applications using a nonlinear direct digital synthesizer based frequency source. In this letter, we present a method that uses this unconventional, cost efficient, basically nonlinear synthesizer concept, but is capable of linearizing the frequency chirp directly in hardware by means of defined sweep predistortion. Additionally, the concept is extended for the generation of defined nonlinear frequency courses and verified on measurements with a 2.45-GHz FMCW radar prototype",
"title": ""
},
{
"docid": "b552bfedda08c1d040e34472117a15bd",
"text": "Four hundred and fiftynine students from 20 different high school classrooms in Michigan participated in focus group discussions about the character strengths included in the Values in Action Classification. Students were interested in the subject of good character and able to discuss with candor and sophistication instances of each strength. They were especially drawn to the positive traits of leadership, practical intelligence, wisdom, social intelligence, love of learning, spirituality, and the capacity to love and be loved. Students believed that strengths were largely acquired rather than innate and that these strengths developed through ongoing life experience as opposed to formal instruction. They cited an almost complete lack of contemporary role models exemplifying different strengths of character. Implications of these findings for the quantitative assessment of positive traits were discussed, as were implications for designing character education programs for adolescents. We suggest that peers can be an especially important force in encouraging the development and display of good character among youth.",
"title": ""
},
{
"docid": "7916a261319dad5f257a0b8e0fa97fec",
"text": "INTRODUCTION\nPreliminary research has indicated that recreational ketamine use may be associated with marked cognitive impairments and elevated psychopathological symptoms, although no study to date has determined how these are affected by differing frequencies of use or whether they are reversible on cessation of use. In this study we aimed to determine how variations in ketamine use and abstention from prior use affect neurocognitive function and psychological wellbeing.\n\n\nMETHOD\nWe assessed a total of 150 individuals: 30 frequent ketamine users, 30 infrequent ketamine users, 30 ex-ketamine users, 30 polydrug users and 30 controls who did not use illicit drugs. Cognitive tasks included spatial working memory, pattern recognition memory, the Stockings of Cambridge (a variant of the Tower of London task), simple vigilance and verbal and category fluency. Standardized questionnaires were used to assess psychological wellbeing. Hair analysis was used to verify group membership.\n\n\nRESULTS\nFrequent ketamine users were impaired on spatial working memory, pattern recognition memory, Stockings of Cambridge and category fluency but exhibited preserved verbal fluency and prose recall. There were no differences in the performance of the infrequent ketamine users or ex-users compared to the other groups. Frequent users showed increased delusional, dissociative and schizotypal symptoms which were also evident to a lesser extent in infrequent and ex-users. Delusional symptoms correlated positively with the amount of ketamine used currently by the frequent users.\n\n\nCONCLUSIONS\nFrequent ketamine use is associated with impairments in working memory, episodic memory and aspects of executive function as well as reduced psychological wellbeing. 'Recreational' ketamine use does not appear to be associated with distinct cognitive impairments although increased levels of delusional and dissociative symptoms were observed. As no performance decrements were observed in the ex-ketamine users, it is possible that the cognitive impairments observed in the frequent ketamine group are reversible upon cessation of ketamine use, although delusional symptoms persist.",
"title": ""
},
{
"docid": "64221753135508ef3d041e0aab83039a",
"text": "Cryptocurrency platforms such as Bitcoin and Ethereum have become more popular due to decentralized control and the promise of anonymity. Ethereum is particularly powerful due to its support for smart contracts which are implemented through Turing complete scripting languages and digital tokens that represent fungible tradable goods. It is necessary to understand whether de-anonymization is feasible to quantify the promise of anonymity. Cryptocurrencies are increasingly being used in online black markets like Silk Road and ransomware like CryptoLocker and WannaCry. In this paper, we propose a model for persisting transactions from Ethereum into a graph database, Neo4j. We propose leveraging graph compute or analytics against the transactions persisted into a graph database.",
"title": ""
},
{
"docid": "7957ba93e63f753336281fcb31e35cab",
"text": "This paper proposed a method that combines Polar Fourier Transform, color moments, and vein features to retrieve leaf images based on a leaf image. The method is very useful to help people in recognizing foliage plants. Foliage plants are plants that have various colors and unique patterns in the leaf. Therefore, the colors and its patterns are information that should be counted on in the processing of plant identification. To compare the performance of retrieving system to other result, the experiments used Flavia dataset, which is very popular in recognizing plants. The result shows that the method gave better performance than PNN, SVM, and Fourier Transform. The method was also tested using foliage plants with various colors. The accuracy was 90.80% for 50 kinds of plants.",
"title": ""
},
{
"docid": "84b0d19d5d383ea3fd99e20740ebf5d6",
"text": "We propose a robust proactive threshold signature scheme, a multisignature scheme and a blind signature scheme which work in any Gap Diffie-Hellman (GDH) group (where the Computational Diffie-Hellman problem is hard but the Decisional Diffie-Hellman problem is easy). Our constructions are based on the recently proposed GDH signature scheme of Boneh et al. [BLS]. Due to the nice properties of GDH groups and of the base scheme, it turns out that most of our constructions are much simpler, more efficient and have more useful characteristics than similar existing constructions. We support all the proposed schemes with proofs under the appropriate computational assumptions, using the corresponding notions of security.",
"title": ""
},
{
"docid": "6570f9b4f8db85f40a99fb1911aa4967",
"text": "Honey bees have played a major role in the history and development of humankind, in particular for nutrition and agriculture. The most important role of the western honey bee (Apis mellifera) is that of pollination. A large amount of crops consumed throughout the world today are pollinated by the activity of the honey bee. It is estimated that the total value of these crops stands at 155 billion euro annually. The goal of the work outlined in this paper was to use wireless sensor network technology to monitor a colony within the beehive with the aim of collecting image and audio data. These data allows the beekeeper to obtain a much more comprehensive view of the in-hive conditions, an indication of flight direction, as well as monitoring the hive outside of the traditional beekeeping times, i.e. during the night, poor weather, and winter months. This paper outlines the design of a fully autonomous beehive monitoring system which provided image and sound monitoring of the internal chambers of the hive, as well as a warning system for emergency events such as possible piping, dramatically increased hive activity, or physical damage to the hive. The final design included three wireless nodes: a digital infrared camera with processing capabilities for collecting imagery of the hive interior; an external thermal imaging camera node for monitoring the colony status and activity, and an accelerometer and a microphone connected to an off the shelf microcontroller node for processing. The system allows complex analysis and sensor fusion. Some scenarios based on sound processing, image collection, and accelerometers are presented. Power management was implemented which allowed the system to achieve energy neutrality in an outdoor deployment with a 525 × 345 mm solar panel.",
"title": ""
},
{
"docid": "404bd4b3c7756c87805fa286415aac43",
"text": "Although key techniques for next-generation wireless communication have been explored separately, relatively little work has been done to investigate their potential cooperation for performance optimization. To address this problem, we propose a holistic framework for robust 5G communication based on multiple-input-multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM). More specifically, we design a new framework that supports: 1) index modulation based on OFDM (OFDM–M) [1]; 2) sub-band beamforming and channel estimation to achieve massive path gains by exploiting multiple antenna arrays [2]; and 3) sub-band pre-distortion for peak-to-average-power-ratio (PAPR) reduction [3] to significantly decrease the PAPR and communication errors in OFDM-IM by supporting a linear behavior of the power amplifier in the modem. The performance of the proposed framework is evaluated against the state-of-the-art QPSK, OFDM-IM [1] and QPSK-spatiotemporal QPSK-ST [2] schemes. The results show that our framework reduces the bit error rate (BER), mean square error (MSE) and PAPR compared to the baselines by approximately 6–13dB, 8–13dB, and 50%, respectively.",
"title": ""
},
{
"docid": "447f7e2ddc5607019cd53716abbbb4d4",
"text": "In recent years, massive amounts of identified and unidentified facial data have become available—often publicly so—through Web 2.0 applications. So have also the infrastructure and technologies necessary to navigate through those data in real time, matching individuals across online services, independently of their knowledge or consent. In the literature on statistical re-identification [5, 6], an identified database is pinned against an unidentified database in order to recognize individuals in the latter and associate them with information from the former. Many online services make available to visitors identified facial images: social networks such as Facebook and LinkedIn, online services such as Amazon.com profiles, or organizational rosters. Consider Facebook, for example. Most active Facebook users (currently estimated at 1.35 billion monthly active users worldwide [7], with over 250 billion photos uploaded photos [8]) use photos of themselves as their primary profile image. These photos are often identifiable: Facebook has pursued a ‘real identity’ policy, under which members are expected to appear on the network under their real names under penalty of account cancellation [9]. Using tagging features and login security questions, Facebook has encouraged users to associate their and their friends’ names to uploaded photos. Facebook photos are also frequently publicly available. Primary profile photos must be shared with strangers un-",
"title": ""
},
{
"docid": "6c5a5bc775316efc278285d96107ddc6",
"text": "STUDY DESIGN\nRetrospective study of 55 consecutive patients with spinal metastases secondary to breast cancer who underwent surgery.\n\n\nOBJECTIVE\nTo evaluate the predictive value of the Tokuhashi score for life expectancy in patients with breast cancer with spinal metastases.\n\n\nSUMMARY OF BACKGROUND DATA\nThe score, composed of 6 parameters each rated from 0 to 2, has been proposed by Tokuhashi and colleagues for the prognostic assessment of patients with spinal metastases.\n\n\nMETHODS\nA total of 55 patients surgically treated for vertebral metastases secondary to breast cancer were studied. The score was calculated for each patient and, according to Tokuhashi, the patients were divided into 3 groups with different life expectancy according to their total number of scoring points. In a second step, the grouping for prognosis was modified to get a better correlation of the predicted and definitive survival.\n\n\nRESULTS\nApplying the Tokuhashi score for the estimation of life expectancy of patients with breast cancer with vertebral metastases provided very reliable results. However, the original analysis by Tokuhashi showed a limited correlation between predicted and real survival for each prognostic group. Therefore, our patients were divided into modified prognostic groups regarding their total number of scoring points, leading to a higher significance of the predicted prognosis in each group (P < 0.0001), and a better correlation of the predicted and real survival.\n\n\nCONCLUSION\nThe modified Tokuhashi score assists in decision making based on reliable estimators of life expectancy in patients with spinal metastases secondary to breast cancer.",
"title": ""
},
{
"docid": "64406c6b0e45eb49743f0789dcb89029",
"text": "Hand gesture is one of the typical methods used in sign language for non-verbal communication. Sign gestures are a non-verbal visual language, different from the spoken language, but serving the same function. It is often very difficult for the hearing impaired community to communicate their ideas and creativity to the normal humans. This paper presents a system that will not only automatically recognize the hand gestures but also convert it into corresponding speech output so that speaking impaired person can easily communicate with normal people. The gesture to speech system, G2S, has been developed using the skin colour segmentation. The system consists of camera attached to computer that will take images of hand gestures. Image segmentation & feature extraction algorithm is used to recognize the hand gestures of the signer. According to recognized hand gestures, corresponding pre-recorded sound track will be played.",
"title": ""
},
{
"docid": "b5fd22854e75a29507cde380999705a2",
"text": "This study presents a high-efficiency-isolated single-input multiple-output bidirectional (HISMB) converter for a power storage system. According to the power management, the proposed HISMB converter can operate at a step-up state (energy release) and a step-down state (energy storage). At the step-up state, it can boost the voltage of a low-voltage input power source to a high-voltage-side dc bus and middle-voltage terminals. When the high-voltage-side dc bus has excess energy, one can reversely transmit the energy. The high-voltage dc bus can take as the main power, and middle-voltage output terminals can supply powers for individual middle-voltage dc loads or to charge auxiliary power sources (e.g., battery modules). In this study, a coupled-inductor-based HISMB converter accomplishes the bidirectional power control with the properties of voltage clamping and soft switching, and the corresponding device specifications are adequately designed. As a result, the energy of the leakage inductor of the coupled inductor can be recycled and released to the high-voltage-side dc bus and auxiliary power sources, and the voltage stresses on power switches can be greatly reduced. Moreover, the switching losses can be significantly decreased because of all power switches with zero-voltage-switching features. Therefore, the objectives of high-efficiency power conversion, electric isolation, bidirectional energy transmission, and various output voltage with different levels can be obtained. The effectiveness of the proposed HISMB converter is verified by experimental results of a kW-level prototype in practical applications.",
"title": ""
},
{
"docid": "a4dea5e491657e1ba042219401ebcf39",
"text": "Beam scanning arrays typically suffer from scan loss; an increasing degradation in gain as the beam is scanned from broadside toward the horizon in any given scan plane. Here, a metasurface is presented that reduces the effects of scan loss for a leaky-wave antenna (LWA). The metasurface is simple, being composed of an ultrathin sheet of subwavelength split-ring resonators. The leaky-wave structure is balanced, scanning from the forward region, through broadside, and into the backward region, and designed to scan in the magnetic plane. The metasurface is effectively invisible at broadside, where balanced LWAs are most sensitive to external loading. It is shown that the introduction of the metasurface results in increased directivity, and hence, gain, as the beam is scanned off broadside, having an increasing effect as the beam is scanned to the horizon. Simulations show that the metasurface improves the effective aperture distribution at higher scan angles, resulting in a more directive main beam, while having a negligible impact on cross-polarization gain. Experimental validation results show that the scan range of the antenna is increased from $-39 {^{\\circ }} \\leq \\theta \\leq +32 {^{\\circ }}$ to $-64 {^{\\circ }} \\leq \\theta \\leq +70 {^{\\circ }}$ , when loaded with the metasurface, demonstrating a flattened gain profile over a 135° range centered about broadside. Moreover, this scan range occurs over a frequency band spanning from 9 to 15.5 GHz, demonstrating a relative bandwidth of 53% for the metasurface.",
"title": ""
},
{
"docid": "77cfb72acbc2f077c3d9b909b0a79e76",
"text": "In this paper, we analyze two general-purpose encoding types, trees and graphs systematically, focusing on trends over increasingly complex problems. Tree and graph encodings are similar in application but offer distinct advantages and disadvantages in genetic programming. We describe two implementations and discuss their evolvability. We then compare performance using symbolic regression on hundreds of random nonlinear target functions of both 1-dimensional and 8-dimensional cases. Results show the graph encoding has less bias for bloating solutions but is slower to converge and deleterious crossovers are more frequent. The graph encoding however is found to have computational benefits, suggesting it to be an advantageous trade-off between regression performance and computational effort.",
"title": ""
}
] |
scidocsrr
|
02a3b81a7117985ca5b91ab8868070a6
|
Towards Neural Theorem Proving at Scale Anonymous
|
[
{
"docid": "4381ee2e578a640dda05e609ed7f6d53",
"text": "We introduce neural networks for end-to-end differentiable proving of queries to knowledge bases by operating on dense vector representations of symbols. These neural networks are constructed recursively by taking inspiration from the backward chaining algorithm as used in Prolog. Specifically, we replace symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel, thereby combining symbolic reasoning with learning subsymbolic vector representations. By using gradient descent, the resulting neural network can be trained to infer facts from a given incomplete knowledge base. It learns to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove queries, (iii) induce logical rules, and (iv) use provided and induced logical rules for multi-hop reasoning. We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.",
"title": ""
},
{
"docid": "98cc792a4fdc23819c877634489d7298",
"text": "This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors.",
"title": ""
}
] |
[
{
"docid": "9a63a5db2a40df78a436e7be87f42ff7",
"text": "A quantitative, coordinate-based meta-analysis combined data from 354 participants across 22 fMRI studies and one positron emission tomography (PET) study to identify the differences in neural correlates of figurative and literal language processing, and to investigate the role of the right hemisphere (RH) in figurative language processing. Studies that reported peak activations in standard space contrasting figurative vs. literal language processing at whole brain level in healthy adults were included. The left and right IFG, large parts of the left temporal lobe, the bilateral medial frontal gyri (medFG) and an area around the left amygdala emerged for figurative language processing across studies. Conditions requiring exclusively literal language processing did not activate any selective regions in most of the cases, but if so they activated the cuneus/precuneus, right MFG and the right IPL. No general RH advantage for metaphor processing could be found. On the contrary, significant clusters of activation for metaphor conditions were mostly lateralized to the left hemisphere (LH). Subgroup comparisons between experiments on metaphors, idioms, and irony/sarcasm revealed shared activations in left frontotemporal regions for idiom and metaphor processing. Irony/sarcasm processing was correlated with activations in midline structures such as the medFG, ACC and cuneus/precuneus. To test the graded salience hypothesis (GSH, Giora, 1997), novel metaphors were contrasted against conventional metaphors. In line with the GSH, RH involvement was found for novel metaphors only. Here we show that more analytic, semantic processes are involved in metaphor comprehension, whereas irony/sarcasm comprehension involves theory of mind processes.",
"title": ""
},
{
"docid": "57c705e710f99accab3d9242fddc5ac8",
"text": "Although much research has been conducted in the area of organizational commitment, few studies have explicitly examined how organizations facilitate commitment among members. Using a sample of 291 respondents from 45 firms, the results of this study show that rigorous recruitment and selection procedures and a strong, clear organizational value system are associated with higher levels of employee commitment based on internalization and identification. Strong organizational career and reward systems are related to higher levels of instrumental or compliance-based commitment.",
"title": ""
},
{
"docid": "f013f58d995693a79cd986a028faff38",
"text": "We present the design and implementation of a system for axiomatic programming, and its application to mathematical software construction. Key novelties include a direct support for user-defined axioms establishing local equalities between types, and overload resolution based on equational theories and user-defined local axioms. We illustrate uses of axioms, and their organization into concepts, in structured generic programming as practiced in computational mathematical systems.",
"title": ""
},
{
"docid": "f97d81a177ca629da5fe0d707aec4b8a",
"text": "This paper highlights the two machine learning approaches, viz. Rough Sets and Decision Trees (DT), for the prediction of Learning Disabilities (LD) in school-age children, with an emphasis on applications of data mining. Learning disability prediction is a very complicated task. By using these two approaches, we can easily and accurately predict LD in any child and also we can determine the best classification method. In this study, in rough sets the attribute reduction and classification are performed using Johnson’s reduction algorithm and Naive Bayes algorithm respectively for rule mining and in construction of decision trees, J48 algorithm is used. From this study, it is concluded that, the performance of decision trees are considerably poorer in several important aspects compared to rough sets. It is found that, for selection of attributes, rough sets is very useful especially in the case of inconsistent data and it also gives the information about the attribute correlation which is very important in the case of learning disability.",
"title": ""
},
{
"docid": "5d154a62b22415cbedd165002853315b",
"text": "Unaccompanied immigrant children are a highly vulnerable population, but research into their mental health and psychosocial context remains limited. This study elicited lawyers’ perceptions of the mental health needs of unaccompanied children in U.S. deportation proceedings and their mental health referral practices with this population. A convenience sample of 26 lawyers who work with unaccompanied children completed a semi-structured, online survey. Lawyers surveyed frequently had mental health concerns about their unaccompanied child clients, used clinical and lay terminology to describe symptoms, referred for both expert testimony and treatment purposes, frequently encountered barriers to accessing appropriate services, and expressed interest in mental health training. The results of this study suggest a complex intersection between the legal and mental health needs of unaccompanied children, and the need for further research and improved service provision in support of their wellbeing.",
"title": ""
},
{
"docid": "d6586a261e22e9044425cb27462c3435",
"text": "In this work, we develop a planner for high-speed navigation in unknown environments, for example reaching a goal in an unknown building in minimum time, or flying as fast as possible through a forest. This planning task is challenging because the distribution over possible maps, which is needed to estimate the feasibility and cost of trajectories, is unknown and extremely hard to model for real-world environments. At the same time, the worst-case assumptions that a receding-horizon planner might make about the unknown regions of the map may be overly conservative, and may limit performance. Therefore, robots must make accurate predictions about what will happen beyond the map frontiers to navigate as fast as possible. To reason about uncertainty in the map, we model this problem as a POMDP and discuss why it is so difficult given that we have no accurate probability distribution over real-world environments. We then present a novel method of predicting collision probabilities based on training data, which compensates for the missing environment distribution and provides an approximate solution to the POMDP. Extending our previous work, the principal result of this paper is that by using a Bayesian non-parametric learning algorithm that encodes formal safety constraints as a prior over collision probabilities, our planner seamlessly reverts to safe behavior when it encounters a novel environment for which it has no relevant training data. This strategy generalizes our method across all environment types, including those for which we have training data as well as those for which we do not. In familiar environment types with dense training data, we show an 80% speed improvement compared to a planner that is constrained to guarantee safety. In experiments, our planner has reached over 8 m/s in unknown cluttered indoor spaces. Video of our experimental demonstration is available at http://groups.csail.mit.edu/ rrg/bayesian_learning_high_speed_nav.",
"title": ""
},
{
"docid": "7b385edcbb0e3fa5bfffca2e1a9ecf13",
"text": "A goal of runtime software-fault monitoring is to observe software behavior to determine whether it complies with its intended behavior. Monitoring allows one to analyze and recover from detected faults, providing additional defense against catastrophic failure. Although runtime monitoring has been in use for over 30 years, there is renewed interest in its application to fault detection and recovery, largely because of the increasing complexity and ubiquitous nature of software systems. We present taxonomy that developers and researchers can use to analyze and differentiate recent developments in runtime software fault-monitoring approaches. The taxonomy categorizes the various runtime monitoring research by classifying the elements that are considered essential for building a monitoring system, i.e., the specification language used to define properties; the monitoring mechanism that oversees the program's execution; and the event handler that captures and communicates monitoring results. After describing the taxonomy, the paper presents the classification of the software-fault monitoring systems described in the literature.",
"title": ""
},
{
"docid": "5371c5b8e9db3334ed144be4354336cc",
"text": "E-learning is related to virtualised distance learning by means of electronic communication mechanisms, using its functionality as a support in the process of teaching-learning. When the learning process becomes computerised, educational data mining employs the information generated from the electronic sources to enrich the learning model for academic purposes. To provide support to e-learning systems, cloud computing is set as a natural platform, as it can be dynamically adapted by presenting a scalable system for the changing necessities of the computer resources over time. It also eases the implementation of data mining techniques to work in a distributed scenario, regarding the large databases generated from e-learning. We give an overview of the current state of the structure of cloud computing, and we provide details of the most common infrastructures that have been developed for such a system. We also present some examples of e-learning approaches for cloud computing, and finally, we discuss the suitability of this environment for educational data mining, suggesting the migration of this approach to this computational scenario.",
"title": ""
},
{
"docid": "768749e22e03aecb29385e39353dd445",
"text": "Query logs are of great interest for scientists and companies for research, statistical and commercial purposes. However, the availability of query logs for secondary uses raises privacy issues since they allow the identification and/or revelation of sensitive information about individual users. Hence, query anonymization is crucial to avoid identity disclosure. To enable the publication of privacy-preserved -but still usefulquery logs, in this paper, we present an anonymization method based on semantic microaggregation. Our proposal aims at minimizing the disclosure risk of anonymized query logs while retaining their semantics as much as possible. First, a method to map queries to their formal semantics extracted from the structured categories of the Open Directory Project is presented. Then, a microaggregation method is adapted to perform a semantically-grounded anonymization of query logs. To do so, appropriate semantic similarity and semantic aggregation functions are proposed. Experiments performed using real AOL query logs show that our proposal better retains the utility of anonymized query logs than other related works, while also minimizing the disclosure risk.",
"title": ""
},
{
"docid": "85605e6617a68dff216f242f31306eac",
"text": "Steered molecular dynamics (SMD) permits efficient investigations of molecular processes by focusing on selected degrees of freedom. We explain how one can, in the framework of SMD, employ Jarzynski's equality (also known as the nonequilibrium work relation) to calculate potentials of mean force (PMF). We outline the theory that serves this purpose and connects nonequilibrium processes (such as SMD simulations) with equilibrium properties (such as the PMF). We review the derivation of Jarzynski's equality, generalize it to isobaric--isothermal processes, and discuss its implications in relation to the second law of thermodynamics and computer simulations. In the relevant regime of steering by means of stiff springs, we demonstrate that the work on the system is Gaussian-distributed regardless of the speed of the process simulated. In this case, the cumulant expansion of Jarzynski's equality can be safely terminated at second order. We illustrate the PMF calculation method for an exemplary simulation and demonstrate the Gaussian nature of the resulting work distribution.",
"title": ""
},
{
"docid": "d509cb384ecddafa0c4f866882af2c77",
"text": "On 9 January 1857, a large earthquake of magnitude 7.9 occurred on the San Andreas fault, with rupture initiating at Parkfield in central California and propagating in a southeasterly direction over a distance of more than 360 km. Such a unilateral rupture produces significant directivity toward the San Fernando and Los Angeles basins. Indeed, newspaper reports of sloshing observed in the Los Angeles river point to long-duration (1–2 min) and long-period (2–8 sec) shaking. If such an earthquake were to happen today, it could impose significant seismic demand on present-day tall buildings. Using state-of-the-art computational tools in seismology and structural engineering, validated using data from the 17 January 1994, magnitude 6.7 Northridge earthquake, we determine the damage to an existing and a new 18story steel moment-frame building in southern California due to ground motion from two hypothetical magnitude 7.9 earthquakes on the San Andreas fault. Our study indicates that serious damage occurs in these buildings at many locations in the region in one of the two scenarios. For a north-to-south rupture scenario, the peak velocity is of the order of 1 m • sec 1 in the Los Angeles basin, including downtown Los Angeles, and 2 m • sec 1 in the San Fernando valley, while the peak displacements are of the order of 1 m and 2 m in the Los Angeles basin and San Fernando valley, respectively. For a south-to-north rupture scenario the peak velocities and displacements are reduced by a factor of roughly 2.",
"title": ""
},
{
"docid": "d529b4f1992f438bb3ce4373090f8540",
"text": "One conventional tool for interpolating surfaces over scattered data, the thin-plate spline, has an elegant algebra expressing the dependence of the physical bending energy of a thin metal plate on point constraints. For interpolation of a surface over a fixed set of nodes in the plane, the bending energy is a quadratic form in the heights assigned to the surface. The spline is the superposition of eigenvectors of the bending energy matrix, of successively larger physical scales, over a tilted flat plane having no bending energy at all. When these splines are paired, one representing the x-coordinate of another form and the other the y-coordinate, they aid greatly in the modeling of biological shape change as deformation. In this context, the pair becomes an interpolation map from RZ to R' relating two sets of landmark points. The spline maps decompose, in the same way as the spline surfaces, into a linear part (an affine transformation) together with the superposition of principal warps, which are geometrically independent, affine-free deformations of progressively smaller geometrical scales. The warps decompose an empirical deformation into orthogonal features more or less as a conventional orthogonal functional analysis decomposes the single scene. This paper demonstrates the decomposition of deformations by principal warps, extends the method to deal with curving edges between landmarks, relates this formalism to other applications of splines current in computer vision, and indicates how they might aid in the extraction of features for analysis, comparison, and diagnosis of biological and medical images.",
"title": ""
},
{
"docid": "aeaee20b184e346cd469204dcf49d815",
"text": "Naresh Kumari , Nitin Malik , A. N. Jha , Gaddam Mallesham #*4 # Department of Electrical, Electronics and Communication Engineering, The NorthCap University, Gurgaon, India 1 nareshkumari@ncuindia.edu 2 nitinmalik77@gmail.com * Ex-Professor, Electrical Engineering, Indian Institute of Technology, New Delhi, India 3 anjha@ee.iitd.ac.in #* Department of Electrical Engineering, Osmania University, Hyderabad, India 4 gm.eed.cs@gmail.com",
"title": ""
},
{
"docid": "6ebce4adb3693070cac01614078d68fc",
"text": "The recent COCO object detection dataset presents several new challenges for object detection. In particular, it contains objects at a broad range of scales, less prototypical images, and requires more precise localization. To address these challenges, we test three modifications to the standard Fast R-CNN object detector: (1) skip connections that give the detector access to features at multiple network layers, (2) a foveal structure to exploit object context at multiple object resolutions, and (3) an integral loss function and corresponding network adjustment that improve localization. The result of these modifications is that information can flow along multiple paths in our network, including through features from multiple network layers and from multiple object views. We refer to our modified classifier as a ‘MultiPath’ network. We couple our MultiPath network with DeepMask object proposals, which are well suited for localization and small objects, and adapt our pipeline to predict segmentation masks in addition to bounding boxes. The combined system improves results over the baseline Fast R-CNN detector with Selective Search by 66% overall and by 4× on small objects. It placed second in both the COCO 2015 detection and segmentation challenges.",
"title": ""
},
{
"docid": "28e8bc5b0d1fa9fa46b19c8c821a625c",
"text": "This work aims to develop a smart LED lighting system, which is remotely controlled by Android apps via handheld devices, e.g., smartphones, tablets, and so forth. The status of energy use is reflected by readings displayed on a handheld device, and it is treated as a criterion in the lighting mode design of a system. A multimeter, a wireless light dimmer, an IR learning remote module, etc. are connected to a server by means of RS 232/485 and a human computer interface on a touch screen. The wireless data communication is designed to operate in compliance with the ZigBee standard, and signal processing on sensed data is made through a self adaptive weighted data fusion algorithm. A low variation in data fusion together with a high stability is experimentally demonstrated in this work. The wireless light dimmer as well as the IR learning remote module can be instructed directly by command given on the human computer interface, and the reading on a multimeter can be displayed thereon via the server. This proposed smart LED lighting system can be remotely controlled and self learning mode can be enabled by a single handheld device via WiFi transmission. Hence, this proposal is validated as an approach to power monitoring for home appliances, and is demonstrated as a digital home network in consideration of energy efficiency.",
"title": ""
},
{
"docid": "645f320514b0fa5a8b122c4635bc3df6",
"text": "A critical decision problem for top management, and the focus of this study, is whether the CEO (chief executive officer) and CIO (chief information officer) should commit their time to formal planning with the expectation of producing an information technology (IT)-based competitive advantage. Using the perspective of the resource-based view, a model is presented that examines how strategic IT alignment can produce enhanced organizational strategies that yield competitive advantage. One hundred sixty-one CIOs provided data using a postal survey. Results supported seven of the eight hypotheses. They showed that information intensity is an important antecedent to strategic IT alignment, that strategic IT alignment is best explained by multiple constructs which operationalize both process and content measures, and that alignment between the IT plan and the business plan is significantly related to the use of IT for competitive advantage. Study results raise questions about the effect of CEO participation, which appears to be the weak link in the process, and also about the perception of the CIO on the importance of CEO involvement. The paper contributes to our understanding of how knowledge sharing in the alignment process contributes to the creation of superior organizational strategies, provides a framework of the alignment-performance relationship, and furnishes several new constructs. Subject Areas: Competitive Advantage, Information Systems Planning, Knowledge Sharing, Resource-Based View, Strategic Planning, and Structural Equation Modeling.",
"title": ""
},
{
"docid": "a85511bfaa47701350f4d97ec94453fd",
"text": "We propose a novel expression transfer method based on an analysis of the frequency of multi-expression facial images. We locate the facial features automatically and describe the shape deformations between a neutral expression and non-neutral expressions. The subtle expression changes are important visual clues to distinguish different expressions. These changes are more salient in the frequency domain than in the image domain. We extract the subtle local expression deformations for the source subject, coded in the wavelet decomposition. This information about expressions is transferred to a target subject. The resulting synthesized image preserves both the facial appearance of the target subject and the expression details of the source subject. This method is extended to dynamic expression transfer to allow a more precise interpretation of facial expressions. Experiments on Japanese Female Facial Expression (JAFFE), the extended Cohn-Kanade (CK+) and PIE facial expression databases show the superiority of our method over the state-of-the-art method.",
"title": ""
},
{
"docid": "bb0dce17b5810ebd7173ea35545c3bf6",
"text": "Five studies demonstrated that highly guilt-prone people may avoid forming interdependent partnerships with others whom they perceive to be more competent than themselves, as benefitting a partner less than the partner benefits one's self could trigger feelings of guilt. Highly guilt-prone people who lacked expertise in a domain were less willing than were those low in guilt proneness who lacked expertise in that domain to create outcome-interdependent relationships with people who possessed domain-specific expertise. These highly guilt-prone people were more likely than others both to opt to be paid on their performance alone (Studies 1, 3, 4, and 5) and to opt to be paid on the basis of the average of their performance and that of others whose competence was more similar to their own (Studies 2 and 5). Guilt proneness did not predict people's willingness to form outcome-interdependent relationships with potential partners who lacked domain-specific expertise (Studies 4 and 5). It also did not predict people's willingness to form relationships when poor individual performance would not negatively affect partner outcomes (Study 4). Guilt proneness therefore predicts whether, and with whom, people develop interdependent relationships. The findings also demonstrate that highly guilt-prone people sacrifice financial gain out of concern about how their actions would influence others' welfare. As such, the findings demonstrate a novel way in which guilt proneness limits free-riding and therefore reduces the incidence of potentially unethical behavior. Lastly, the findings demonstrate that people who lack competence may not always seek out competence in others when choosing partners.",
"title": ""
},
{
"docid": "a9a8baf6dfb2526d75b0d7e49bb9b138",
"text": "Many classification problems require decisions among a large number of competing classes. These tasks, however, are not handled well by general purpose learning methods and are usually addressed in an ad-hoc fashion. We suggest a general approach – a sequential learning model that utilizes classifiers to sequentially restrict the number of competing classes while maintaining, with high probability, the presence of the true outcome in the candidates set. Some theoretical and computational properties of the model are discussed and we argue that these are important in NLP-like domains. The advantages of the model are illustrated in an experiment in partof-speech tagging.",
"title": ""
},
{
"docid": "890236dc21eef6d0523ee1f5e91bf784",
"text": "Perhaps the most amazing property of these word embeddings is that somehow these vector encodings effectively capture the semantic meanings of the words. The question one might ask is how or why? The answer is that because the vectors adhere surprisingly well to our intuition. For instance, words that we know to be synonyms tend to have similar vectors in terms of cosine similarity and antonyms tend to have dissimilar vectors. Even more surprisingly, word vectors tend to obey the laws of analogy. For example, consider the analogy ”Woman is to queen as man is to king”. It turns out that",
"title": ""
}
] |
scidocsrr
|
79746946cd66c344af505c1977c9d15d
|
A 12-bit 20 MS/s 56.3 mW Pipelined ADC With Interpolation-Based Nonlinear Calibration
|
[
{
"docid": "96d0cfd6349e02a90528b40c5e3decc6",
"text": "A 16-bit 125 MS/s pipeline analog-to-digital converter (ADC) implemented in a 0.18 ¿m CMOS process is presented in this paper. A SHA-less 4-bit front-end is used to achieve low power and minimize the size of the input sampling capacitance in order to ease drivability. The ADC includes foreground factory digital calibration to correct for capacitor mismatches and dithering that can be optionally enabled to improve small-signal linearity. This ADC achieves an SNR of 78.7 dB, an SNDR of 78.6 dB and an SFDR of 96 dB with a 30 MHz input signal, while maintaining an SNR > 76 dB and an SFDR > 85 dB up to 150 MHz input signals. Further, with dithering enabled the worst spur is <-98 dB for inputs below -4 dBFS at 100 MHz IF. The ADC consumes 385 mW from a 1.8 V supply.",
"title": ""
}
] |
[
{
"docid": "4d396614420b24265d05b265b7ae6cd5",
"text": "The objective of this study was to characterise the antagonistic activity of cellular components of potential probiotic bacteria isolated from the gut of healthy rohu (Labeo rohita), a tropical freshwater fish, against the fish pathogen, Aeromonas hydrophila. Three potential probiotic strains (referred to as R1, R2, and R5) were screened using a well diffusion, and their antagonistic activity against A. hydrophila was determined. Biochemical tests and 16S rRNA gene analysis confirmed that R1, R2, and R5 were Lactobacillus plantarum VSG3, Pseudomonas aeruginosa VSG2, and Bacillus subtilis VSG1, respectively. Four different fractions of cellular components (i.e. the whole-cell product, heat-killed whole-cell product [HKWCP], intracellular product [ICP], and extracellular product) of these selected strains were effective in an in vitro sensitivity test against 6 A. hydrophila strains. Among the cellular components, the ICP of R1, HKWCP of R2, and ICP of R5 exhibited the strongest antagonistic activities, as evidenced by their inhibition zones. The antimicrobial compounds from these selected cellular components were partially purified by thin-layer and high-performance liquid chromatography, and their properties were analysed. The ranges of pH stability of the purified compounds were wide (3.0-10.0), and compounds were thermally stable up to 90 °C. Considering these results, isolated probiotic strains may find potential applications in the prevention and treatment of aquatic aeromonosis.",
"title": ""
},
{
"docid": "66c49b0dbdbdf29ace0f60839b867e43",
"text": "The job shop scheduling problem with the makespan criterion is a certain NP-hard case from OR theory having excellent practical applications. This problem, having been examined for years, is also regarded as an indicator of the quality of advanced scheduling algorithms. In this paper we provide a new approximate algorithm that is based on the big valley phenomenon, and uses some elements of so-called path relinking technique as well as new theoretical properties of neighbourhoods. The proposed algorithm owns, unprecedented up to now, accuracy, obtainable in a quick time on a PC, which has been confirmed after wide computer tests.",
"title": ""
},
{
"docid": "5fe43f0b23b0cfd82b414608e60db211",
"text": "The Distress Analysis Interview Corpus (DAIC) contains clinical interviews designed to support the diagnosis of psychological distress conditions such as anxiety, depression, and post traumatic stress disorder. The interviews are conducted by humans, human controlled agents and autonomous agents, and the participants include both distressed and non-distressed individuals. Data collected include audio and video recordings and extensive questionnaire responses; parts of the corpus have been transcribed and annotated for a variety of verbal and non-verbal features. The corpus has been used to support the creation of an automated interviewer agent, and for research on the automatic identification of psychological distress.",
"title": ""
},
{
"docid": "1ae3eb81ae75f6abfad4963ee0056be5",
"text": "Due to the shared responsibility model of clouds, tenants have to manage the security of their workloads and data. Developing security solutions using VMs or containers creates further problems as these resources also need to be secured. In this paper, we advocate for taking a serverless approach by proposing six serverless design patterns to build security services in the cloud. For each design pattern, we describe the key advantages and present applications and services utilizing the pattern. Using the proposed patterns as building blocks, we introduce a threat-intelligence platform that collects logs from various sources, alerts malicious activities, and takes actions against such behaviors. We also discuss the limitations of serverless design and how future implementations can overcome those limitations.",
"title": ""
},
{
"docid": "c69e002a71132641947d8e30bb2e74f7",
"text": "In this paper, we investigate a new stealthy attack simultaneously compromising actuators and sensors. This attack is referred to as coordinated attack. We show that the coordinated attack is capable of deriving the system states far away from the desired without being detected. Furthermore, designing such an attack practically does not require knowledge on target systems, which makes the attack much more dangerous compared to the other known attacks. Also, we present a method to detect the coordinated attack. To validate the effect of the proposed attack, we carry out experiments using a quadrotor.",
"title": ""
},
{
"docid": "023ad4427627e7bdb63ba5e15c3dff32",
"text": "Recent works have been shown effective in using neural networks for Chinese word segmentation. However, these models rely on large-scale data and are less effective for low-resource datasets because of insufficient training data. Thus, we propose a transfer learning method to improve low-resource word segmentation by leveraging high-resource corpora. First, we train a teacher model on high-resource corpora and then use the learned knowledge to initialize a student model. Second, a weighted data similarity method is proposed to train the student model on low-resource data with the help of highresource corpora. Finally, given that insufficient data puts forward higher requirements for feature extraction, we propose a novel neural network which improves feature learning. Experiment results show that our work significantly improves the performance on low-resource datasets: 2.3% and 1.5% F-score on PKU and CTB datasets. Furthermore, this paper achieves state-of-the-art results: 96.1%, and 96.2% F-score on PKU and CTB datasets1. Besides, we explore an asynchronous parallel method on neural word segmentation to speed up training. The parallel method accelerates training substantially and is almost five times faster than a serial mode.",
"title": ""
},
{
"docid": "e68fc0a0522f7cd22c7071896263a1f4",
"text": "OBJECTIVES\nThe aim of this study was to evaluate the costs of subsidized care for an adult population provided by private and public sector dentists.\n\n\nMETHODS\nA sample of 210 patients was drawn systematically from the waiting list for nonemergency dental treatment in the city of Turku. Questionnaire data covering sociodemographic background, dental care utilization and marginal time cost estimates were combined with data from patient registers on treatment given. Information was available on 104 patients (52 from each of the public and the private sectors).\n\n\nRESULTS\nThe overall time taken to provide treatment was 181 days in the public sector and 80 days in the private sector (P<0.002). On average, public sector patients had significantly (P < 0.01) more dental visits (5.33) than private sector patients (3.47), which caused higher visiting fees. In addition, patients in the public sector also had higher other out-of-pocket costs than in the private sector. Those who needed emergency dental treatment during the waiting time for comprehensive care had significantly more costly treatment and higher total costs than the other patients. Overall time required for dental visits significantly increased total costs. The total cost of dental care in the public sector was slightly higher (P<0.05) than in the private sector.\n\n\nCONCLUSIONS\nThere is no direct evidence of moral hazard on the provider side from this study. The observed cost differences between the two sectors may indicate that private practitioners could manage their publicly funded patients more quickly than their private paying patients. On the other hand, private dentists providing more treatment per visit could be explained by private dentists providing more than is needed by increasing the content per visit.",
"title": ""
},
{
"docid": "d956c805ee88d1b0ca33ce3f0f838441",
"text": "The task of relation classification in the biomedical domain is complex due to the presence of samples obtained from heterogeneous sources such as research articles, discharge summaries, or electronic health records. It is also a constraint for classifiers which employ manual feature engineering. In this paper, we propose a convolutional recurrent neural network (CRNN) architecture that combines RNNs and CNNs in sequence to solve this problem. The rationale behind our approach is that CNNs can effectively identify coarse-grained local features in a sentence, while RNNs are more suited for long-term dependencies. We compare our CRNN model with several baselines on two biomedical datasets, namely the i2b22010 clinical relation extraction challenge dataset, and the SemEval-2013 DDI extraction dataset. We also evaluate an attentive pooling technique and report its performance in comparison with the conventional max pooling method. Our results indicate that the proposed model achieves state-of-the-art performance on both datasets.1",
"title": ""
},
{
"docid": "8b49149b3288b9565263b7c4d6978378",
"text": "This paper produces a baseline security analysis of the Cloud Computing Operational Environment in terms of threats, vulnerabilities and impacts. An analysis is conducted and the top three threats are identified with recommendations for practitioners. The conclusion of the analysis is that the most serious threats are non-technical and can be solved via management processes rather than technical countermeasures.",
"title": ""
},
{
"docid": "c27b61685ae43c7cd1b60ca33ab209df",
"text": "The establishment of damper settings that provide an optimal compromise between wobble- and weave-mode damping is discussed. The conventional steering damper is replaced with a network of interconnected mechanical components comprised of springs, dampers and inerters - that retain the virtue of the damper, while improving the weave-mode performance. The improved performance is due to the fact that the network introduces phase compensation between the relative angular velocity of the steering system and the resulting steering technique",
"title": ""
},
{
"docid": "7f848facaa535d53e7a6fe7aa2435473",
"text": "The data structure used to represent image information can be critical to the successful completion of an image processing task. One structure that has attracted considerable attention is the image pyramid This consists of a set of lowpass or bandpass copies of an image, each representing pattern information of a different scale. Here we describe a variety of pyramid methods that we have developed for image data compression, enhancement, analysis and graphics. ©1984 RCA Corporation Final manuscript received November 12, 1984 Reprint Re-29-6-5 that can perform most of the routine visual tasks that humans do effortlessly. It is becoming increasingly clear that the format used to represent image data can be as critical in image processing as the algorithms applied to the data. A digital image is initially encoded as an array of pixel intensities, but this raw format is not suited to most asks. Alternatively, an image may be represented by its Fourier transform, with operations applied to the transform coefficients rather than to the original pixel values. This is appropriate for some data compression and image enhancement tasks, but inappropriate for others. The transform representation is particularly unsuited for machine vision and computer graphics, where the spatial location of pattem elements is critical. Recently there has been a great deal of interest in representations that retain spatial localization as well as localization in the spatial—frequency domain. This is achieved by decomposing the image into a set of spatial frequency bandpass component images. Individual samples of a component image represent image pattern information that is appropriately localized, while the bandpassed image as a whole represents information about a particular fineness of detail or scale. There is evidence that the human visual system uses such a representation, 1 and multiresolution schemes are becoming increasingly popular in machine vision and in image processing in general. The importance of analyzing images at many scales arises from the nature of images themselves. Scenes in the world contain objects of many sizes, and these objects contain features of many sizes. Moreover, objects can be at various distances from the viewer. As a result, any analysis procedure that is applied only at a single scale may miss information at other scales. The solution is to carry out analyses at all scales simultaneously. Convolution is the basic operation of most image analysis systems, and convolution with large weighting functions is a notoriously expensive computation. In a multiresolution system one wishes to perform convolutions with kernels of many sizes, ranging from very small to very large. and the computational problems appear forbidding. Therefore one of the main problems in working with multiresolution representations is to develop fast and efficient techniques. Members of the Advanced Image Processing Research Group have been actively involved in the development of multiresolution techniques for some time. Most of the work revolves around a representation known as a \"pyramid,\" which is versatile, convenient, and efficient to use. We have applied pyramid-based methods to some fundamental problems in image analysis, data compression, and image manipulation.",
"title": ""
},
{
"docid": "dc418c7add2456b08bc3a6f15b31da9f",
"text": "In professional search environments, such as patent search or legal search, search tasks have unique characteristics: 1) users interactively issue several queries for a topic, and 2) users are willing to examine many retrieval results, i.e., there is typically an emphasis on recall. Recent surveys have also verified that professional searchers continue to have a strong preference for Boolean queries because they provide a record of what documents were searched. To support this type of professional search, we propose a novel Boolean query suggestion technique. Specifically, we generate Boolean queries by exploiting decision trees learned from pseudo-labeled documents and rank the suggested queries using query quality predictors. We evaluate our algorithm in simulated patent and medical search environments. Compared with a recent effective query generation system, we demonstrate that our technique is effective and general.",
"title": ""
},
{
"docid": "633d32667221f53def4558db23a8b8af",
"text": "In this paper we present, ARCTREES, a novel way of visualizing hierarchical and non-hierarchical relations within one interactive visualization. Such a visualization is challenging because it must display hierarchical information in a way that the user can keep his or her mental map of the data set and include relational information without causing misinterpretation. We propose a hierarchical view derived from traditional Treemaps and augment this view with an arc diagram to depict relations. In addition, we present interaction methods that allow the exploration of the data set using Focus+Context techniques for navigation. The development was motivated by a need for understanding relations in structured documents but it is also useful in many other application domains such as project management and calendars.",
"title": ""
},
{
"docid": "c2ac1c1f08e7e4ccba14ea203acba661",
"text": "This paper describes an approach to determine a layout for the order picking area in warehouses, such that the average travel distance for the order pickers is minimized. We give analytical formulas by which the average length of an order picking route can be calculated for two different routing policies. The optimal layout can be determined by using such formula as an objective function in a non-linear programming model. The optimal number of aisles in an order picking area appears to depend strongly on the required storage space and the pick list size.",
"title": ""
},
{
"docid": "c10a83c838f59adeb50608d5b96c0fbc",
"text": "Robots are typically equipped with multiple complementary sensors such as cameras and laser range finders. Camera generally provides dense 2D information while range sensors give sparse and accurate depth information in the form of a set of 3D points. In order to represent the different data sources in a common coordinate system, extrinsic calibration is needed. This paper presents a pipeline for extrinsic calibration a zed setero camera with Velodyne LiDAR puck using a novel self-made 3D marker whose edges can be robustly detected in the image and 3d point cloud. Our approach first estimate the large sensor displacement using just a single frame. then we optimize the coarse results by finding the best align of edges in order to obtain a more accurate calibration. Finally, the ratio of the 3D points correctly projected onto proper image segments is used to evaluate the accuracy of calibration.",
"title": ""
},
{
"docid": "eda3987f781263615ccf53dd9a7d1a27",
"text": "The study gives a synopsis over condition monitoring methods both as a diagnostic tool and as a technique for failure identification in high voltage induction motors in industry. New running experience data for 483 motor units with 6135 unit years are registered and processed statistically, to reveal the connection between motor data, protection and condition monitoring methods, maintenance philosophy and different types of failures. The different types of failures are further analyzed to failure-initiators, -contributors and -underlying causes. The results have been compared with those of a previous survey, IEEE Report of Large Motor Reliability Survey of Industrial and Commercial Installations, 1985. In the present survey the motors are in the range of 100 to 1300 kW, 47% of them between 100 and 500 kW.",
"title": ""
},
{
"docid": "f36348f2909a9642c18590fca6c9b046",
"text": "This study explores the use of data mining methods to detect fraud for on e-ledgers through financial statements. For this purpose, data set were produced by rule-based control application using 72 sample e-ledger and error percentages were calculated and labeled. The financial statements created from the labeled e-ledgers were trained by different data mining methods on 9 distinguishing features. In the training process, Linear Regression, Artificial Neural Networks, K-Nearest Neighbor algorithm, Support Vector Machine, Decision Stump, M5P Tree, J48 Tree, Random Forest and Decision Table were used. The results obtained are compared and interpreted.",
"title": ""
},
{
"docid": "7c11bd23338b6261f44319198fcdc082",
"text": "Zooplankton are quite significant to the ocean ecosystem for stabilizing balance of the ecosystem and keeping the earth running normally. Considering the significance of zooplantkon, research about zooplankton has caught more and more attentions. And zooplankton recognition has shown great potential for science studies and mearsuring applications. However, manual recognition on zooplankton is labour-intensive and time-consuming, and requires professional knowledge and experiences, which can not scale to large-scale studies. Deep learning approach has achieved remarkable performance in a number of object recognition benchmarks, often achieveing the current best performance on detection or classification tasks and the method demonstrates very promising and plausible results in many applications. In this paper, we explore a deep learning architecture: ZooplanktoNet to classify zoolankton automatically and effectively. The deep network is characterized by capturing more general and representative features than previous predefined feature extraction algorithms in challenging classification. Also, we incorporate some data augmentation to aim at reducing the overfitting for lacking of zooplankton images. And we decide the zooplankton class according to the highest score in the final predictions of ZooplanktoNet. Experimental results demonstrate that ZooplanktoNet can solve the problem effectively with accuracy of 93.7% in zooplankton classification.",
"title": ""
},
{
"docid": "c86aad62e950d7c10f93699d421492d5",
"text": "Carotid intima-media thickness (CIMT) is a good surrogate for atherosclerosis. Hyperhomocysteinemia is an independent risk factor for cardiovascular diseases. We aim to investigate the relationships between homocysteine (Hcy) related biochemical indexes and CIMT, the associations between Hcy related SNPs and CIMT, as well as the potential gene–gene interactions. The present study recruited full siblings (186 eligible families with 424 individuals) with no history of cardiovascular events from a rural area of Beijing. We examined CIMT, intima-media thickness for common carotid artery (CCA-IMT) and carotid bifurcation, tested plasma levels for Hcy, vitamin B6 (VB6), vitamin B12 (VB12) and folic acid (FA), and genotyped 9 SNPs on MTHFR, MTR, MTRR, BHMT, SHMT1, CBS genes. Associations between SNPs and biochemical indexes and CIMT indexes were analyzed using family-based association test analysis. We used multi-level mixed-effects regression model to verify SNP-CIMT associations and to explore the potential gene–gene interactions. VB6, VB12 and FA were negatively correlated with CIMT indexes (p < 0.05). rs2851391 T allele was associated with decreased plasma VB12 levels (p = 0.036). In FABT, CBS rs2851391 was significantly associated with CCA-IMT (p = 0.021) and CIMT (p = 0.019). In multi-level mixed-effects regression model, CBS rs2851391 was positively significantly associated with CCA-IMT (Coef = 0.032, se = 0.009, raw p < 0.001) after Bonferoni correction (corrected α = 0.0056). Gene–gene interactions were found between CBS rs2851391 and BHMT rs10037045 for CCA-IMT (p = 0.011), as well as between CBS rs2851391 and MTR rs1805087 for CCA-IMT (p = 0.007) and CIMT (p = 0.022). Significant associations are found between Hcy metabolism related genetic polymorphisms, biochemical indexes and CIMT indexes. There are complex interactions between genetic polymorphisms for CCA-IMT and CIMT.",
"title": ""
},
{
"docid": "2eebc7477084b471f9e9872ba8751359",
"text": "Despite significant progress in the development of human action detection datasets and algorithms, no current dataset is representative of real-world aerial view scenarios. We present Okutama-Action, a new video dataset for aerial view concurrent human action detection. It consists of 43 minute-long fully-annotated sequences with 12 action classes. Okutama-Action features many challenges missing in current datasets, including dynamic transition of actions, significant changes in scale and aspect ratio, abrupt camera movement, as well as multi-labeled actors. As a result, our dataset is more challenging than existing ones, and will help push the field forward to enable real-world applications.",
"title": ""
}
] |
scidocsrr
|
5e06328e2a74b35fe5b70d5bffb0c06c
|
Clone Detection Using Abstract Syntax Suffix Trees
|
[
{
"docid": "a17052726cbf3239c3f516b51af66c75",
"text": "Source code duplication occurs frequently within large software systems. Pieces of source code, functions, and data types are often duplicated in part, or in whole, for a variety of reasons. Programmers may simply be reusing a piece of code via copy and paste or they may be “reinventing the wheel”. Previous research on the detection of clones is mainly focused on identifying pieces of code with similar (or nearly similar) structure. Our approach is to examine the source code text (comments and identifiers) and identify implementations of similar high-level concepts (e.g., abstract data types). The approach uses an information retrieval technique (i.e., latent semantic indexing) to statically analyze the software system and determine semantic similarities between source code documents (i.e., functions, files, or code segments). These similarity measures are used to drive the clone detection process. The intention of our approach is to enhance and augment existing clone detection methods that are based on structural analysis. This synergistic use of methods will improve the quality of clone detection. A set of experiments is presented that demonstrate the usage of semantic similarity measure to identify clones within a version of NCSA Mosaic.",
"title": ""
},
{
"docid": "b09eedfc1b27d5666846c18423d1ad54",
"text": "Recent years have seen many significant advances in program comprehension and software maintenance automation technology. In spite of the enormous potential savings in software maintenance costs, for the most part adoption of these ideas in industry remains at the experimental prototype stage. In this paper I explore some of the practical reasons for industrial resistance to adoption of software maintenance automation. Based on the experience of six years of software maintenance automation services to the financial industry involving more than 4.5 Gloc of code at Legasys Corporation, I discuss some of the social, technical and business realities that lie at the root of this resistance, outline various Legasys attempts overcome these barriers, and suggest some approaches to software maintenance automation that may lead to higher levels of industrial acceptance in the future.",
"title": ""
}
] |
[
{
"docid": "dd634fe7f5bfb5d08d0230c3e64220a4",
"text": "Living in an oxygenated environment has required the evolution of effective cellular strategies to detect and detoxify metabolites of molecular oxygen known as reactive oxygen species. Here we review evidence that the appropriate and inappropriate production of oxidants, together with the ability of organisms to respond to oxidative stress, is intricately connected to ageing and life span.",
"title": ""
},
{
"docid": "df96263c86a36ed30e8a074354b09239",
"text": "We propose three iterative superimposed-pilot based channel estimators for Orthogonal Frequency Division Multiplexing (OFDM) systems. Two are approximate maximum-likelihood, derived by using a Taylor expansion of the conditional probability density function of the received signal or by approximating the OFDM time signal as Gaussian, and one is minimum-mean square error. The complexity per iteration of these estimators is given by approximately O(NL2), O(N3) and O(NL), where N is the number of OFDM subcarriers and L is the channel length (time). Two direct (non-iterative) data detectors are also derived by averaging the log likelihood function over the channel statistics. These detectors require minimising the cost metric in an integer space, and we suggest the use of the sphere decoder for them. The Cramér--Rao bound for superimposed pilot based channel estimation is derived, and this bound is achieved by the proposed estimators. The optimal pilot placement is shown to be the equally spaced distribution of pilots. The bit error rate of the proposed estimators is simulated for N = 32 OFDM system. Our estimators perform fairly close to a separated training scheme, but without any loss of spectral efficiency. Copyright © 2011 John Wiley & Sons, Ltd. *Correspondence Chintha Tellambura, Department of Electrical and Computer Engineering, University Alberta, Edmonton, Alberta, Canada T6G 2C5. E-mail: chintha@ece.ualberta.ca Received 20 July 2009; Revised 23 July 2010; Accepted 13 October 2010",
"title": ""
},
{
"docid": "d4ac0d6890cc89e2525b9537376cce39",
"text": "Unsupervised over-segmentation of an image into regions of perceptually similar pixels, known as super pixels, is a widely used preprocessing step in segmentation algorithms. Super pixel methods reduce the number of regions that must be considered later by more computationally expensive algorithms, with a minimal loss of information. Nevertheless, as some information is inevitably lost, it is vital that super pixels not cross object boundaries, as such errors will propagate through later steps. Existing methods make use of projected color or depth information, but do not consider three dimensional geometric relationships between observed data points which can be used to prevent super pixels from crossing regions of empty space. We propose a novel over-segmentation algorithm which uses voxel relationships to produce over-segmentations which are fully consistent with the spatial geometry of the scene in three dimensional, rather than projective, space. Enforcing the constraint that segmented regions must have spatial connectivity prevents label flow across semantic object boundaries which might otherwise be violated. Additionally, as the algorithm works directly in 3D space, observations from several calibrated RGB+D cameras can be segmented jointly. Experiments on a large data set of human annotated RGB+D images demonstrate a significant reduction in occurrence of clusters crossing object boundaries, while maintaining speeds comparable to state-of-the-art 2D methods.",
"title": ""
},
{
"docid": "95efc564448b3ec74842d047f94cb779",
"text": "Over the past 25 years or so there has been much interest in the use of digital pre-distortion (DPD) techniques for the linearization of RF and microwave power amplifiers. In this paper, we describe the important system and hardware requirements for the four main subsystems found in the DPD linearized transmitter: RF/analog, data converters, digital signal processing, and the DPD architecture and algorithms, and illustrate how the overall DPD system architecture is influenced by the design choices that may be made in each of these subsystems. We shall also consider the challenges presented to future applications of DPD systems for wireless communications, such as higher operating frequencies, wider signal bandwidths, greater spectral efficiency signals, resulting in higher peak-to-average power ratios, multiband and multimode operation, lower power consumption requirements, faster adaption, and how these affect the system design choices.",
"title": ""
},
{
"docid": "ed0342748fff5c1ced69700cfd922884",
"text": "Many applications of histograms for the purposes of image processing are well known. However, applying this process to the transform domain by way of a transform coefficient histogram has not yet been fully explored. This paper proposes three methods of image enhancement: a) logarithmic transform histogram matching, b) logarithmic transform histogram shifting, and c) logarithmic transform histogram shaping using Gaussian distributions. They are based on the properties of the logarithmic transform domain histogram and histogram equalization. The presented algorithms use the fact that the relationship between stimulus and perception is logarithmic and afford a marriage between enhancement qualities and computational efficiency. A human visual system-based quantitative measurement of image contrast improvement is also defined. This helps choose the best parameters and transform for each enhancement. A number of experimental results are presented to illustrate the performance of the proposed algorithms",
"title": ""
},
{
"docid": "6c5c6e201e2ae886908aff554866b9ed",
"text": "HDBSCAN: Hierarchical Density-Based Spatial Clustering of Applications with Noise (Campello, Moulavi, and Sander 2013), (Campello et al. 2015). Performs DBSCAN over varying epsilon values and integrates the result to find a clustering that gives the best stability over epsilon. This allows HDBSCAN to find clusters of varying densities (unlike DBSCAN), and be more robust to parameter selection. The library also includes support for Robust Single Linkage clustering (Chaudhuri et al. 2014), (Chaudhuri and Dasgupta 2010), GLOSH outlier detection (Campello et al. 2015), and tools for visualizing and exploring cluster structures. Finally support for prediction and soft clustering is also available.",
"title": ""
},
{
"docid": "827c9d65c2c3a2a39d07c9df7a21cfe2",
"text": "A worldwide movement in advanced manufacturing countries is seeking to reinvigorate (and revolutionize) the industrial and manufacturing core competencies with the use of the latest advances in information and communications technology. Visual computing plays an important role as the \"glue factor\" in complete solutions. This article positions visual computing in its intrinsic crucial role for Industrie 4.0 and provides a general, broad overview and points out specific directions and scenarios for future research.",
"title": ""
},
{
"docid": "1f3e600ce5be2a55234c11e19e11cb67",
"text": "In this paper, we propose a noise robust speech recognition system built using generalized distillation framework. It is assumed that during training, in addition to the training data, some kind of ”privileged” information is available and can be used to guide the training process. This allows to obtain a system which at test time outperforms those built on regular training data alone. In the case of noisy speech recognition task, the privileged information is obtained from a model, called ”teacher”, trained on clean speech only. The regular model, called ”student”, is trained on noisy utterances and uses teacher’s output for the corresponding clean utterances. Thus, for this framework a parallel clean/noisy speech data are required. We experimented on the Aurora2 database which provides such kind of data. Our system uses hybrid DNN-HMM acoustic model where neural networks provide HMM state probabilities during decoding. The teacher DNN is trained on the clean data, while the student DNN is trained using multi-condition (various SNRs) data. The student DNN loss function combines the targets obtained from forced alignment of the training data and the outputs of the teacher DNN when fed with the corresponding clean features. Experimental results clearly show that distillation framework is effective and allows to achieve significant reduction in the word error rate.",
"title": ""
},
{
"docid": "4c5d12c3b1254c83819eac53dd57ce40",
"text": "traditional topic detection method can not be applied to the microblog topic detection directly, because the microblog text is a kind of the short, fractional and grass-roots text. In order to detect the hot topic in the microblog text effectively, we propose a microblog topic detection method based on the combination of the latent semantic analysis and the structural property. According to the dialogic property of the microblog, our proposed method firstly creates semantic space based on the replies to the thread, with the aim to solve the data sparseness problem; secondly, create the microblog model based on the latent semantic analysis; finally, propose a semantic computation method combined with the time information. We then adopt the agglomerative hierarchical clustering method as the microblog topic detection method. Experimental results show that our proposed methods improve the performances of the microblog topic detection greatly.",
"title": ""
},
{
"docid": "a31358ffda425f8e3f7fd15646d04417",
"text": "We elaborate the design and simulation of a planar antenna that is suitable for CubeSat picosatellites. The antenna operates at 436 MHz and its main features are miniature size and the built-in capability to produce circular polarization. The miniaturization procedure is given in detail, and the electrical performance of this small antenna is documented. Two main miniaturization techniques have been applied, i.e. dielectric loading and distortion of the current path. We have added an extra degree of freedom to the latter. The radiator is integrated with the chassis of the picosatellite and, at the same time, operates at the lower end of the UHF spectrum. In terms of electrical size, the structure presented herein is one of the smallest antennas that have been proposed for small satellites. Despite its small electrical size, the antenna maintains acceptable efficiency and gain performance in the band of interest.",
"title": ""
},
{
"docid": "1c66d84dfc8656a23e2a4df60c88ab51",
"text": "Our method aims at reasoning over natural language questions and visual images. Given a natural language question about an image, our model updates the question representation iteratively by selecting image regions relevant to the query and learns to give the correct answer. Our model contains several reasoning layers, exploiting complex visual relations in the visual question answering (VQA) task. The proposed network is end-to-end trainable through back-propagation, where its weights are initialized using pre-trained convolutional neural network (CNN) and gated recurrent unit (GRU). Our method is evaluated on challenging datasets of COCO-QA [19] and VQA [2] and yields state-of-the-art performance.",
"title": ""
},
{
"docid": "ea05a43abee762d4b484b5027e02a03a",
"text": "One essential task in information extraction from the medical corpus is drug name recognition. Compared with text sources come from other domains, the medical text mining poses more challenges, for example, more unstructured text, the fast growing of new terms addition, a wide range of name variation for the same drug, the lack of labeled dataset sources and external knowledge, and the multiple token representations for a single drug name. Although many approaches have been proposed to overwhelm the task, some problems remained with poor F-score performance (less than 0.75). This paper presents a new treatment in data representation techniques to overcome some of those challenges. We propose three data representation techniques based on the characteristics of word distribution and word similarities as a result of word embedding training. The first technique is evaluated with the standard NN model, that is, MLP. The second technique involves two deep network classifiers, that is, DBN and SAE. The third technique represents the sentence as a sequence that is evaluated with a recurrent NN model, that is, LSTM. In extracting the drug name entities, the third technique gives the best F-score performance compared to the state of the art, with its average F-score being 0.8645.",
"title": ""
},
{
"docid": "551e890f5b62ed3fbcaef10101787120",
"text": "Plagiarism detection is a sensitive field of research which has gained lot of interest in the past few years. Although plagiarism detection systems are developed to check text in a variety of languages, they perform better when they are dedicated to check a specific language as they take into account the specificity of the language which leads to better quality results. Query optimization and document reduction constitute two major processing modules which play a major role in optimizing the response time and the results quality of these systems and hence determine their efficiency and effectiveness. This paper proposes an analysis of approaches, an architecture, and a system for detecting plagiarism in Arabic documents. This analysis is particularly focused on the methods and techniques used to detect plagiarism. The proposed web-based architecture exhibits the major processing modules of a plagiarism detection system which are articulated into four layers inside a processing component. The architecture has been used to develop a plagiarism detection system for the Arabic language proposing a set of functions to the user for checking a text and analyzing the results through a well-designed graphical user interface. Subject Categories and Descriptors [H.3.1 Content Analysis and Indexing]: Linguistic processing; [I.2 Artificial Intelligencd]; Natural language interfaces: [I.2.7 Natural Language Processing]; Text Analysis; [I.2.3 Clustering]; Similarity Measures General Terms: Text Analysis, Arabic Language Processing, Similarity Detection",
"title": ""
},
{
"docid": "cdc3b46933db0c88f482ded1dcdff9e6",
"text": "Overvoltages in low voltage (LV) feeders with high penetration of photovoltaics (PV) are usually prevented by limiting the feeder's PV capacity to very conservative values, even if the critical periods rarely occur. This paper discusses the use of droop-based active power curtailment techniques for overvoltage prevention in radial LV feeders as a means for increasing the installed PV capacity and energy yield. Two schemes are proposed and tested in a typical 240-V/75-kVA Canadian suburban distribution feeder with 12 houses with roof-top PV systems. In the first scheme, all PV inverters have the same droop coefficients. In the second, the droop coefficients are different so as to share the total active power curtailed among all PV inverters/houses. Simulation results demonstrate the effectiveness of the proposed schemes and that the option of sharing the power curtailment among all customers comes at the cost of an overall higher amount of power curtailed.",
"title": ""
},
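To make the droop-based curtailment idea in the passage above concrete, here is a small Python sketch of a single inverter's behaviour. It is only an illustration under assumed numbers: the critical voltage, droop coefficient, and operating point are hypothetical, not parameters from the study, and the paper's second scheme would additionally assign a different droop coefficient to each house so that the total curtailment is shared.

def curtailed_output_kw(p_available_kw, v_pu, v_crit_pu=1.05, droop_kw_per_pu=50.0):
    """Active power delivered by a PV inverter under a linear droop curtailment rule.

    Above v_crit_pu the inverter sheds droop_kw_per_pu kilowatts per p.u. of
    overvoltage, never delivering less than zero or more than what is available.
    """
    overvoltage_pu = max(0.0, v_pu - v_crit_pu)
    curtailment_kw = droop_kw_per_pu * overvoltage_pu
    return max(0.0, p_available_kw - curtailment_kw)

# Example: 6 kW available and 1.08 p.u. at the point of connection -> 1.5 kW curtailed.
print(curtailed_output_kw(6.0, 1.08))   # 4.5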
{
"docid": "e0ee22a0df1c13511909cb5f7d2b4d82",
"text": "Growing use of the Internet as a major means of communication has led to the formation of cyber-communities, which have become increasingly appealing to terrorist groups due to the unregulated nature of Internet communication. Online communities enable violent extremists to increase recruitment by allowing them to build personal relationships with a worldwide audience capable of accessing uncensored content. This article presents methods for identifying the recruitment activities of violent groups within extremist social media websites. Specifically, these methods apply known techniques within supervised learning and natural language processing to the untested task of automatically identifying forum posts intended to recruit new violent extremist members. We used data from the western jihadist website Ansar AlJihad Network, which was compiled by the University of Arizona’s Dark Web Project. Multiple judges manually annotated a sample of these data, marking 192 randomly sampled posts as recruiting (Yes) or non-recruiting (No). We observed significant agreement between the judges’ labels; Cohen’s κ=(0.5,0.9) at p=0.01. We tested the feasibility of using naive Bayes models, logistic regression, classification trees, boosting, and support vector machines (SVM) to classify the forum posts. Evaluation with receiver operating characteristic (ROC) curves shows that our SVM classifier achieves an 89% area under the curve (AUC), a significant improvement over the 63% AUC performance achieved by our simplest naive Bayes model (Tukey’s test at p=0.05). To our knowledge, this is the first result reported on this task, and our analysis indicates that automatic detection of online terrorist recruitment is a feasible task. We also identify a number of important areas of future work including classifying non-English posts and measuring how recruitment posts and current events change membership numbers over time.",
"title": ""
},
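The recruitment-detection passage above describes a fairly standard supervised text-classification pipeline. The sketch below shows what such a pipeline might look like with scikit-learn; the toy posts and labels are placeholders (the Dark Web data is not reproduced here), and the feature settings are assumptions rather than the study's configuration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Placeholder corpus: 1 = recruiting post, 0 = non-recruiting post.
posts = [
    "placeholder recruiting-style post A", "placeholder ordinary post B",
    "placeholder recruiting-style post C", "placeholder ordinary post D",
    "placeholder recruiting-style post E", "placeholder ordinary post F",
]
labels = [1, 0, 1, 0, 1, 0]

# Bag-of-words TF-IDF features feeding a linear-kernel SVM classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), SVC(kernel="linear"))
clf.fit(posts, labels)

# Signed distance from the decision boundary for a new post; for ROC/AUC
# evaluation as in the passage, these scores would be compared against gold labels.
print(clf.decision_function(["placeholder new post to score"]))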
{
"docid": "9b32c1ea81eb8d8eb3675c577cc0e2fc",
"text": "Users' addiction to online social networks is discovered to be highly correlated with their social connections in the networks. Dense social connections can effectively help online social networks retain their active users and improve the social network services. Therefore, it is of great importance to make a good prediction of the social links among users. Meanwhile, to enjoy more social network services, users nowadays are usually involved in multiple online social networks simultaneously. Formally, the social networks which share a number of common users are defined as the \"aligned networks\".With the information transferred from multiple aligned social networks, we can gain a more comprehensive knowledge about the social preferences of users in the pre-specified target network, which will benefit the social link prediction task greatly. However, when transferring the knowledge from other aligned source networks to the target network, there usually exists a shift in information distribution between different networks, namely domain difference. In this paper, we study the social link prediction problem of the target network, which is aligned with multiple social networks concurrently. To accommodate the domain difference issue, we project the features extracted for links from different aligned networks into a shared lower-dimensional feature space. Moreover, users in social networks usually tend to form communities and would only connect to a small number of users. Thus, the target network structure has both the low-rank and sparse properties. We propose a novel optimization framework, SLAMPRED, to combine both these two properties aforementioned of the target network and the information of multiple aligned networks with nice domain adaptations. Since the objective function is a linear combination of convex and concave functions involving nondifferentiable regularizers, we propose a novel optimization method to iteratively solve it. Extensive experiments have been done on real-world aligned social networks, and the experimental results demonstrate the effectiveness of the proposed model.",
"title": ""
},
{
"docid": "91f718a69532c4193d5e06bf1ea19fd3",
"text": "Factorization approaches provide high accuracy in several important prediction problems, for example, recommender systems. However, applying factorization approaches to a new prediction problem is a nontrivial task and requires a lot of expert knowledge. Typically, a new model is developed, a learning algorithm is derived, and the approach has to be implemented.\n Factorization machines (FM) are a generic approach since they can mimic most factorization models just by feature engineering. This way, factorization machines combine the generality of feature engineering with the superiority of factorization models in estimating interactions between categorical variables of large domain. libFM is a software implementation for factorization machines that features stochastic gradient descent (SGD) and alternating least-squares (ALS) optimization, as well as Bayesian inference using Markov Chain Monto Carlo (MCMC). This article summarizes the recent research on factorization machines both in terms of modeling and learning, provides extensions for the ALS and MCMC algorithms, and describes the software tool libFM.",
"title": ""
},
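For readers unfamiliar with the factorization machine model summarized in the passage above, the following NumPy sketch evaluates the second-order FM prediction for one feature vector using the well-known O(kn) reformulation of the pairwise term. The sizes, random parameters, and bias value are arbitrary illustrations, not anything taken from libFM.

import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3                      # n features, k latent factors (hypothetical sizes)
w0 = 0.1                         # global bias
w = rng.normal(size=n)           # linear weights
V = rng.normal(size=(n, k))      # factor matrix; row i is the latent vector v_i
x = rng.normal(size=n)           # one input feature vector

# y(x) = w0 + sum_i w_i x_i + sum_{i<j} <v_i, v_j> x_i x_j, with the pairwise
# term computed as 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ].
linear_term = w @ x
pairwise_term = 0.5 * np.sum((V.T @ x) ** 2 - (V.T ** 2) @ (x ** 2))
print(w0 + linear_term + pairwise_term)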
{
"docid": "48966a0436405a6656feea3ce17e87c3",
"text": "Complex regional pain syndrome (CRPS) is a chronic, intensified localized pain condition that can affect children and adolescents as well as adults, but is more common among adolescent girls. Symptoms include limb pain; allodynia; hyperalgesia; swelling and/or changes in skin color of the affected limb; dry, mottled skin; hyperhidrosis and trophic changes of the nails and hair. The exact mechanism of CRPS is unknown, although several different mechanisms have been suggested. The diagnosis is clinical, with the aid of the adult criteria for CRPS. Standard care consists of a multidisciplinary approach with the implementation of intensive physical therapy in conjunction with psychological counseling. Pharmacological treatments may aid in reducing pain in order to allow the patient to participate fully in intensive physiotherapy. The prognosis in pediatric CRPS is favorable.",
"title": ""
},
{
"docid": "b00311730b7b9b4f79cdd7bde5aa84f6",
"text": "While neural networks demonstrate stronger capabilities in pattern recognition nowadays, they are also becoming larger and deeper. As a result, the effort needed to train a network also increases dramatically. In many cases, it is more practical to use a neural network intellectual property (IP) that an IP vendor has already trained. As we do not know about the training process, there can be security threats in the neural IP: the IP vendor (attacker) may embed hidden malicious functionality, i.e neural Trojans, into the neural IP. We show that this is an effective attack and provide three mitigation techniques: input anomaly detection, re-training, and input preprocessing. All the techniques are proven effective. The input anomaly detection approach is able to detect 99.8% of Trojan triggers although with 12.2% false positive. The re-training approach is able to prevent 94.1% of Trojan triggers from triggering the Trojan although it requires that the neural IP be reconfigurable. In the input preprocessing approach, 90.2% of Trojan triggers are rendered ineffective and no assumption about the neural IP is needed.",
"title": ""
},
{
"docid": "9b17dd1fc2c7082fa8daecd850fab91c",
"text": "This paper presents all the stages of development of a solar tracker for a photovoltaic panel. The system was made with a microcontroller which was design as an embedded control. It has a data base of the angles of orientation horizontal axle, therefore it has no sensor inlet signal and it function as an open loop control system. Combined of above mention characteristics in one the tracker system is a new technique of the active type. It is also a rotational robot of 1 degree of freedom.",
"title": ""
}
] |
scidocsrr
|
5cca4d416eca68d5bbb65d6ef7654e16
|
Fast locality-sensitive hashing
|
[
{
"docid": "7963ceddcf75f2e563ddd9501230a93f",
"text": "Advances in data collection and storage capabilities during the past decades have led to an information overload in most sciences. Researchers working in domains as diverse as engineering, astronomy, biology, remote sensing, economics, and consumer transactions, face larger and larger observations and simulations on a daily basis. Such datasets, in contrast with smaller, more traditional datasets that have been studied extensively in the past, present new challenges in data analysis. Traditional statistical methods break down partly because of the increase in the number of observations, but mostly because of the increase in the number of variables associated with each observation. The dimension of the data is the number of variables that are measured on each observation. High-dimensional datasets present many mathematical challenges as well as some opportunities, and are bound to give rise to new theoretical developments [11]. One of the problems with high-dimensional datasets is that, in many cases, not all the measured variables are “important” for understanding the underlying phenomena of interest. While certain computationally expensive novel methods [4] can construct predictive models with high accuracy from high-dimensional data, it is still of interest in many applications to reduce the dimension of the original data prior to any modeling of the data. In mathematical terms, the problem we investigate can be stated as follows: given the p-dimensional random variable x = (x1, . . . , xp) T , find a lower dimensional representation of it, s = (s1, . . . , sk) T with k ≤ p, that captures the content in the original data, according to some criterion. The components of s are sometimes called the hidden components. Different fields use different names for the p multivariate vectors: the term “variable” is mostly used in statistics, while “feature” and “attribute” are alternatives commonly used in the computer science and machine learning literature. Throughout this paper, we assume that we have n observations, each being a realization of the pdimensional random variable x = (x1, . . . , xp) T with mean E(x) = μ = (μ1, . . . , μp) T and covariance matrix E{(x − μ)(x− μ) } = Σp×p. We denote such an observation matrix by X = {xi,j : 1 ≤ i ≤ p, 1 ≤ j ≤ n}. If μi and σi = √",
"title": ""
}
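The dimension-reduction passage above states the problem of finding s = (s1, ..., sk)^T with k ≤ p. As one concrete (and classical) instance, the sketch below computes a principal component analysis with NumPy. Note that it stores observations as rows (the transpose of the passage's p × n convention), and the data and target dimension are made up for illustration only.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))             # 200 observations of a 10-dimensional variable
k = 3                                      # target dimension, k <= p

Xc = X - X.mean(axis=0)                    # center each variable (subtract the mean vector)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:k]                        # top-k principal directions
scores = Xc @ components.T                 # lower-dimensional representation s of each observation
explained = (S[:k] ** 2) / np.sum(S ** 2)  # fraction of variance captured by each component
print(scores.shape, np.round(explained, 3))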
] |
[
{
"docid": "bc49930fa967b93ed1e39b3a45237652",
"text": "In gene expression data, a bicluster is a subset of the genes exhibiting consistent patterns over a subset of the conditions. We propose a new method to detect significant biclusters in large expression datasets. Our approach is graph theoretic coupled with statistical modelling of the data. Under plausible assumptions, our algorithm is polynomial and is guaranteed to find the most significant biclusters. We tested our method on a collection of yeast expression profiles and on a human cancer dataset. Cross validation results show high specificity in assigning function to genes based on their biclusters, and we are able to annotate in this way 196 uncharacterized yeast genes. We also demonstrate how the biclusters lead to detecting new concrete biological associations. In cancer data we are able to detect and relate finer tissue types than was previously possible. We also show that the method outperforms the biclustering algorithm of Cheng and Church (2000).",
"title": ""
},
{
"docid": "f048f684d71d811ac0a9fbd58a76d580",
"text": "Frequency: The course will be offered annually beginning in the Spring 2011 semester. Points and prerequisites: The course will carry four points. The prerequisites for this course are Economic Principles II (V31.0002) and Calculus I (V63.0121). The lectures will focus mainly on conceptual material and applications. Properties: The course will meet two times each week for one hour and fifteen minutes each. No unusual audiovisual or technological aids will be used. This course serves as an introduction to game theory as the study of incentives and strategic behavior in collective and interdependent decision making. The course will develop the necessary theoretical tools for the study of game theory, while concurrently introducing applications in areas such as bargaining, competition, auction theory and strategic voting. This is a course indicated for any student with interest in learning how to apply game theoretical analysis to a variety of disciplines. The aim of the course is to provide a mostly applied overview of game theoretical concepts and emphasize their use in real world situations. By the end of the course, the student should have developed tools which will allow her/him to formally analyze outcomes in strategic situations. There will be one midterm and one final exam and approximately 8 problem sets for this class. The midterm and final exam scores count for 30%, and 60% respectively, of your course grade. The problem set score will be calculated ignoring your lowest score during the semester and will count for 10% of the final grade.",
"title": ""
},
{
"docid": "d8d0b6d8b422b8d1369e99ff8b9dee0e",
"text": "The advent of massive open online courses (MOOCs) poses new learning opportunities for learners as well as challenges for researchers and designers. MOOC students approach MOOCs in a range of fashions, based on their learning goals and preferred approaches, which creates new opportunities for learners but makes it difficult for researchers to figure out what a student’s behavior means, and makes it difficult for designers to develop MOOCs appropriate for all of their learners. Towards better understanding the learners who take MOOCs, we conduct a survey of MOOC learners’ motivations and correlate it to which students complete the course according to the pace set by the instructor/platform (which necessitates having the goal of completing the course, as well as succeeding in that goal). The results showed that course completers tend to be more interested in the course content, whereas non-completers tend to be more interested in MOOCs as a type of learning experience. Contrary to initial hypotheses, however, no substantial differences in mastery-goal orientation or general academic efficacy were observed between completers and non-completers. However, students who complete the course tend to have more self-efficacy for their ability to complete the course, from the beginning.",
"title": ""
},
{
"docid": "f96bc7911cbabeddc6e6362c48e2fcb1",
"text": "In order to identify vulnerable software components, developers can take software metrics as predictors or use text mining techniques to build vulnerability prediction models. A recent study reported that text mining based models have higher recall than software metrics based models. However, this conclusion was drawn without considering the sizes of individual components which affects the code inspection effort to determine whether a component is vulnerable. In this paper, we investigate the predictive power of these two kinds of prediction models in the context of effort-aware vulnerability prediction. To this end, we use the same data sets, containing 223 vulnerabilities found in three web applications, to build vulnerability prediction models. The experimental results show that: (1) in the context of effort-aware ranking scenario, text mining based models only slightly outperform software metrics based models, (2) in the context of effort-aware classification scenario, text mining based models perform similarly to software metrics based models in most cases, and (3) most of the effect sizes (i.e. the magnitude of the differences) between these two kinds of models are trivial. These results suggest that, from the viewpoint of practical application, software metrics based models are comparable to text mining based models. Therefore, for developers, software metrics based models are practical choices for vulnerability prediction, as the cost to build and apply these models is much lower.",
"title": ""
},
{
"docid": "e7c848d4661bab87e39243834be80046",
"text": "2048 is an engaging single-player nondeterministic video puzzle game, which, thanks to the simple rules and hard-to-master gameplay, has gained massive popularity in recent years. As 2048 can be conveniently embedded into the discrete-state Markov decision processes framework, we treat it as a testbed for evaluating existing and new methods in reinforcement learning. With the aim to develop a strong 2048 playing program, we employ temporal difference learning with systematic n-tuple networks. We show that this basic method can be significantly improved with temporal coherence learning, multi-stage function approximator with weight promotion, carousel shaping, and redundant encoding. In addition, we demonstrate how to take advantage of the characteristics of the n-tuple network, to improve the algorithmic effectiveness of the learning process by delaying the (decayed) update and applying lock-free optimistic parallelism to effortlessly make advantage of multiple CPU cores. This way, we were able to develop the best known 2048 playing program to date, which confirms the effectiveness of the introduced methods for discrete-state Markov decision problems.",
"title": ""
},
{
"docid": "6f176e780d94a8fa8c5b1d6d364c4363",
"text": "Current uses of smartwatches are focused solely around the wearer's content, viewed by the wearer alone. When worn on a wrist, however, watches are often visible to many other people, making it easy to quickly glance at their displays. We explore the possibility of extending smartwatch interactions to turn personal wearables into more public displays. We begin opening up this area by investigating fundamental aspects of this interaction form, such as the social acceptability and noticeability of looking at someone else's watch, as well as the likelihood of a watch face being visible to others. We then sketch out interaction dimensions as a design space, evaluating each aspect via a web-based study and a deployment of three potential designs. We conclude with a discussion of the findings, implications of the approach and ways in which designers in this space can approach public wrist-worn wearables.",
"title": ""
},
{
"docid": "717988e7bada51ad5c4115f4d43de01a",
"text": "I offer an overview of the rapidly growing field of mindfulness-based interventions (MBIs). A working definition of mindfulness in this context includes the brahma viharas, sampajanna and appamada, and suggests a very particular mental state which is both wholesome and capable of clear and penetrating insight into the nature of reality. The practices in mindfulness-based stress reduction (MBSR) that apply mindfulness to the four foundations are outlined, along with a brief history of the program and the original intentions of the founder, Jon Kabat-Zinn. The growth and scope of these interventions are detailed with demographics provided by the Center for Mindfulness, an overview of salient research studies and a listing of the varied MBIs that have grown out of MBSR. The question of ethics is explored, and other challenges are raised including teacher qualification and clarifying the “outer limits,” or minimum requirements, of what constitutes an MBI. Current trends are explored, including the increasing number of cohort-specific interventions as well as the publication of books, articles, and workbooks by a new generation of MBI teachers. Together, they form an emerging picture of MBIs as their own new “lineage,” which look to MBSR as their inspiration and original source. The potential to bring benefit to new fields, such as government and the military, represent exciting opportunities for MBIs, along with the real potential to transform health care. Sufficient experience in the delivery of MBIs has been garnered to offer the greater contemplative community valuable resources such as secular language, best practices, and extensive research.",
"title": ""
},
{
"docid": "2615de62d2b2fa8a15e79ca2a3a57a3b",
"text": "Recent evidence has shown that entrants into self-employment are disproportionately drawn from the tails of the earnings and ability distributions. This observation is explained by a multi-task model of occupational choice in which frictions in the labor market induces mismatches between firms and workers, and mis-assignment of workers to tasks. The model also yields distinctive predictions relating prior work histories to earnings and to the probability of entry into self-employment. These predictions are tested with the Korean Labor and Income Panel Study, from which we find considerable support for the model.",
"title": ""
},
{
"docid": "33ab76f714ca23bdfddecfe436fd1ee2",
"text": "A rational agent (artificial or otherwise) residing in a complex changing environment must gather information perceptually, update that information as the world changes, and combine that information with causal information to reason about the changing world. Using the system of defeasible reasoning that is incorporated into the OSCAR architecture for rational agents, a set of reason-schemas is proposed for enabling an agent to perform some of the requisite reasoning. Along the way, solutions are proposed for the Frame Problem, the Qualification Problem, and the Ramification Problem. The principles and reasoning described have all been implemented in OSCAR. keywords: defeasible reasoning, nonmonotonic logic, perception, causes, causation, time, temporal This work was supported in part by NSF grant no. IRI-9634106. An early version of some of this material appears in Pollock (1996), but it has undergone substantial change in the present paper. projection, frame problem, qualification problem, ramification problem, OSCAR.",
"title": ""
},
{
"docid": "05127dab049ef7608932913f66db0990",
"text": "This paper presents a hybrid tele-manipulation system, comprising of a sensorized 3-D-printed soft robotic gripper and a soft fabric-based haptic glove that aim at improving grasping manipulation and providing sensing feedback to the operators. The flexible 3-D-printed soft robotic gripper broadens what a robotic gripper can do, especially for grasping tasks where delicate objects, such as glassware, are involved. It consists of four pneumatic finger actuators, casings with through hole for housing the actuators, and adjustable base. The grasping length and width can be configured easily to suit a variety of objects. The soft haptic glove is equipped with flex sensors and soft pneumatic haptic actuator, which enables the users to control the grasping, to determine whether the grasp is successful, and to identify the grasped object shape. The fabric-based soft pneumatic haptic actuator can simulate haptic perception by producing force feedback to the users. Both the soft pneumatic finger actuator and haptic actuator involve simple fabrication technique, namely 3-D-printed approach and fabric-based approach, respectively, which reduce fabrication complexity as compared to the steps involved in a traditional silicone-based approach. The sensorized soft robotic gripper is capable of picking up and holding a wide variety of objects in this study, ranging from lightweight delicate object weighing less than 50 g to objects weighing 1100 g. The soft haptic actuator can produce forces of up to 2.1 N, which is more than the minimum force of 1.5 N needed to stimulate haptic perception. The subjects are able to differentiate the two objects with significant shape differences in the pilot test. Compared to the existing soft grippers, this is the first soft sensorized 3-D-printed gripper, coupled with a soft fabric-based haptic glove that has the potential to improve the robotic grasping manipulation by introducing haptic feedback to the users.",
"title": ""
},
{
"docid": "301ce75026839f85bc15100a9a7cc5ca",
"text": "This paper presents a novel visual-inertial integration system for human navigation in free-living environments, where the measurements from wearable inertial and monocular visual sensors are integrated. The preestimated orientation, obtained from magnet, angular rate, and gravity sensors, is used to estimate the translation based on the data from the visual and inertial sensors. This has a significant effect on the performance of the fusion sensing strategy and makes the fusion procedure much easier, because the gravitational acceleration can be correctly removed from the accelerometer measurements before the fusion procedure, where a linear Kalman filter is selected as the fusion estimator. Furthermore, the use of preestimated orientation can help to eliminate erroneous point matches based on the properties of the pure camera translation and thus the computational requirements can be significantly reduced compared with the RANdom SAmple Consensus algorithm. In addition, an adaptive-frame rate single camera is selected to not only avoid motion blur based on the angular velocity and acceleration after compensation, but also to make an effect called visual zero-velocity update for the static motion. Thus, it can recover a more accurate baseline and meanwhile reduce the computational requirements. In particular, an absolute scale factor, which is usually lost in monocular camera tracking, can be obtained by introducing it into the estimator. Simulation and experimental results are presented for different environments with different types of movement and the results from a Pioneer robot are used to demonstrate the accuracy of the proposed method.",
"title": ""
},
{
"docid": "9bba22f8f70690bee5536820567546e6",
"text": "Graph clustering involves the task of dividing nodes into clusters, so that the edge density is higher within clusters as opposed to across clusters. A natural, classic, and popular statistical setting for evaluating solutions to this problem is the stochastic block model, also referred to as the planted partition model. In this paper, we present a new algorithm-a convexified version of maximum likelihood-for graph clustering. We show that, in the classic stochastic block model setting, it outperforms existing methods by polynomial factors when the cluster size is allowed to have general scalings. In fact, it is within logarithmic factors of known lower bounds for spectral methods, and there is evidence suggesting that no polynomial time algorithm would do significantly better. We then show that this guarantee carries over to a more general extension of the stochastic block model. Our method can handle the settings of semirandom graphs, heterogeneous degree distributions, unequal cluster sizes, unaffiliated nodes, partially observed graphs, planted clique/coloring, and so on. In particular, our results provide the best exact recovery guarantees to date for the planted partition, planted k-disjoint-cliques and planted noisy coloring models with general cluster sizes; in other settings, we match the best existing results up to logarithmic factors.",
"title": ""
},
{
"docid": "b67e6d5ee2451912ea6267cbc5274440",
"text": "The paper presents theoretical analyses, simulations and design of a PTAT (proportional to absolute temperature) temperature sensor that is based on the vertical PNP structure and dedicated to CMOS VLSI circuits. Performed considerations take into account specific properties of materials that forms electronic elements. The electrothermal simulations are performed in order to verify the unwanted self-heating effect of the sensor",
"title": ""
},
{
"docid": "c61e25e5896ff588764639b6a4c18d2e",
"text": "Social media is continually emerging as a platform of information exchange around health challenges. We study mental health discourse on the popular social media: reddit. Building on findings about health information seeking and sharing practices in online forums, and social media like Twitter, we address three research challenges. First, we present a characterization of self-disclosure in mental illness communities on reddit. We observe individuals discussing a variety of concerns ranging from the daily grind to specific queries about diagnosis and treatment. Second, we build a statistical model to examine the factors that drive social support on mental health reddit communities. We also develop language models to characterize mental health social support, which are observed to bear emotional, informational, instrumental, and prescriptive information. Finally, we study disinhibition in the light of the dissociative anonymity that reddit’s throwaway accounts provide. Apart from promoting open conversations, such anonymity surprisingly is found to gather feedback that is more involving and emotionally engaging. Our findings reveal, for the first time, the kind of unique information needs that a social media like reddit might be fulfilling when it comes to a stigmatic illness. They also expand our understanding of the role of the social web in behavioral therapy.",
"title": ""
},
{
"docid": "4eeb792ffb70d9ae015e806c85000cd7",
"text": "Optimal instruction scheduling and register allocation are NP-complete problems that require heuristic solutions. By restricting the problem of register allocation and instruction scheduling for delayed-load architectures to expression trees we are able to nd optimal schedules quickly. This thesis presents a fast, optimal code scheduling algorithm for processors with a delayed load of 1 instruction cycle. The algorithm minimizes both execution time and register use and runs in time proportional to the size of the expression tree. In addition, the algorithm is simple; it ts on one page. The dominant paradigm in modern global register allocation is graph coloring. Unlike graph-coloring, our technique, Probabilistic Register Allocation, is unique in its ability to quantify the likelihood that a particular value might actually be allocated a register before allocation actually completes. By computing the likelihood that a value will be assigned a register by a register allocator, register candidates that are competing heavily for scarce registers can be isolated from those that have less competition. Probabilities allow the register allocator to concentrate its e orts where bene t is high and the likelihood of a successful allocation is also high. Probabilistic register allocation also avoids backtracking and complicated live-range splitting heuristics that plague graph-coloring algorithms. ii Optimal algorithms for instruction selection in tree-structured intermediate representations rely on dynamic programming techniques. Bottom-Up Rewrite System (BURS) technology produces extremely fast code generators by doing all possible dynamic programming before code generation. Thus, the dynamic programming process can be very slow. To make BURS technology more attractive, much e ort has gone into reducing the time to produce BURS code generators. Current techniques often require a signi cant amount of time to process a complex machine description (over 10 minutes on a fast workstation). This thesis presents an improved, faster BURS table generation algorithm that makes BURS technology more attractive for instruction selection. The optimized techniques have increased the speed to generate BURS code generators by a factor of 10 to 30. In addition, the algorithms simplify previous techniques, and were implemented in fewer than 2000 lines of C. iii Acknowledgements I have bene ted from the help and support of many people while attending the University of Wisconsin. They deserve my thanks. My mother encouraged me to pursue a PhD, and supported me, in too many ways to list, throughout the process. Professor Charles Fischer, my advisor, generously shared his time, guidance, and ideas with me. Professors Susan Horwitz and James Larus patiently read (and re-read) my thesis. Chris Fraser's zealous quest for small, simple and fast programs was a welcome change from the prevailing trend towards bloated, complex and slow software. Robert Henry explained his early BURS research and made his Codegen system available to me. Lorenz Huelsbergen distracted me with enough creative research ideas to keep graduate school fun. National Science Foundation grant CCR{8908355 provided my nancial support. Some computer resources were obtained through Digital Equipment Corporation External Research Grant 48428. iv",
"title": ""
},
{
"docid": "0a1925251cac8d15da9bbc90627c28dc",
"text": "The Madden–Julian oscillation (MJO) is the dominant mode of tropical atmospheric intraseasonal variability and a primary source of predictability for global sub-seasonal prediction. Understanding the origin and perpetuation of the MJO has eluded scientists for decades. The present paper starts with a brief review of progresses in theoretical studies of the MJO and a discussion of the essential MJO characteristics that a theory should explain. A general theoretical model framework is then described in an attempt to integrate the major existing theoretical models: the frictionally coupled Kelvin–Rossby wave, the moisture mode, the frictionally coupled dynamic moisture mode, the MJO skeleton, and the gravity wave interference, which are shown to be special cases of the general MJO model. The last part of the present paper focuses on a special form of trio-interaction theory in terms of the general model with a simplified Betts–Miller (B-M) cumulus parameterization scheme. This trio-interaction theory extends the Matsuno–Gill theory by incorporating a trio-interaction among convection, moisture, and wave-boundary layer (BL) dynamics. The model is shown to produce robust large-scale characteristics of the observed MJO, including the coupled Kelvin–Rossby wave structure, slow eastward propagation (~5 m/s) over warm pool, the planetary (zonal) scale circulation, the BL low-pressure and moisture convergence preceding major convection, and amplification/decay over warm/cold sea surface temperature (SST) regions. The BL moisture convergence feedback plays a central role in coupling equatorial Kelvin and Rossby waves with convective heating, selecting a preferred eastward propagation, and generating instability. The moisture feedback can enhance Rossby wave component, thereby substantially slowing down eastward propagation. With the trio-interaction theory, a number of fundamental issues of MJO dynamics are addressed: why the MJO possesses a mixed Kelvin–Rossby wave structure and how the Kelvin and Rossby waves, which propagate in opposite directions, could couple together with convection and select eastward propagation; what makes the MJO move eastward slowly in the eastern hemisphere, resulting in the 30–60-day periodicity; why MJO amplifies over the warm pool ocean and decays rapidly across the dateline. Limitation and ramifications of the model results to general circulation modeling of MJO are discussed.",
"title": ""
},
{
"docid": "9f8ff3d7322aefafb99e5cc0dd3b33c2",
"text": "We report on the use of scenario-based methods for evaluating collaborative systems. We describe the method, the case study where it was applied, and provide results of its efficacy in the field. The results suggest that scenario-based evaluation is effective in helping to focus evaluation efforts and in identifying the range of technical, human, organizational and other contextual factors that impact system success. The method also helps identify specific actions, for example, prescriptions for design to enhance system effectiveness. However, we found the method somewhat less useful for identifying the measurable benefits gained from a CSCW implementation, which was one of our primary goals. We discuss challenges faced applying the technique, suggest recommendations for future research, and point to implications for practice.",
"title": ""
},
{
"docid": "fee504e2184570e80956ff1c8a4ec83c",
"text": "The use of computed tomography (CT) in clinical practice has been increasing rapidly, with the number of CT examinations performed in adults and children rising by 10% per year in England. Because the radiology community strives to reduce the radiation dose associated with pediatric examinations, external factors, including guidelines for pediatric head injury, are raising expectations for use of cranial CT in the pediatric population. Thus, radiologists are increasingly likely to encounter pediatric head CT examinations in daily practice. The variable appearance of cranial sutures at different ages can be confusing for inexperienced readers of radiologic images. The evolution of multidetector CT with thin-section acquisition increases the clarity of some of these sutures, which may be misinterpreted as fractures. Familiarity with the normal anatomy of the pediatric skull, how it changes with age, and normal variants can assist in translating the increased resolution of multidetector CT into more accurate detection of fractures and confident determination of normality, thereby reducing prolonged hospitalization of children with normal developmental structures that have been misinterpreted as fractures. More important, the potential morbidity and mortality related to false-negative interpretation of fractures as normal sutures may be avoided. The authors describe the normal anatomy of all standard pediatric sutures, common variants, and sutural mimics, thereby providing an accurate and safe framework for CT evaluation of skull trauma in pediatric patients.",
"title": ""
},
{
"docid": "555a0c7b435cbafa49ca6b3b365a6d68",
"text": "We propose a joint framework combining speech enhancement (SE) and voice activity detection (VAD) to increase the speech intelligibility in low signal-noise-ratio (SNR) environments. Deep Neural Networks (DNN) have recently been successfully adopted as a regression model in SE. Nonetheless, the performance in harsh environments is not always satisfactory because the noise energy is often dominating in certain speech segments causing speech distortion. Based on the analysis of SNR information at the frame level in the training set, our approach consists of two steps, namely: (1) a DNN-based VAD model is trained to generate frame-level speech/non-speech probabilities; and (2) the final enhanced speech features are obtained by a weighted sum of the estimated clean speech features processed by incorporating VAD information. Experimental results demonstrate that the proposed SE approach effectively improves short-time objective intelligibility (STOI) by 0.161 and perceptual evaluation of speech quality (PESQ) by 0.333 over the already-good SE baseline systems at -5dB SNR of babble noise.",
"title": ""
}
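One plausible reading of the weighted-sum step in the speech-enhancement passage above is sketched below: frame-level speech probabilities from the VAD network gate how much of the DNN's clean-speech estimate is used in each frame. The arrays are random stand-ins for log-spectral features, and the exact weighting used in the paper may differ.

import numpy as np

rng = np.random.default_rng(0)
T, D = 100, 40                              # T frames, D feature dimensions (hypothetical)
noisy = rng.normal(size=(T, D))             # noisy input features
dnn_clean = rng.normal(size=(T, D))         # clean-speech estimate from the SE regression DNN
p_speech = rng.uniform(size=(T, 1))         # frame-level speech probability from the VAD DNN

# Trust the enhanced estimate in likely-speech frames and fall back toward the
# noisy observation in likely-non-speech frames.
enhanced = p_speech * dnn_clean + (1.0 - p_speech) * noisy
print(enhanced.shape)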
] |
scidocsrr
|
f649e6aff9c45d19a82cf43afa2a6cb6
|
Joint virtual machine and bandwidth allocation in software defined network (SDN) and cloud computing environments
|
[
{
"docid": "7544daa81ddd9001772d48846e3097c3",
"text": "In cloud computing, cloud providers can offer cloud consumers two provisioning plans for computing resources, namely reservation and on-demand plans. In general, cost of utilizing computing resources provisioned by reservation plan is cheaper than that provisioned by on-demand plan, since cloud consumer has to pay to provider in advance. With the reservation plan, the consumer can reduce the total resource provisioning cost. However, the best advance reservation of resources is difficult to be achieved due to uncertainty of consumer's future demand and providers' resource prices. To address this problem, an optimal cloud resource provisioning (OCRP) algorithm is proposed by formulating a stochastic programming model. The OCRP algorithm can provision computing resources for being used in multiple provisioning stages as well as a long-term plan, e.g., four stages in a quarter plan and twelve stages in a yearly plan. The demand and price uncertainty is considered in OCRP. In this paper, different approaches to obtain the solution of the OCRP algorithm are considered including deterministic equivalent formulation, sample-average approximation, and Benders decomposition. Numerical studies are extensively performed in which the results clearly show that with the OCRP algorithm, cloud consumer can successfully minimize total cost of resource provisioning in cloud computing environments.",
"title": ""
}
] |
[
{
"docid": "8cfa2086e1c73bae6945d1a19d52be26",
"text": "We present a unified dynamics framework for real-time visual effects. Using particles connected by constraints as our fundamental building block allows us to treat contact and collisions in a unified manner, and we show how this representation is flexible enough to model gases, liquids, deformable solids, rigid bodies and cloth with two-way interactions. We address some common problems with traditional particle-based methods and describe a parallel constraint solver based on position-based dynamics that is efficient enough for real-time applications.",
"title": ""
},
{
"docid": "5e7d5a86a007efd5d31e386c862fef5c",
"text": "This systematic review examined the published scientific research on the psychosocial impact of cleft lip and palate (CLP) among children and adults. The primary objective of the review was to determine whether having CLP places an individual at greater risk of psychosocial problems. Studies that examined the psychosocial functioning of children and adults with repaired non-syndromal CLP were suitable for inclusion. The following sources were searched: Medline (January 1966-December 2003), CINAHL (January 1982-December 2003), Web of Science (January 1981-December 2003), PsycINFO (January 1887-December 2003), the reference section of relevant articles, and hand searches of relevant journals. There were 652 abstracts initially identified through database and other searches. On closer examination of these, only 117 appeared to meet the inclusion criteria. The full text of these papers was examined, with only 64 articles finally identified as suitable for inclusion in the review. Thirty of the 64 studies included a control group. The studies were longitudinal, cross-sectional, or retrospective in nature.Overall, the majority of children and adults with CLP do not appear to experience major psychosocial problems, although some specific problems may arise. For example, difficulties have been reported in relation to behavioural problems, satisfaction with facial appearance, depression, and anxiety. A few differences between cleft types have been found in relation to self-concept, satisfaction with facial appearance, depression, attachment, learning problems, and interpersonal relationships. With a few exceptions, the age of the individual with CLP does not appear to influence the occurrence or severity of psychosocial problems. However, the studies lack the uniformity and consistency required to adequately summarize the psychosocial problems resulting from CLP.",
"title": ""
},
{
"docid": "6720ae7a531d24018bdd1d3d1c7eb28b",
"text": "This study investigated the effects of mobile phone text-messaging method (predictive and multi-press) and experience (in texters and non-texters) on children’s textism use and understanding. It also examined popular claims that the use of text-message abbreviations, or textese spelling, is associated with poor literacy skills. A sample of 86 children aged 10 to 12 years read and wrote text messages in conventional English and in textese, and completed tests of spelling, reading, and non-word reading. Children took significantly longer, and made more errors, when reading messages written in textese than in conventional English. Further, they were no faster at writing messages in textese than in conventional English, regardless of texting method or experience. Predictive texters were faster at reading and writing messages than multi-press texters, and texting experience increased writing, but not reading, speed. General spelling and reading scores did not differ significantly with usual texting method. However, better literacy skills were associated with greater textese reading speed and accuracy. These findings add to the growing evidence for a positive relationship between texting proficiency and traditional literacy skills. Children’s text-messaging and literacy skills 3 The advent of mobile phones, and of text-messaging in particular, has changed the way that people communicate, and adolescents and children seem especially drawn to such technology. Australian surveys have revealed that 19% of 8to 11-year-olds and 76% of 12to 14-year-olds have their own mobile phone (Cupitt, 2008), and that 69% of mobile phone users aged 14 years and over use text-messaging (Australian Government, 2008), with 90% of children in Grades 7-12 sending a reported average of 11 texts per week (ABS, 2008). Text-messaging has also been the catalyst for a new writing style: textese. Described as a hybrid of spoken and written English (Plester & Wood, 2009), textese is a largely soundbased, or phonological, form of spelling that can reduce the time and cost of texting (Leung, 2007). Common abbreviations, or textisms, include letter and number homophones (c for see, 2 for to), contractions (txt for text), and non-conventional spellings (skool for school) (Plester, Wood, & Joshi, 2009; Thurlow, 2003). Estimates of the proportion of textisms that children use in their messages range from 21-47% (increasing with age) in naturalistic messages (Wood, Plester, & Bowyer, 2009), to 34% for messages elicited by a given scenario (Plester et al., 2009), to 50-58% for written messages that children ‘translated’ to and from textese (Plester, Wood, & Bell, 2008). One aim of the current study was to examine the efficiency of using textese for both the message writer and the reader, in order to understand the reasons behind (Australian) children’s use of textisms. The spread of textese has been attributed to texters’ desire to overcome the confines of the alphanumeric mobile phone keypad (Crystal, 2008). Since several letters are assigned to each number, the multi-press style of texting requires the somewhat laborious pressing of the same button one to four times to type each letter (Taylor & Vincent, 2005). The use of textese thus has obvious savings for multi-press texters, of both time and screen-space (as message character count cannot exceed 160). 
However, there is evidence, discussed below, that reading textese can be relatively slow and difficult for the message recipient, compared to Children’s text-messaging and literacy skills 4 reading conventional English. Since the use of textese is now widespread, it is important to examine the potential advantages and disadvantages that this form of writing may have for message senders and recipients, especially children, whose knowledge of conventional English spelling is still developing. To test the potential advantages of using textese for multi-press texters, Neville (2003) examined the speed and accuracy of textese versus conventional English in writing and reading text messages. British girls aged 11-16 years were dictated two short passages to type into a mobile phone: one using conventional English spelling, and the other “as if writing to a friend”. They also read two messages aloud from the mobile phone, one in conventional English, and the other in textese. The proportion of textisms produced is not reported, but no differences in textese use were observed between texters and non-texters. Writing time was significantly faster for textese than conventional English messages, with greater use of textisms significantly correlated with faster message typing times. However, participants were significantly faster at reading messages written in conventional English than in textese, regardless of their usual texting frequency. Kemp (2010) largely followed Neville’s (2003) design, but with 61 Australian undergraduates (mean age 22 years), all regular texters. These adults, too, were significantly faster at writing, but slower at reading, messages written in textese than in conventional English, regardless of their usual messaging frequency. Further, adults also made significantly more reading errors for messages written in textese than conventional English. These findings converge on the important conclusion that while the use of textisms makes writing more efficient for the message sender, it costs the receiver more time to read it. However, both Neville (2003) and Kemp (2010) examined only multi-press method texting, and not the predictive texting method now also available. Predictive texting requires only a single key-press per letter, and a dictionary-based system suggests one or more likely words Children’s text-messaging and literacy skills 5 based on the combinations entered (Taylor & Vincent, 2005). Textese may be used less by predictive texters than multi-press texters for two reasons. Firstly, predictive texting requires fewer key-presses than multi-press texting, which reduces the need to save time by taking linguistic short-cuts. Secondly, the dictionary-based predictive system makes it more difficult to type textisms that are not pre-programmed into the dictionary. Predictive texting is becoming increasingly popular, with recent studies reporting that 88% of Australian adults (Kemp, in press), 79% of Australian 13to 15-year-olds (De Jonge & Kemp, in press) and 55% of British 10to 12-year-olds (Plester et al., 2009) now use this method. Another aim of this study was thus to compare the reading and writing of textese and conventional English messages in children using their typical input method: predictive or multi-press texting, as well as in children who do not normally text. 
Finally, this study sought to investigate the popular assumption that exposure to unconventional word spellings might compromise children’s conventional literacy skills (e.g., Huang, 2008; Sutherland, 2002), with media articles revealing widespread disapproval of this communication style (Thurlow, 2006). In contrast, some authors have suggested that the use of textisms might actually improve children’s literacy skills (e.g., Crystal, 2008). Many textisms commonly used by children rely on the ability to distinguish, blend, and/or delete letter sounds (Plester et al., 2008, 2009). Practice at reading and creating textisms may therefore lead to improved phonological awareness (Crystal, 2008), which consistently predicts both reading and spelling prowess (e.g., Bradley & Bryant, 1983; Lundberg, Frost, & Petersen, 1988). Alternatively, children who use more textisms may do so because they have better phonological awareness, or poorer spellers may be drawn to using textisms to mask weak spelling ability (e.g., Sutherland, 2002). Thus, studying children’s textism use can provide further information on the links between the component skills that constitute both conventional and alternative, including textism-based, literacy. Children’s text-messaging and literacy skills 6 There is evidence for a positive link between the use of textisms and literacy skills in preteen children. Plester et al. (2008) asked 10to 12-year-old British children to translate messages from standard English to textese, and vice versa, with pen and paper. They found a significant positive correlation between textese use and verbal reasoning scores (Study 1) and spelling scores (Study 2). Plester et al. (2009) elicited text messages from a similar group of children by asking them to write messages in response to a given scenario. Again, textism use was significantly positively associated with word reading ability and phonological awareness scores (although not with spelling scores). Neville (2003) found that the number of textisms written, and the number read accurately, as well as the speed with which both conventional and textese messages were read and written, all correlated significantly with general spelling skill in 11to 16-year-old girls. The cross-sectional nature of these studies, and of the current study, means that causal relationships cannot be firmly established. However, Wood et al. (2009) report on a longitudinal study in which 8to 12-year-old children’s use of textese at the beginning of the school year predicted their skills in reading ability and phonological awareness at the end of the year, even after controlling for verbal IQ. These results provide the first support for the idea that textism use is driving the development of literacy skills, and thus that this use of technology can improve learning in the area of language and literacy. Taken together, these findings also provide important evidence against popular media claims that the use of textese is harming children’s traditional literacy skills. No similar research has yet been published with children outside the UK. The aim of the current study was thus to examine the speed and proficiency of textese use in Australian 10to 12-year-olds and, for the first time, to compare the r",
"title": ""
},
{
"docid": "764d6f45cd9dc08963a0e4d21b23d470",
"text": "Implementing and fleshing out a number of psychological and neuroscience theories of cognition, the LIDA conceptual model aims at being a cognitive “theory of everything.” With modules or processes for perception, working memory, episodic memories, “consciousness,” procedural memory, action selection, perceptual learning, episodic learning, deliberation, volition, and non-routine problem solving, the LIDA model is ideally suited to provide a working ontology that would allow for the discussion, design, and comparison of AGI systems. The LIDA architecture is based on the LIDA cognitive cycle, a sort of “cognitive atom.” The more elementary cognitive modules and processes play a role in each cognitive cycle. Higher-level processes are performed over multiple cycles. In addition to giving a quick overview of the LIDA conceptual model, and its underlying computational technology, we argue for the LIDA architecture’s role as a foundational architecture for an AGI. Finally, lessons For AGI researchers drawn from the model and its architecture are discussed.",
"title": ""
},
{
"docid": "47e06f5c195d2e1ecb6199b99ef1ee2d",
"text": "We study weakly-supervised video object grounding: given a video segment and a corresponding descriptive sentence, the goal is to localize objects that are mentioned from the sentence in the video. During training, no object bounding boxes are available, but the set of possible objects to be grounded is known beforehand. Existing approaches in the image domain use Multiple Instance Learning (MIL) to ground objects by enforcing matches between visual and semantic features. A naive extension of this approach to the video domain is to treat the entire segment as a bag of spatial object proposals. However, an object existing sparsely across multiple frames might not be detected completely since successfully spotting it from one single frame would trigger a satisfactory match. To this end, we propagate the weak supervisory signal from the segment level to frames that likely contain the target object. For frames that are unlikely to contain the target objects, we use an alternative penalty loss. We also leverage the interactions among objects as a textual guide for the grounding. We evaluate our model on the newlycollected benchmark YouCook2-BoundingBox and show improvements over competitive baselines.",
"title": ""
},
{
"docid": "b1d534c6df789c45f636e69480517183",
"text": "Virtual switches are a crucial component of SDN-based cloud systems, enabling the interconnection of virtual machines in a flexible and “software-defined” manner. This paper raises the alarm on the security implications of virtual switches. In particular, we show that virtual switches not only increase the attack surface of the cloud, but virtual switch vulnerabilities can also lead to attacks of much higher impact compared to traditional switches. We present a systematic security analysis and identify four design decisions which introduce vulnerabilities. Our findings motivate us to revisit existing threat models for SDN-based cloud setups, and introduce a new attacker model for SDN-based cloud systems using virtual switches. We demonstrate the practical relevance of our analysis using a case study with Open vSwitch and OpenStack. Employing a fuzzing methodology, we find several exploitable vulnerabilities in Open vSwitch. Using just one vulnerability we were able to create a worm that can compromise hundreds of servers in a matter of minutes. Our findings are applicable beyond virtual switches: NFV and high-performance fast path implementations face similar issues. This paper also studies various mitigation techniques and discusses how to redesign virtual switches for their integration. ∗Also with, Internet Network Architectures, TU Berlin. †Also with, Dept. of Computer Science, Aalborg University. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. SOSR’18, March 28-29, 2018, Los Angeles, CA, USA © 2018 Copyright held by the owner/author(s). Publication rights licensed to the Association for Computing Machinery. ACM ISBN .... . . $15.00 https://doi.org/...",
"title": ""
},
{
"docid": "fc2a45aa3ec8e4d27b9fc1a86d24b86d",
"text": "Information and Communication Technologies (ICT) rapidly migrate towards the Future Internet (FI) era, which is characterized, among others, by powerful and complex network infrastructures and innovative applications, services and content. An application area that attracts immense research interest is transportation. In particular, traffic congestions, emergencies and accidents reveal inefficiencies in transportation infrastructures, which can be overcome through the exploitation of ICT findings, in designing systems that are targeted at traffic / emergency management, namely Intelligent Transportation Systems (ITS). This paper considers the potential connection of vehicles to form vehicular networks that communicate with each other at an IP-based level, exchange information either directly or indirectly (e.g. through social networking applications and web communities) and contribute to a more efficient and green future world of transportation. In particular, the paper presents the basic research areas that are associated with the concept of Internet of Vehicles (IoV) and outlines the fundamental research challenges that arise there from.",
"title": ""
},
{
"docid": "c62dfcc83ca24450ea1a7e12a17ac93e",
"text": "Lymphedema and lipedema are chronic progressive disorders for which no causal therapy exists so far. Many general practitioners will rarely see these disorders with the consequence that diagnosis is often delayed. The pathophysiological basis is edematization of the tissues. Lymphedema involves an impairment of lymph drainage with resultant fluid build-up. Lipedema arises from an orthostatic predisposition to edema in pathologically increased subcutaneous tissue. Treatment includes complex physical decongestion by manual lymph drainage and absolutely uncompromising compression therapy whether it is by bandage in the intensive phase to reduce edema or with a flat knit compression stocking to maintain volume.",
"title": ""
},
{
"docid": "17d927926f34efbdcb542c15fcf4e442",
"text": "Automated Guided Vehicles (AGVs) are now becoming popular in automated materials handling systems, flexible manufacturing systems and even containers handling applications at seaports. In the past two decades, much research and many papers have been devoted to various aspects of the AGV technology and rapid progress has been witnessed. As one of the enabling technologies, scheduling and routing of AGVs have attracted considerable attention; many algorithms about scheduling and routing of AGVs have been proposed. However, most of the existing results are applicable to systems with small number of AGVs, offering low degree of concurrency. With drastically increased number of AGVs in recent applications (e.g. in the order of a hundred in a container terminal), efficient scheduling and routing algorithms are needed to resolve the increased contention of resources (e.g. path, loading and unloading buffers) among AGVs. Because they often employ regular route topologies, the new applications also demand innovative strategies to increase system performance. This survey paper first gives an account of the emergence of the problems of AGV scheduling and routing. It then differentiates them from several related problems, and surveys and classifies major existing algorithms for the problems. Noting the similarities with known problems in parallel and distributed systems, it suggests to apply analogous ideas in routing and scheduling AGVs. It concludes by pointing out fertile areas for future study.",
"title": ""
},
{
"docid": "4b5ac4095cb2695a1e5282e1afca80a4",
"text": "Threeexperimentsdocument that14-month-old infants’construalofobjects (e.g.,purple animals) is influenced by naming, that they can distinguish between the grammatical form noun and adjective, and that they treat this distinction as relevant to meaning. In each experiment, infants extended novel nouns (e.g., “This one is a blicket”) specifically to object categories (e.g., animal), and not to object properties (e.g., purple things). This robust noun–category link is related to grammatical form and not to surface differences in the presentation of novel words (Experiment 3). Infants’extensions of novel adjectives (e.g., “This one is blickish”) were more fragile: They extended adjectives specifically to object properties when the property was color (Experiment 1), but revealed a less precise mapping when the property was texture (Experiment 2). These results reveal that by 14 months, infants distinguish between grammatical forms and utilize these distinctions in determining the meaning of novel words.",
"title": ""
},
{
"docid": "146387ae8853279d21f0b4c2f9b3e400",
"text": "We address a class of manipulation problems where the robot perceives the scene with a depth sensor and can move its end effector in a space with six degrees of freedom – 3D position and orientation. Our approach is to formulate the problem as a Markov decision process (MDP) with abstract yet generally applicable state and action representations. Finding a good solution to the MDP requires adding constraints on the allowed actions. We develop a specific set of constraints called hierarchical SE(3) sampling (HSE3S) which causes the robot to learn a sequence of gazes to focus attention on the task-relevant parts of the scene. We demonstrate the effectiveness of our approach on three challenging pick-place tasks (with novel objects in clutter and nontrivial places) both in simulation and on a real robot, even though all training is done in simulation.",
"title": ""
},
{
"docid": "3c631c249254a24d9343a971a05af74e",
"text": "The selection of the new requirements which should be included in the development of the release of a software product is an important issue for software companies. This problem is known in the literature as the Next Release Problem (NRP). It is an NP-hard problem which simultaneously addresses two apparently contradictory objectives: the total cost of including the selected requirements in the next release of the software package, and the overall satisfaction of a set of customers who have different opinions about the priorities which should be given to the requirements, and also have different levels of importance within the company. Moreover, in the case of managing real instances of the problem, the proposed solutions have to satisfy certain interaction constraints which arise among some requirements. In this paper, the NRP is formulated as a multiobjective optimization problem with two objectives (cost and satisfaction) and three constraints (types of interactions). A multiobjective swarm intelligence metaheuristic is proposed to solve two real instances generated from data provided by experts. Analysis of the results showed that the proposed algorithm can efficiently generate high quality solutions. These were evaluated by comparing them with different proposals (in terms of multiobjective metrics). The results generated by the present approach surpass those generated in other relevant work in the literature (e.g. our technique can obtain a HV of over 60% for the most complex dataset managed, while the other approaches published cannot obtain an HV of more than 40% for the same dataset). 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ce2a19f9f3ee13978845f1ede238e5b2",
"text": "Optimised allocation of system architectures is a well researched area as it can greatly reduce the developmental cost of systems and increase performance and reliability in their respective applications. In conjunction with the recent shift from federated to integrated architectures in automotive, and the increasing complexity of computer systems, both in terms of software and hardware, the applications of design space exploration and optimised allocation of system architectures are of great interest. This thesis proposes a method to derive architectures and their allocations for systems with real-time constraints. The method implements integer linear programming to solve for an optimised allocation of system architectures according to a set of linear constraints while taking resource requirements, communication dependencies, and manual design choices into account. Additionally, this thesis describes and evaluates an industrial use case using the method wherein the timing characteristics of a system were evaluated, and, the method applied to simultaneously derive a system architecture, and, an optimised allocation of the system architecture. This thesis presents evidence and validations that suggest the viability of the method and its use case in an industrial setting. The work in this thesis sets precedence for future research and development, as well as future applications of the method in both industry and academia.",
"title": ""
},
{
"docid": "7998670588bee1965fd5a18be9ccb0d9",
"text": "In this letter, a hybrid visual servoing with a hierarchical task-composition control framework is described for aerial manipulation, i.e., for the control of an aerial vehicle endowed with a robot arm. The proposed approach suitably combines into a unique hybrid-control framework the main benefits of both image-based and position-based control schemes. Moreover, the underactuation of the aerial vehicle has been explicitly taken into account in a general formulation, together with a dynamic smooth activation mechanism. Both simulation case studies and experiments are presented to demonstrate the performance of the proposed technique.",
"title": ""
},
{
"docid": "099a2ee305b703a765ff3579f0e0c1c3",
"text": "To enhance the security of mobile cloud users, a few proposals have been presented recently. However we argue that most of them are not suitable for mobile cloud where mobile users might join or leave the mobile networks arbitrarily. In this paper, we design a secure mobile user-based data service mechanism (SDSM) to provide confidentiality and fine-grained access control for data stored in the cloud. This mechanism enables the mobile users to enjoy a secure outsourced data services at a minimized security management overhead. The core idea of SDSM is that SDSM outsources not only the data but also the security management to the mobile cloud in a trust way. Our analysis shows that the proposed mechanism has many advantages over the existing traditional methods such as lower overhead and convenient update, which could better cater the requirements in mobile cloud computing scenarios.",
"title": ""
},
{
"docid": "84dbdf4c145fc8213424f6d51550faa9",
"text": "Because acute cholangitis sometimes rapidly progresses to a severe form accompanied by organ dysfunction, caused by the systemic inflammatory response syndrome (SIRS) and/or sepsis, prompt diagnosis and severity assessment are necessary for appropriate management, including intensive care with organ support and urgent biliary drainage in addition to medical treatment. However, because there have been no standard criteria for the diagnosis and severity assessment of acute cholangitis, practical clinical guidelines have never been established. The aim of this part of the Tokyo Guidelines is to propose new criteria for the diagnosis and severity assessment of acute cholangitis based on a systematic review of the literature and the consensus of experts reached at the International Consensus Meeting held in Tokyo 2006. Acute cholangitis can be diagnosed if the clinical manifestations of Charcot's triad, i.e., fever and/or chills, abdominal pain (right upper quadrant or epigastric), and jaundice are present. When not all of the components of the triad are present, then a definite diagnosis can be made if laboratory data and imaging findings supporting the evidence of inflammation and biliary obstruction are obtained. The severity of acute cholangitis can be classified into three grades, mild (grade I), moderate (grade II), and severe (grade III), on the basis of two clinical factors, the onset of organ dysfunction and the response to the initial medical treatment. \"Severe (grade III)\" acute cholangitis is defined as acute cholangitis accompanied by at least one new-onset organ dysfunction. \"Moderate (grade II)\" acute cholangitis is defined as acute cholangitis that is unaccompanied by organ dysfunction, but that does not respond to the initial medical treatment, with the clinical manifestations and/or laboratory data not improved. \"Mild (grade I)\" acute cholangitis is defined as acute cholangitis that responds to the initial medical treatment, with the clinical findings improved.",
"title": ""
},
{
"docid": "5f54125c0114f4fadc055e721093a49e",
"text": "In this study, a fuzzy logic based autonomous vehicle control system is designed and tested in The Open Racing Car Simulator (TORCS) environment. The aim of this study is that vehicle complete the race without to get any damage and to get out of the way. In this context, an intelligent control system composed of fuzzy logic and conventional control structures has been developed such that the racing car is able to compete the race autonomously. In this proposed structure, once the vehicle's gearshifts have been automated, a fuzzy logic based throttle/brake control system has been designed such that the racing car is capable to accelerate/decelerate in a realistic manner as well as to drive at desired velocity. The steering control problem is also handled to end up with a racing car that is capable to travel on the road even in the presence of sharp curves. In this context, we have designed a fuzzy logic based positioning system that uses the knowledge of the curvature ahead to determine an appropriate position. The game performance of the developed fuzzy logic systems can be observed from https://youtu.be/qOvEz3-PzRo.",
"title": ""
},
{
"docid": "319ba1d449d2b65c5c58b5cc0fdbed67",
"text": "This paper introduces a new technology and tools from the field of text-based information retrieval. The authors have developed – a fingerprint-based method for a highly efficient near similarity search, and – an application of this method to identify plagiarized passages in large document collections. The contribution of our work is twofold. Firstly, it is a search technology that enables a new quality for the comparative analysis of complex and large scientific texts. Secondly, this technology gives rise to a new class of tools for plagiarism analysis, since the comparison of entire books becomes computationally feasible. The paper is organized as follows. Section 1 gives an introduction to plagiarism delicts and related detection methods, Section 2 outlines the method of fuzzy-fingerprints as a means for near similarity search, and Section 3 shows our methods in action: It gives examples for near similarity search as well as plagiarism detection and discusses results from a comprehensive performance analyses. 1 Plagiarism Analysis Plagiarism is the act of claiming to be the author of material that someone else actually wrote (Encyclopædia Britannica 2005), and, with the ubiquitousness",
"title": ""
},
{
"docid": "98911eead8eb90ca295425917f5cd522",
"text": "We provide strong evidence from multiple tests that credit lines (CLs) play special roles in syndicated loan packages. We find that CLs are associated with lower interest rate spreads on institutional term loans (ITLs) in the same loan packages. CLs also help improve secondary market liquidity of ITLs. These effects are robust to within-firm-year analysis. Using Lehman Brothers bankruptcy as a quasi-natural experiment further confirms our conclusions. These findings support the Bank Specialness Hypothesis that banks play valuable roles in alleviating information problems and that CLs are one conduit for this specialness.",
"title": ""
}
] |
scidocsrr
|
e3aea73581e42c468cb3c5f58d648ad1
|
Reputation and social network analysis in multi-agent systems
|
[
{
"docid": "8e70aea51194dba675d4c3e88ee6b9ad",
"text": "Trust is central to all transactions and yet economists rarely discuss the notion. It is treated rather as background environment, present whenever called upon, a sort of ever-ready lubricant that permits voluntary participation in production and exchange. In the standard model of a market economy it is taken for granted that consumers meet their budget constraints: they are not allowed to spend more than their wealth. Moreover, they always deliver the goods and services they said they would. But the model is silent on the rectitude of such agents. We are not told if they are persons of honour, conditioned by their upbringing always to meet the obligations they have chosen to undertake, or if there is a background agency which enforces contracts, credibly threatening to mete out punishment if obligations are not fulfilled a punishment sufficiently stiff to deter consumers from ever failing to fulfil them. The same assumptions are made for producers. To be sure, the standard model can be extended to allow for bankruptcy in the face of an uncertain future. One must suppose that there is a special additional loss to becoming bankrupt a loss of honour when honour matters, social and economic ostracism, a term in a debtors’ prison, and so forth. Otherwise, a person may take silly risks or, to make a more subtle point, take insufficient care in managing his affairs, but claim that he ran into genuine bad luck, that it was Mother Nature’s fault and not his own lack of ability or zeal.",
"title": ""
}
] |
[
{
"docid": "16c87d75564404d52fc2abac55297931",
"text": "SHADE is an adaptive DE which incorporates success-history based parameter adaptation and one of the state-of-the-art DE algorithms. This paper proposes L-SHADE, which further extends SHADE with Linear Population Size Reduction (LPSR), which continually decreases the population size according to a linear function. We evaluated the performance of L-SHADE on CEC2014 benchmarks and compared its search performance with state-of-the-art DE algorithms, as well as the state-of-the-art restart CMA-ES variants. The experimental results show that L-SHADE is quite competitive with state-of-the-art evolutionary algorithms.",
"title": ""
},
{
"docid": "a33c723760f9870744ab004b693e8904",
"text": "Portfolio analysis of the publication profile of a unit of interest, ranging from individuals, organizations, to a scientific field or interdisciplinary programs, aims to inform analysts and decision makers about the position of the unit, where it has been, and where it may go in a complex adaptive environment. A portfolio analysis may aim to identify the gap between the current position of an organization and a goal that it intends to achieve or identify competencies of multiple institutions. We introduce a new visual analytic method for analyzing, comparing, and contrasting characteristics of publication portfolios. The new method introduces a novel design of dual-map thematic overlays on global maps of science. Each publication portfolio can be added as one layer of dual-map overlays over two related but distinct global maps of science, one for citing journals and the other for cited journals. We demonstrate how the new design facilitates a portfolio analysis in terms of patterns emerging from the distributions of citation threads and the dynamics of trajectories as a function of space and time. We first demonstrate the analysis of portfolios defined on a single source article. Then we contrast publication portfolios of multiple comparable units of interest, namely, colleges in universities, corporate research organizations. We also include examples of overlays of scientific fields. We expect the new method will provide new insights to portfolio analysis.",
"title": ""
},
{
"docid": "d597d4a1c32256b95524876218d963da",
"text": "E-commerce in today's conditions has the highest dependence on network infrastructure of banking. However, when the possibility of communicating with the Banking network is not provided, business activities will suffer. This paper proposes a new approach of digital wallet based on mobile devices without the need to exchange physical money or communicate with banking network. A digital wallet is a software component that allows a user to make an electronic payment in cash (such as a credit card or a digital coin), and hides the low-level details of executing the payment protocol that is used to make the payment. The main features of proposed architecture are secure awareness, fault tolerance, and infrastructure-less protocol.",
"title": ""
},
{
"docid": "17b85b7a5019248c4e43b4f5edc68ffb",
"text": "We establish a new connection between value and policy based reinforcement learning (RL) based on a relationship between softmax temporal value consistency and policy optimality under entropy regularization. Specifically, we show that softmax consistent action values correspond to optimal entropy regularized policy probabilities along any action sequence, regardless of provenance. From this observation, we develop a new RL algorithm, Path Consistency Learning (PCL), that minimizes a notion of soft consistency error along multi-step action sequences extracted from both onand off-policy traces. We examine the behavior of PCL in different scenarios and show that PCL can be interpreted as generalizing both actor-critic and Q-learning algorithms. We subsequently deepen the relationship by showing how a single model can be used to represent both a policy and the corresponding softmax state values, eliminating the need for a separate critic. The experimental evaluation demonstrates that PCL significantly outperforms strong actor-critic and Q-learning baselines across several benchmarks.2",
"title": ""
},
{
"docid": "1a9026e0e8fdcd1fab24661beb9ac400",
"text": "Please check this box if you do not wish your email address to be published Acknowledgments: The authors would like to thank the anonymous reviewers for their valuable comments that have enabled the improvement of manuscript's quality. The authors would also like to acknowledge that the Before that, he served as a Researcher Grade D at the research center CERTH/ITI and at research center NCSR \" Demokritos \". He was also founder and manager of the eGovernment Unit at Archetypon SA, an international IT company. He holds a Diploma in Electrical Engineering from the National Technical University of Athens, Greece, and an MSc and PhD from Brunel University, UK. During the past years he has initiated and managed several research projects (e.g. Automation. He has about 200 research publications in the areas of software modeling and development for the domains of eGovernment, eBusiness, eLearning, eManufacturing etc. Structured Abstract: Purpose The purpose of this article is to consolidate existing knowledge and provide a deeper understanding of the use of Social Media (SM) data for predictions in various areas, such as disease outbreaks, product sales, stock market volatility, and elections outcome predictions. Design/methodology/approach The scientific literature was systematically reviewed to identify relevant empirical studies. These studies were analyzed and synthesized in the form of a proposed conceptual framework, which was thereafter applied to further analyze this literature, hence gaining new insights into the field. Findings The proposed framework reveals that all relevant studies can be decomposed into a small number of steps, and different approaches can be followed in each step. The application of the framework resulted in interesting findings. For example, most studies support SM predictive power, however more than one-third of these studies infer predictive power without employing predictive analytics. In addition, analysis suggests that there is a clear need for more advanced sentiment analysis methods as well as methods for identifying search terms for collection and filtering of raw SM data. Value The proposed framework enables researchers to classify and evaluate existing studies, to design scientifically rigorous new studies, and to identify the field's weaknesses, hence proposing future research directions. Purpose: The purpose of this article is to consolidate existing knowledge and provide a deeper understanding of the use of Social Media (SM) data for predictions in various areas, such as disease outbreaks, product sales, stock market volatility, and elections outcome predictions. Design/methodology/approach: The scientific literature was systematically reviewed …",
"title": ""
},
{
"docid": "f6669d0b53dd0ca789219874d35bf14e",
"text": "Saliva in the mouth is a biofluid produced mainly by three pairs of major salivary glands--the submandibular, parotid and sublingual glands--along with secretions from many minor submucosal salivary glands. Salivary gland secretion is a nerve-mediated reflex and the volume of saliva secreted is dependent on the intensity and type of taste and on chemosensory, masticatory or tactile stimulation. Long periods of low (resting or unstimulated) flow are broken by short periods of high flow, which is stimulated by taste and mastication. The nerve-mediated salivary reflex is modulated by nerve signals from other centers in the central nervous system, which is most obvious as hyposalivation at times of anxiety. An example of other neurohormonal influences on the salivary reflex is the circadian rhythm, which affects salivary flow and ionic composition. Cholinergic parasympathetic and adrenergic sympathetic autonomic nerves evoke salivary secretion, signaling through muscarinic M3 and adrenoceptors on salivary acinar cells and leading to secretion of fluid and salivary proteins. Saliva gland acinar cells are chloride and sodium secreting, and the isotonic fluid produced is rendered hypotonic by salivary gland duct cells as it flows to the mouth. The major proteins present in saliva are secreted by salivary glands, creating viscoelasticity and enabling the coating of oral surfaces with saliva. Salivary films are essential for maintaining oral health and regulating the oral microbiome. Saliva in the mouth contains a range of validated and potential disease biomarkers derived from epithelial cells, neutrophils, the microbiome, gingival crevicular fluid and serum. For example, cortisol levels are used in the assessment of stress, matrix metalloproteinases-8 and -9 appear to be promising markers of caries and periodontal disease, and a panel of mRNA and proteins has been proposed as a marker of oral squamous cell carcinoma. Understanding the mechanisms by which components enter saliva is an important aspect of validating their use as biomarkers of health and disease.",
"title": ""
},
{
"docid": "28f1b7635b777cf278cc8d53a5afafb9",
"text": "Visual Question Answering (VQA) is the task of taking as input an image and a free-form natural language question about the image, and producing an accurate answer. In this work we view VQA as a “feature extraction” module to extract image and caption representations. We employ these representations for the task of image-caption ranking. Each feature dimension captures (imagines) whether a fact (question-answer pair) could plausibly be true for the image and caption. This allows the model to interpret images and captions from a wide variety of perspectives. We propose score-level and representation-level fusion models to incorporate VQA knowledge in an existing state-of-the-art VQA-agnostic image-caption ranking model. We find that incorporating and reasoning about consistency between images and captions significantly improves performance. Concretely, our model improves state-of-the-art on caption retrieval by 7.1% and on image retrieval by 4.4% on the MSCOCO dataset.",
"title": ""
},
{
"docid": "cf6816d0a38296a3dc2c04894a102283",
"text": "This paper presents a high-efficiency positive buck- boost converter with mode-select circuits and feed-forward techniques. Four power transistors produce more conduction and more switching losses when the positive buck-boost converter operates in buck-boost mode. Utilizing the mode-select circuit, the proposed converter can decrease the loss of switches and let the positive buck-boost converter operate in buck, buck-boost, or boost mode. By adding feed-forward techniques, the proposed converter can improve transient response when the supply voltages are changed. The proposed converter has been fabricated with TSMC 0.35-μm CMOS 2P4M processes. The total chip area is 2.59 × 2.74 mm2 (with PADs), the output voltage is 3.3 V, and the regulated supply voltage range is from 2.5-5 V. Its switching frequency is 500 kHz and the maximum power efficiency is 91.6% as the load current equals 150 mA.",
"title": ""
},
{
"docid": "0f4ac688367d3ea43643472b7d75ffc9",
"text": "Many non-photorealistic rendering techniques exist to produce artistic ef fe ts from given images. Inspired by various artists, interesting effects can be produced b y using a minimal rendering, where the minimum refers to the number of tones as well as the nu mber and complexity of the primitives used for rendering. Our method is based on va rious computer vision techniques, and uses a combination of refined lines and blocks (po tentially simplified), as well as a small number of tones, to produce abstracted artistic re ndering with sufficient elements from the original image. We also considered a variety of methods to produce different artistic styles, such as colour and two-tone drawing s, and use semantic information to improve renderings for faces. By changing some intuitive par ameters a wide range of visually pleasing results can be produced. Our method is fully automatic. We demonstrate the effectiveness of our method with extensive experiments and a user study.",
"title": ""
},
{
"docid": "8961d0bd4ba45849bd8fa5c53c0cfb1d",
"text": "SUMMARY\nThe program MODELTEST uses log likelihood scores to establish the model of DNA evolution that best fits the data.\n\n\nAVAILABILITY\nThe MODELTEST package, including the source code and some documentation is available at http://bioag.byu. edu/zoology/crandall_lab/modeltest.html.",
"title": ""
},
{
"docid": "d00691959822087a1bddc3b411d27239",
"text": "We consider the lattice Boltzmann method for immiscible multiphase flow simulations. Classical lattice Boltzmann methods for this problem, e.g. the colour gradient method or the free energy approach, can only be applied when density and viscosity ratios are small. Moreover, they use additional fields defined on the whole domain to describe the different phases and model phase separation by special interactions at each node. In contrast, our approach simulates the flow using a single field and separates the fluid phases by a free moving interface. The scheme is based on the lattice Boltzmann method and uses the level set method to compute the evolution of the interface. To couple the fluid phases, we develop new boundary conditions which realise the macroscopic jump conditions at the interface and incorporate surface tension in the lattice Boltzmann framework. Various simulations are presented to validate the numerical scheme, e.g. two-phase channel flows, the Young-Laplace law for a bubble and viscous fingering in a Hele-Shaw cell. The results show that the method is feasible over a wide range of density and viscosity differences.",
"title": ""
},
{
"docid": "704df193801e9cd282c0ce2f8a72916b",
"text": "We present our preliminary work in developing augmented reali ty systems to improve methods for the construction, inspection, and renovatio n of architectural structures. Augmented reality systems add virtual computer-generated mate rial to the surrounding physical world. Our augmented reality systems use see-through headworn displays to overlay graphics and sounds on a person’s naturally occurring vision and hearing. As the person moves about, the position and orientation of his or her head is tracked, allowing the overlaid material to remai n tied to the physical world. We describe an experimental augmented reality system tha t shows the location of columns behind a finished wall, the location of re-bar s inside one of the columns, and a structural analysis of the column. We also discuss our pre liminary work in developing an augmented reality system for improving the constructio n of spaceframes. Potential uses of more advanced augmented reality systems are presented.",
"title": ""
},
{
"docid": "c8daa2571cd7808664d3dbe775cf60ab",
"text": "OBJECTIVE\nTo review the research addressing the relationship of childhood trauma to psychosis and schizophrenia, and to discuss the theoretical and clinical implications.\n\n\nMETHOD\nRelevant studies and previous review papers were identified via computer literature searches.\n\n\nRESULTS\nSymptoms considered indicative of psychosis and schizophrenia, particularly hallucinations, are at least as strongly related to childhood abuse and neglect as many other mental health problems. Recent large-scale general population studies indicate the relationship is a causal one, with a dose-effect.\n\n\nCONCLUSION\nSeveral psychological and biological mechanisms by which childhood trauma increases risk for psychosis merit attention. Integration of these different levels of analysis may stimulate a more genuinely integrated bio-psycho-social model of psychosis than currently prevails. Clinical implications include the need for staff training in asking about abuse and the need to offer appropriate psychosocial treatments to patients who have been abused or neglected as children. Prevention issues are also identified.",
"title": ""
},
{
"docid": "5752868bb14f434ce281733f2ecf84f8",
"text": "Tessellation in fundus is not only a visible feature for aged-related and myopic maculopathy but also confuse retinal vessel segmentation. The detection of tessellated images is an inevitable processing in retinal image analysis. In this work, we propose a model using convolutional neural network for detecting tessellated images. The input to the model is pre-processed fundus image, and the output indicate whether this photograph has tessellation or not. A database with 12,000 colour retinal images is collected to evaluate the classification performance. The best tessellation classifier achieves accuracy of 97.73% and AUC value of 0.9659 using pretrained GoogLeNet and transfer learning technique.",
"title": ""
},
{
"docid": "1f7bd85c5b28f97565d8b38781e875ab",
"text": "Parental socioeconomic status is among the widely cited factors that has strong association with academic performance of students. Explanatory research design was employed to assess the effects of parents’ socioeconomic status on the academic achievement of students in regional examination. To that end, regional examination result of 538 randomly selected students from thirteen junior secondary schools has been analysed using percentage, independent samples t-tests, Spearman’s rho correlation and one way ANOVA. The results of the analysis revealed that socioeconomic status of parents (particularly educational level and occupational status of parents) has strong association with the academic performance of students. Students from educated and better off families have scored higher result in their regional examination than their counterparts. Being a single parent student and whether parents are living together or not have also a significant impact on the academic performance of students. Parents’ age did not have a significant association with the performance of students.",
"title": ""
},
{
"docid": "6868e3b2432d9914a9b4a4fd2b50b3ee",
"text": "Nutritional deficiencies detection for coffee leaves is a task which is often undertaken manually by experts on the field known as agronomists. The process they follow to carry this task is based on observation of the different characteristics of the coffee leaves while relying on their own experience. Visual fatigue and human error in this empiric approach cause leaves to be incorrectly labeled and thus affecting the quality of the data obtained. In this context, different crowdsourcing approaches can be applied to enhance the quality of the data extracted. These approaches separately propose the use of voting systems, association rule filters and evolutive learning. In this paper, we extend the use of association rule filters and evolutive approach by combining them in a methodology to enhance the quality of the data while guiding the users during the main stages of data extraction tasks. Moreover, our methodology proposes a reward component to engage users and keep them motivated during the crowdsourcing tasks. The extracted dataset by applying our proposed methodology in a case study on Peruvian coffee leaves resulted in 93.33% accuracy with 30 instances collected by 8 experts and evaluated by 2 agronomic engineers with background on coffee leaves. The accuracy of the dataset was higher than independently implementing the evolutive feedback strategy and an empiric approach which resulted in 86.67% and 70% accuracy respectively under the same conditions.",
"title": ""
},
{
"docid": "20f43c14feaf2da1e8999403bf350855",
"text": "In this paper we propose a new approach to genetic optimization of modular neural networks with fuzzy response integration. The architecture of the modular neural network and the structure of the fuzzy system (for response integration) are designed using genetic algorithms. The proposed methodology is applied to the case of human recognition based on three biometric measures, namely iris, ear, and voice. Experimental results show that optimal modular neural networks can be designed with the use of genetic algorithms and as a consequence the recognition rates of such networks can be improved significantly. In the case of optimization of the fuzzy system for response integration, the genetic algorithm not only adjusts the number of membership functions and rules, but also allows the variation on the type of logic (type-1 or type-2) and the change in the inference model (switching to Mamdani model or Sugeno model). Another interesting finding of this work is that when human recognition is performed under noisy conditions, the response integrators of the modular networks constructed by the genetic algorithm are found to be optimal when using type-2 fuzzy logic. This could have been expected as there has been experimental evidence from previous works that type-2 fuzzy logic is better suited to model higher levels of uncertainty. 2012 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "a03a67b3442ef08fe378976377e76f76",
"text": "The method of conjugate gradients provides a very effective way to optimize large, deterministic systems by gradient descent. In its standard form, however, it is not amenable to stochastic approximation of the gradient. Here we explore ideas from conjugate gradient in the stochastic (online) setting, using fast Hessian-gradient products to set up low-dimensional Krylov subspaces within individual mini-batches. In our benchmark experiments the resulting online learning algorithms converge orders of magnitude faster than ordinary stochastic gradient descent.",
"title": ""
},
{
"docid": "4584a3a2b0e1cb30ba1976bd564d74b9",
"text": "Deep neural networks (DNNs) have achieved great success, but the applications to mobile devices are limited due to their huge model size and low inference speed. Much effort thus has been devoted to pruning DNNs. Layer-wise neuron pruning methods have shown their effectiveness, which minimize the reconstruction error of linear response with a limited number of neurons in each single layer pruning. In this paper, we propose a new layer-wise neuron pruning approach by minimizing the reconstruction error of nonlinear units, which might be more reasonable since the error before and after activation can change significantly. An iterative optimization procedure combining greedy selection with gradient decent is proposed for single layer pruning. Experimental results on benchmark DNN models show the superiority of the proposed approach. Particularly, for VGGNet, the proposed approach can compress its disk space by 13.6× and bring a speedup of 3.7×; for AlexNet, it can achieve a compression rate of 4.1× and a speedup of 2.2×, respectively.",
"title": ""
},
{
"docid": "f1a0ea0829f44b3ec235074521dc55c3",
"text": "CONTEXT\nWithout detailed evidence of their effectiveness, pedometers have recently become popular as a tool for motivating physical activity.\n\n\nOBJECTIVE\nTo evaluate the association of pedometer use with physical activity and health outcomes among outpatient adults.\n\n\nDATA SOURCES\nEnglish-language articles from MEDLINE, EMBASE, Sport Discus, PsychINFO, Cochrane Library, Thompson Scientific (formerly known as Thompson ISI), and ERIC (1966-2007); bibliographies of retrieved articles; and conference proceedings.\n\n\nSTUDY SELECTION\nStudies were eligible for inclusion if they reported an assessment of pedometer use among adult outpatients, reported a change in steps per day, and included more than 5 participants.\n\n\nDATA EXTRACTION AND DATA SYNTHESIS\nTwo investigators independently abstracted data about the intervention; participants; number of steps per day; and presence or absence of obesity, diabetes, hypertension, or hyperlipidemia. Data were pooled using random-effects calculations, and meta-regression was performed.\n\n\nRESULTS\nOur searches identified 2246 citations; 26 studies with a total of 2767 participants met inclusion criteria (8 randomized controlled trials [RCTs] and 18 observational studies). The participants' mean (SD) age was 49 (9) years and 85% were women. The mean intervention duration was 18 weeks. In the RCTs, pedometer users significantly increased their physical activity by 2491 steps per day more than control participants (95% confidence interval [CI], 1098-3885 steps per day, P < .001). Among the observational studies, pedometer users significantly increased their physical activity by 2183 steps per day over baseline (95% CI, 1571-2796 steps per day, P < .0001). Overall, pedometer users increased their physical activity by 26.9% over baseline. An important predictor of increased physical activity was having a step goal such as 10,000 steps per day (P = .001). When data from all studies were combined, pedometer users significantly decreased their body mass index by 0.38 (95% CI, 0.05-0.72; P = .03). This decrease was associated with older age (P = .001) and having a step goal (P = .04). Intervention participants significantly decreased their systolic blood pressure by 3.8 mm Hg (95% CI, 1.7-5.9 mm Hg, P < .001). This decrease was associated with greater baseline systolic blood pressure (P = .009) and change in steps per day (P = .08).\n\n\nCONCLUSIONS\nThe results suggest that the use of a pedometer is associated with significant increases in physical activity and significant decreases in body mass index and blood pressure. Whether these changes are durable over the long term is undetermined.",
"title": ""
}
] |
scidocsrr
|
e78085305a6078d0f412ce3784ef2718
|
Post-Quantum Cryptography on FPGA Based on Isogenies on Elliptic Curves
|
[
{
"docid": "4dcc069e33f2831c7ccdd719c51607e1",
"text": "We survey the progress that has been made on the arithmetic of elliptic curves in the past twenty-five years, with particular attention to the questions highlighted in Tate’s 1974 Inventiones paper.",
"title": ""
}
] |
[
{
"docid": "6399b2d75c6051d284594d327b2ad17a",
"text": "System design and evaluation methodologies receive significant attention in natural language processing (NLP), with the systems typically being evaluated on a common task and against shared data sets. This enables direct system comparison and facilitates progress in the field. However, computational work on metaphor is considerably more fragmented than similar research efforts in other areas of NLP and semantics. Recent years have seen a growing interest in computational modeling of metaphor, with many new statistical techniques opening routes for improving system accuracy and robustness. However, the lack of a common task definition, shared data set, and evaluation strategy makes the methods hard to compare, and thus hampers our progress as a community in this area. The goal of this article is to review the system features and evaluation strategies that have been proposed for the metaphor processing task, and to analyze their benefits and downsides, with the aim of identifying the desired properties of metaphor processing systems and a set of requirements for their evaluation.",
"title": ""
},
{
"docid": "72f6f6484499ccaa0188d2a795daa74c",
"text": "Road detection is one of the most important research areas in driver assistance and automated driving field. However, the performance of existing methods is still unsatisfactory, especially in severe shadow conditions. To overcome those difficulties, first we propose a novel shadow-free feature extractor based on the color distribution of road surface pixels. Then we present a road detection framework based on the extractor, whose performance is more accurate and robust than that of existing extractors. Also, the proposed framework has much low-complexity, which is suitable for usage in practical systems.",
"title": ""
},
{
"docid": "98218545bf3474b46857d828e1b86004",
"text": "Blockchain-based smart contracts are considered a promising technology for handling financial agreements securely. In order to realize this vision, we need a formal language to unambiguously describe contract clauses. We introduce Findel – a purely declarative financial domain-specific language (DSL) well suited for implementation in blockchain networks. We implement an Ethereum smart contract that acts as a marketplace for Findel contracts and measure the cost of its operation. We analyze challenges in modeling financial agreements in decentralized networks and outline directions for future work.",
"title": ""
},
{
"docid": "a2575a6a0516db2e47aab0388c5e9677",
"text": "Isaac Miller and Mark Campbell Sibley School of Mechanical and Aerospace Engineering Dan Huttenlocher and Frank-Robert Kline Computer Science Department Aaron Nathan, Sergei Lupashin, and Jason Catlin School of Electrical and Computer Engineering Brian Schimpf School of Operations Research and Information Engineering Pete Moran, Noah Zych, Ephrahim Garcia, Mike Kurdziel, and Hikaru Fujishima Sibley School of Mechanical and Aerospace Engineering Cornell University Ithaca, New York 14853 e-mail: itm2@cornell.edu, mc288@cornell.edu, dph@cs.cornell.edu, amn32@cornell.edu, fk36@cornell.edu, pfm24@cornell.edu, ncz2@cornell.edu, bws22@cornell.edu, sv15@cornell.edu, eg84@cornell.edu, jac267@cornell.edu, msk244@cornell.edu, hf86@cornell.edu",
"title": ""
},
{
"docid": "deccbb39b92e01611de6d0749f550726",
"text": "As product prices become increasingly available on the World Wide Web, consumers attempt to understand how corporations vary these prices over time. However, corporations change prices based on proprietary algorithms and hidden variables (e.g., the number of unsold seats on a flight). Is it possible to develop data mining techniques that will enable consumers to predict price changes under these conditions?This paper reports on a pilot study in the domain of airline ticket prices where we recorded over 12,000 price observations over a 41 day period. When trained on this data, Hamlet --- our multi-strategy data mining algorithm --- generated a predictive model that saved 341 simulated passengers $198,074 by advising them when to buy and when to postpone ticket purchases. Remarkably, a clairvoyant algorithm with complete knowledge of future prices could save at most $320,572 in our simulation, thus HAMLET's savings were 61.8% of optimal. The algorithm's savings of $198,074 represents an average savings of 23.8% for the 341 passengers for whom savings are possible. Overall, HAMLET saved 4.4% of the ticket price averaged over the entire set of 4,488 simulated passengers. Our pilot study suggests that mining of price data available over the web has the potential to save consumers substantial sums of money per annum.",
"title": ""
},
{
"docid": "5f5cf5235c10fe84e39e6725705a9940",
"text": "A fully automatic method for descreening halftone images is presented based on convolutional neural networks with end-to-end learning. Incorporating context level information, the proposed method not only removes halftone artifacts but also synthesizes the fine details lost during halftone. The method consists of two main stages. In the first stage, intrinsic features of the scene are extracted, the low-frequency reconstruction of the image is estimated, and halftone patterns are removed. For the intrinsic features, the edges and object-categories are estimated and fed to the next stage as strong visual and contextual cues. In the second stage, fine details are synthesized on top of the low-frequency output based on an adversarial generative model. In addition, the novel problem of rescreening is addressed, where a natural input image is halftoned so as to be similar to a separately given reference halftone image. To this end, a two-stage convolutional neural network is also presented. Both networks are trained with millions of before-and-after example image pairs of various halftone styles. Qualitative and quantitative evaluations are provided, which demonstrates the effectiveness of the proposed methods.",
"title": ""
},
{
"docid": "78976c627fb72db5393837169060a92a",
"text": "Although many variants of language models have been proposed for information retrieval, there are two related retrieval heuristics remaining \"external\" to the language modeling approach: (1) proximity heuristic which rewards a document where the matched query terms occur close to each other; (2) passage retrieval which scores a document mainly based on the best matching passage. Existing studies have only attempted to use a standard language model as a \"black box\" to implement these heuristics, making it hard to optimize the combination parameters.\n In this paper, we propose a novel positional language model (PLM) which implements both heuristics in a unified language model. The key idea is to define a language model for each position of a document, and score a document based on the scores of its PLMs. The PLM is estimated based on propagated counts of words within a document through a proximity-based density function, which both captures proximity heuristics and achieves an effect of \"soft\" passage retrieval. We propose and study several representative density functions and several different PLM-based document ranking strategies. Experiment results on standard TREC test collections show that the PLM is effective for passage retrieval and performs better than a state-of-the-art proximity-based retrieval model.",
"title": ""
},
{
"docid": "23987d01051f470e26666d6db340018b",
"text": "This paper presents a device that is able to reproduce atmospheric discharges in a small scale. First, there was simulated an impulse generator circuit that could meet the main characteristics of the common lightning strokes waveform. Later, four different generator circuits were developed with the selection made by a microcontroller. Finally, the output was subject to amplification circuits that increased its amplitude. The impulses generated had a very similar shape compared to the real atmospheric discharges to the international standards for impulse testing. The apparatus is meant for application in electric grounding systems and for tests in high frequency to measure the soil impedance.",
"title": ""
},
{
"docid": "43100f1c6563b4af125c1c6040daa437",
"text": "Humans can naturally understand an image in depth with the aid of rich knowledge accumulated from daily lives or professions. For example, to achieve fine-grained image recognition (e.g., categorizing hundreds of subordinate categories of birds) usually requires a comprehensive visual concept organization including category labels and part-level attributes. In this work, we investigate how to unify rich professional knowledge with deep neural network architectures and propose a Knowledge-Embedded Representation Learning (KERL) framework for handling the problem of fine-grained image recognition. Specifically, we organize the rich visual concepts in the form of knowledge graph and employ a Gated Graph Neural Network to propagate node message through the graph for generating the knowledge representation. By introducing a novel gated mechanism, our KERL framework incorporates this knowledge representation into the discriminative image feature learning, i.e., implicitly associating the specific attributes with the feature maps. Compared with existing methods of fine-grained image classification, our KERL framework has several appealing properties: i) The embedded high-level knowledge enhances the feature representation, thus facilitating distinguishing the subtle differences among subordinate categories. ii) Our framework can learn feature maps with a meaningful configuration that the highlighted regions finely accord with the nodes (specific attributes) of the knowledge graph. Extensive experiments on the widely used CaltechUCSD bird dataset demonstrate the superiority of ∗Corresponding author is Liang Lin (Email: linliang@ieee.org). This work was supported by the National Natural Science Foundation of China under Grant 61622214, the Science and Technology Planning Project of Guangdong Province under Grant 2017B010116001, and Guangdong Natural Science Foundation Project for Research Teams under Grant 2017A030312006. head-pattern: masked Bohemian",
"title": ""
},
{
"docid": "c82c28a44adb4a67e44e1d680b1d13ad",
"text": "Cipherbase is a comprehensive database system that provides strong end-to-end data confidentiality through encryption. Cipherbase is based on a novel architecture that combines an industrial strength database engine (SQL Server) with lightweight processing over encrypted data that is performed in secure hardware. The overall architecture provides significant benefits over the state-of-the-art in terms of security, performance, and functionality. This paper presents a prototype of Cipherbase that uses FPGAs to provide secure processing and describes the system engineering details implemented to achieve competitive performance for transactional workloads. This includes hardware-software co-design issues (e.g. how to best offer parallelism), optimizations to hide the latency between the secure hardware and the main system, and techniques to cope with space inefficiencies. All these optimizations were carefully designed not to affect end-to-end data confidentiality. Our experiments with the TPC-C benchmark show that in the worst case when all data are strongly encrypted, Cipherbase achieves 40% of the throughput of plaintext SQL Server. In more realistic cases, if only critical data such as customer names are encrypted, the Cipherbase throughput is more than 90% of plaintext SQL Server.",
"title": ""
},
{
"docid": "6dd5e223a54b9f812031ecff80d39445",
"text": "In modern smart grid networks, the traditional power grid is enabled by the technological advances in sensing, measurement, and control devices with two-way communications between the suppliers and customers. The smart grid integration helps the power grid networks to be smarter, but it also increases the risk of adversaries because of the currently obsoleted cyber-infrastructure. Adversaries can easily paralyzes the power facility by misleading the energy management system with injecting false data. In this paper, we proposes a defense strategy to the malicious data injection attack for smart grid state estimation at the control center. The proposed “adaptive CUSUM algorithm”, is recursive in nature, and each recursion comprises two inter-leaved stages: Stage 1 introduces the linear unknown parameter solver technique, and Stage 2 applies the multi-thread CUSUM algorithm for quickest change detection. The proposed scheme is able to determine the possible existence of adversary at the control center as quickly as possible without violating the given constraints such as a certain level of detection accuracy and false alarm. The performance of the proposed algorithm is evaluated by both mathematic analysis and numerical simulation.",
"title": ""
},
{
"docid": "60a6c8588c46fa2aa63a3348723f2bb1",
"text": "An early warning system can help to identify at-risk students, or predict student learning performance by analyzing learning portfolios recorded in a learning management system (LMS). Although previous studies have shown the applicability of determining learner behaviors from an LMS, most investigated datasets are not assembled from online learning courses or from whole learning activities undertaken on courses that can be analyzed to evaluate students’ academic achievement. Previous studies generally focus on the construction of predictors for learner performance evaluation after a course has ended, and neglect the practical value of an ‘‘early warning’’ system to predict at-risk students while a course is in progress. We collected the complete learning activities of an online undergraduate course and applied data-mining techniques to develop an early warning system. Our results showed that, timedependent variables extracted from LMS are critical factors for online learning. After students have used an LMS for a period of time, our early warning system effectively characterizes their current learning performance. Data-mining techniques are useful in the construction of early warning systems; based on our experimental results, classification and regression tree (CART), supplemented by AdaBoost is the best classifier for the evaluation of learning performance investigated by this study. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4d2e2c25a32dc54219336c886b79b2ef",
"text": "YouTube is one of the largest video sharing websites (with social networking features) on the Internet. The immense popularity of YouTube, anonymity and low publication barrier has resulted in several forms of misuse and video pollution such as uploading of malicious, copyright violated and spam video or content. It has been observed that the presence of opportunistic users post unrelated, promotional, pornographic videos (spam videos posted manually or using automated scripts). A method of mining YouTube to classify a video as spam or legitimate based on video attributes has been presented. The empirical analysis reveals that certain linguistic features (presence of certain terms in the title or description of the YouTube video), temporal features, popularity based features, time based features can be used to predict the video type. We identify features with discriminatory powers and use it to recognize video response spam. Keywords— video spam; spam detection; YouTube; TubeKit",
"title": ""
},
{
"docid": "7f52960fb76c3c697ef66ffee91b13ee",
"text": "The aim of this work was to explore the feasibility of combining hot melt extrusion (HME) with 3D printing (3DP) technology, with a view to producing different shaped tablets which would be otherwise difficult to produce using traditional methods. A filament extruder was used to obtain approx. 4% paracetamol loaded filaments of polyvinyl alcohol with characteristics suitable for use in fused-deposition modelling 3DP. Five different tablet geometries were successfully 3D-printed-cube, pyramid, cylinder, sphere and torus. The printing process did not affect the stability of the drug. Drug release from the tablets was not dependent on the surface area but instead on surface area to volume ratio, indicating the influence that geometrical shape has on drug release. An erosion-mediated process controlled drug release. This work has demonstrated the potential of 3DP to manufacture tablet shapes of different geometries, many of which would be challenging to manufacture by powder compaction.",
"title": ""
},
{
"docid": "5546ec134b205144fed46a585db447b4",
"text": "Historically, the control of wound infection depended on antiseptic and aseptic techniques directed at coping with the infecting organism. In the 19th century and the early part of the 20th century, wound infections had devastating consequences and a measurable mortality. Even in the 1960s, before the correct use of antibiotics and the advent of modern preoperative and postoperative care, as much as one quarter of a surgical ward might have been occupied by patients with wound complications. As a result, wound management, in itself, became an important component of ward care and of medical education. It is fortunate that many factors have intervened so that the so-called wound rounds have become a practice of the past.The epidemiology of wound infection has changed as surgeons have learned to control bacteria and the inoculum as well as to focus increasingly on the patient (the host) for measures that will continue to provide improved results. The following three factors are the determinants of any infectious process:",
"title": ""
},
{
"docid": "24615e8513ce50d229b64eecaa5af8c8",
"text": "Driver's gaze direction is a critical information in understanding driver state. In this paper, we present a distributed camera framework to estimate driver's coarse gaze direction using both head and eye cues. Coarse gaze direction is often sufficient in a number of applications, however, the challenge is to estimate gaze direction robustly in naturalistic real-world driving. Towards this end, we propose gaze-surrogate features estimated from eye region via eyelid and iris analysis. We present a novel iris detection computational framework. We are able to extract proposed features robustly and determine driver's gaze zone effectively. We evaluated the proposed system on a dataset, collected from naturalistic on-road driving in urban streets and freeways. A human expert annotated driver's gaze zone ground truth using information from the driver's eyes and the surrounding context. We conducted two experiments to compare the performance of the gaze zone estimation with and without eye cues. The head-alone experiment has a reasonably good result for most of the gaze zones with an overall 79.8% of weighted accuracy. By adding eye cues, the experimental result shows that the overall weighted accuracy is boosted to 94.9%, and all the individual gaze zones have a better true detection rate especially between the adjacent zones. Therefore, our experimental evaluations show efficacy of the proposed features and very promising results for robust gaze zone estimation.",
"title": ""
},
{
"docid": "8a074cfc00239c3987c8d80480c7a2f6",
"text": "The paper presents a novel approach for extracting structural features from segmented cursive handwriting. The proposed approach is based on the contour code and stroke direct ion. The contour code feature utilises the rate of change of slope along the c ontour profile in addition to other properties such as the ascender and descender count, start point and e d point. The direction feature identifies individual line segments or strokes from the character’s outer boundary or thinned representation and highlights each character's pertine nt d rection information. Each feature is investigated employing a benchmark da tabase and the experimental results using the proposed contour code based structural fea ture are very promising. A comparative evaluation with the directional feature a nd existing transition feature is included.",
"title": ""
},
{
"docid": "1daaadeb6cfc16143788b51943deff79",
"text": "sonSQL is a MySQL variant that aims to be the default database system for social network data. It uses a conceptual schema called sonSchema to translate a social network design into logical tables. This paper introduces sonSchema, shows how it can be instantiated, and illustrates social network analysis for sonSchema datasets. Experiments show such SQL-based analysis brings insight into community evolution, cluster discovery and action propagation.",
"title": ""
},
{
"docid": "49e148ddb4c5798c157e8568c10fae3d",
"text": "Aesthetic quality estimation of an image is a challenging task. In this paper, we introduce a deep CNN approach to tackle this problem. We adopt the sate-of-the-art object-recognition CNN as our baseline model, and adapt it for handling several high-level attributes. The networks capable of dealing with these high-level concepts are then fused by a learned logical connector for predicting the aesthetic rating. Results on the standard benchmark shows the effectiveness of our approach.",
"title": ""
},
{
"docid": "83e897a37aca4c349b4a910c9c0787f4",
"text": "Computational imaging methods that can exploit multiple modalities have the potential to enhance the capabilities of traditional sensing systems. In this paper, we propose a new method that reconstructs multimodal images from their linear measurements by exploiting redundancies across different modalities. Our method combines a convolutional group-sparse representation of images with total variation (TV) regularization for high-quality multimodal imaging. We develop an online algorithm that enables the unsupervised learning of convolutional dictionaries on large-scale datasets that are typical in such applications. We illustrate the benefit of our approach in the context of joint intensity-depth imaging.",
"title": ""
}
] |
scidocsrr
|
829e1c0a7f1869c51e60d946326bf49f
|
Image Segmentation with Cascaded Hierarchical Models and Logistic Disjunctive Normal Networks
|
[
{
"docid": "350c899dbd0d9ded745b70b6f5e97d19",
"text": "We propose an approach to include contextual features for labeling images, in which each pixel is assigned to one of a finite set of labels. The features are incorporated into a probabilistic framework, which combines the outputs of several components. Components differ in the information they encode. Some focus on the image-label mapping, while others focus solely on patterns within the label field. Components also differ in their scale, as some focus on fine-resolution patterns while others on coarser, more global structure. A supervised version of the contrastive divergence algorithm is applied to learn these features from labeled image data. We demonstrate performance on two real-world image databases and compare it to a classifier and a Markov random field.",
"title": ""
}
] |
[
{
"docid": "fcb69bd97835da9f244841d54996f070",
"text": "A conventional transverse slot substrate integrated waveguide (SIW) periodic leaky wave antenna (LWA) provides a fan beam, usually E-plane beam having narrow beam width and H-plane having wider beamwidth. The main beam direction changes with frequency sweep. In the applications requiring a pencil beam, an array of the antenna is generally used to decrease the H-plane beam width which requires long and tiring optimization steps. In this paper, it is shown that the H-plane beamwidth can be easily decreased by using two baffles with a conventional leaky wave antenna. A prototype periodic leaky wave antenna with baffles is designed and fabricated for X-band applications. The E- and H-plane 3 dB beam widths of the antenna at 10.5GHz are, respectively, 6° and 22°. Over the frequency range 8.2–14 GHz, the antenna scans from θ = −60° to θ = 15°, from backward to forward direction. The use of baffles also improves the gain of the antenna including broadside direction by approximately 4 dB.",
"title": ""
},
{
"docid": "2800046ff82a5bc43b42c1d2e2dc6777",
"text": "We develop a novel, fundamental and surprisingly simple randomized iterative method for solving consistent linear systems. Our method has six different but equivalent interpretations: sketch-and-project, constrain-and-approximate, random intersect, random linear solve, random update and random fixed point. By varying its two parameters—a positive definite matrix (defining geometry), and a random matrix (sampled in an i.i.d. fashion in each iteration)—we recover a comprehensive array of well known algorithms as special cases, including the randomized Kaczmarz method, randomized Newton method, randomized coordinate descent method and random Gaussian pursuit. We naturally also obtain variants of all these methods using blocks and importance sampling. However, our method allows for a much wider selection of these two parameters, which leads to a number of new specific methods. We prove exponential convergence of the expected norm of the error in a single theorem, from which existing complexity results for known variants can be obtained. However, we also give an exact formula for the evolution of the expected iterates, which allows us to give lower bounds on the convergence rate.",
"title": ""
},
{
"docid": "e0c7387ae9602d3de30695a27f35c16f",
"text": "Nanoscale membrane assemblies of sphingolipids, cholesterol, and certain proteins, also known as lipid rafts, play a crucial role in facilitating a broad range of important cell functions. Whereas on living cell membranes lipid rafts have been postulated to have nanoscopic dimensions and to be highly transient, the existence of a similar type of dynamic nanodomains in multicomponent lipid bilayers has been questioned. Here, we perform fluorescence correlation spectroscopy on planar plasmonic antenna arrays with different nanogap sizes to assess the dynamic nanoscale organization of mimetic biological membranes. Our approach takes advantage of the highly enhanced and confined excitation light provided by the nanoantennas together with their outstanding planarity to investigate membrane regions as small as 10 nm in size with microsecond time resolution. Our diffusion data are consistent with the coexistence of transient nanoscopic domains in both the liquid-ordered and the liquid-disordered microscopic phases of multicomponent lipid bilayers. These nanodomains have characteristic residence times between 30 and 150 μs and sizes around 10 nm, as inferred from the diffusion data. Thus, although microscale phase separation occurs on mimetic membranes, nanoscopic domains also coexist, suggesting that these transient assemblies might be similar to those occurring in living cells, which in the absence of raft-stabilizing proteins are poised to be short-lived. Importantly, our work underscores the high potential of photonic nanoantennas to interrogate the nanoscale heterogeneity of native biological membranes with ultrahigh spatiotemporal resolution.",
"title": ""
},
{
"docid": "c215a497d39f4f95a9fc720debb14b05",
"text": "Adding frequency reconfigurability to a compact metamaterial-inspired antenna is investigated. The antenna is a printed monopole with an incorporated slot and is fed by a coplanar waveguide (CPW) line. This antenna was originally inspired from the concept of negative-refractive-index metamaterial transmission lines and exhibits a dual-band behavior. By using a varactor diode, the lower band (narrowband) of the antenna, which is due to radiation from the incorporated slot, can be tuned over a broad frequency range, while the higher band (broadband) remains effectively constant. A detailed equivalent circuit model is developed that predicts the frequency-tuning behavior for the lower band of the antenna. The circuit model shows the involvement of both CPW even and odd modes in the operation of the antenna. Experimental results show that, for a varactor diode capacitance approximately ranging from 0.1-0.7 pF, a tuning range of 1.6-2.23 GHz is achieved. The size of the antenna at the maximum frequency is 0.056 λ0 × 0.047 λ0 and the antenna is placed over a 0.237 λ0 × 0.111 λ0 CPW ground plane (λ0 being the wavelength in vacuum).",
"title": ""
},
{
"docid": "8017a70c73f6758b685648054201342a",
"text": "Detecting samples from previously unknown classes is a crucial task in object recognition, especially when dealing with real-world applications where the closed-world assumption does not hold. We present how to apply a null space method for novelty detection, which maps all training samples of one class to a single point. Beside the possibility of modeling a single class, we are able to treat multiple known classes jointly and to detect novelties for a set of classes with a single model. In contrast to modeling the support of each known class individually, our approach makes use of a projection in a joint subspace where training samples of all known classes have zero intra-class variance. This subspace is called the null space of the training data. To decide about novelty of a test sample, our null space approach allows for solely relying on a distance measure instead of performing density estimation directly. Therefore, we derive a simple yet powerful method for multi-class novelty detection, an important problem not studied sufficiently so far. Our novelty detection approach is assessed in comprehensive multi-class experiments using the publicly available datasets Caltech-256 and Image Net. The analysis reveals that our null space approach is perfectly suited for multi-class novelty detection since it outperforms all other methods.",
"title": ""
},
{
"docid": "983ec9cdd75d0860c96f89f3c9b2f752",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at . http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "17b68f3275ce077e6c4e9f4c0006c43c",
"text": "A compact folded dipole antenna for millimeter-wave (MMW) energy harvesting is proposed in this paper. The antenna consists of two folded arms excited by a coplanar stripline (CPS). A coplanar waveguide (CPW) to coplanar stripline (CPS) transformer is introduced for wide band operation. The antenna radiates from 33 GHz to 41 GHz with fractional bandwidth about 21.6%. The proposed antenna shows good radiation characteristics and low VSWR, lower than 2, as well as average antenna gain is around 5 dBi over the whole frequency range. The proposed dipole antenna shows about 49% length reduction. The simulated results using both Ansoft HFSS and CST Microwave Studio show a very good agreement between them.",
"title": ""
},
{
"docid": "32b4b275dc355dff2e3e168fe6355772",
"text": "The management of coupon promotions is an important issue for marketing managers since it still is the major promotion medium. However, the distribution of coupons does not go without problems. Although manufacturers and retailers are investing heavily in the attempt to convince as many customers as possible, overall coupon redemption rate is low. This study improves the strategy of retailers and manufacturers concerning their target selection since both parties often end up in a battle for customers. Two separate models are built: one model makes predictions concerning redemption behavior of coupons that are distributed by the retailer while another model does the same for coupons handed out by manufacturers. By means of the feature-selection technique ‘Relief-F’ the dimensionality of the models is reduced, since it searches for the variables that are relevant for predicting the outcome. In this way, redundant variables are not used in the model-building process. The model is evaluated on real-life data provided by a retailer in FMCG. The contributions of this study for retailers as well as manufacturers are threefold. First, the possibility to classify customers concerning their coupon usage is shown. In addition, it is demonstrated that retailers and manufacturers can stay clear of each other in their marketing campaigns. Finally, the feature-selection technique ‘Relief-F’ proves to facilitate and optimize the performance of the models.",
"title": ""
},
{
"docid": "90f188c1f021c16ad7c8515f1244c08a",
"text": "Minimally invasive principles should be the driving force behind rehabilitating young individuals affected by severe dental erosion. The maxillary anterior teeth of a patient, class ACE IV, has been treated following the most conservatory approach, the Sandwich Approach. These teeth, if restored by conventional dentistry (eg, crowns) would have required elective endodontic therapy and crown lengthening. To preserve the pulp vitality, six palatal resin composite veneers and four facial ceramic veneers were delivered instead with minimal, if any, removal of tooth structure. In this article, the details about the treatment are described.",
"title": ""
},
{
"docid": "fb9bbfc3e301cb669663a12d1f18a11f",
"text": "In extensively modified landscapes, how the matrix is managed determines many conservation outcomes. Recent publications revise popular conceptions of a homogeneous and static matrix, yet we still lack an adequate conceptual model of the matrix. Here, we identify three core effects that influence patch-dependent species, through impacts associated with movement and dispersal, resource availability, and the abiotic environment. These core effects are modified by five 'dimensions': spatial and temporal variation in matrix quality; spatial scale; temporal scale of matrix variation; and adaptation. The conceptual domain of the matrix, defined as three core effects and their interaction with these five dimensions, provides a much-needed framework to underpin management of fragmented landscapes and highlights new research priorities.",
"title": ""
},
{
"docid": "ca9fb43322aae64da6ce7de83b7ed5ed",
"text": "We use a combination of lab and field evidence to study whether preferences for immediacy and the tendency to procrastinate are connected as in O’Donoghue and Rabin (1999a). To measure immediacy, we have participants choose between smaller-sooner and larger-later rewards. Both rewards are paid by check to control for transaction costs. To measure procrastination, we record how fast participants cash their checks and complete other tasks. We find that individuals with a preference for immediacy are more likely to procrastinate. We also find evidence that individuals differ in the degree to which they anticipate their own procrastination. First version: December 2007 JEL Codes: D01, D03, D90",
"title": ""
},
{
"docid": "dabe8a7bff4a9d3ba910744804579b74",
"text": "Charitable giving is influenced by many social, psychological, and economic factors. One common way to encourage individuals to donate to charities is by offering to match their contribution (often by their employer or by the government). Conitzer and Sandholm introduced the idea of using auctions to allow individuals to offer to match the contribution of others. We explore this idea in a social network setting, where individuals care about the contribution of their neighbors, and are allowed to specify contributions that are conditional on the contribution of their neighbors.\n We give a mechanism for this setting that raises the largest individually rational contributions given the conditional bids, and analyze the equilibria of this mechanism in the case of linear utilities. We show that if the social network is strongly connected, the mechanism always has an equilibrium that raises the maximum total contribution (which is the contribution computed according to the true utilities); in other words, the price of stability of the game defined by this mechanism is one. Interestingly, although the mechanism is not dominant strategy truthful (and in fact, truthful reporting need not even be a Nash equilibrium of this game), this result shows that the mechanism always has a full-information equilibrium which achieves the same outcome as in the truthful scenario. Of course, there exist cases where the maximum total contribution even with true utilities is zero: we show that the existence of non-zero equilibria can be characterized exactly in terms of the largest eigenvalue of the utility matrix associated with the social network.",
"title": ""
},
{
"docid": "1927d9f2010bb8c49d6511c9d3dac2f0",
"text": "To determine the relationships among plasma ghrelin and leptin concentrations and hypothalamic ghrelin contents, and sleep, cortical brain temperature (Tcrt), and feeding, we determined these parameters in rats in three experimental conditions: in free-feeding rats with normal diurnal rhythms, in rats with feeding restricted to the 12-h light period (RF), and in rats subjected to 5-h of sleep deprivation (SD) at the beginning of the light cycle. Plasma ghrelin and leptin displayed diurnal rhythms with the ghrelin peak preceding and the leptin peak following the major daily feeding peak in hour 1 after dark onset. RF reversed the diurnal rhythm of these hormones and the rhythm of rapid-eye-movement sleep (REMS) and significantly altered the rhythm of Tcrt. In contrast, the duration and intensity of non-REMS (NREMS) were hardly responsive to RF. SD failed to change leptin concentrations, but it promptly stimulated plasma ghrelin and induced eating. SD elicited biphasic variations in the hypothalamic ghrelin contents. SD increased plasma corticosterone, but corticosterone did not seem to influence either leptin or ghrelin. The results suggest a strong relationship between feeding and the diurnal rhythm of leptin and that feeding also fundamentally modulates the diurnal rhythm of ghrelin. The variations in hypothalamic ghrelin contents might be associated with sleep-wake activity in rats, but, unlike the previous observations in humans, obvious links could not be detected between sleep and the diurnal rhythms of plasma concentrations of either ghrelin or leptin in the rat.",
"title": ""
},
{
"docid": "fa9571673fe848d1d119e2d49f21d28d",
"text": "Convolutional Neural Networks (CNNs) trained on large scale RGB databases have become the secret sauce in the majority of recent approaches for object categorization from RGB-D data. Thanks to colorization techniques, these methods exploit the filters learned from 2D images to extract meaningful representations in 2.5D. Still, the perceptual signature of these two kind of images is very different, with the first usually strongly characterized by textures, and the second mostly by silhouettes of objects. Ideally, one would like to have two CNNs, one for RGB and one for depth, each trained on a suitable data collection, able to capture the perceptual properties of each channel for the task at hand. This has not been possible so far, due to the lack of a suitable depth database. This paper addresses this issue, proposing to opt for synthetically generated images rather than collecting by hand a 2.5D large scale database. While being clearly a proxy for real data, synthetic images allow to trade quality for quantity, making it possible to generate a virtually infinite amount of data. We show that the filters learned from such data collection, using the very same architecture typically used on visual data, learns very different filters, resulting in depth features (a) able to better characterize the different facets of depth images, and (b) complementary with respect to those derived from CNNs pre-trained on 2D datasets. Experiments on two publicly available databases show the power of our approach.",
"title": ""
},
{
"docid": "e0682efd9c8807411da832b796b47da2",
"text": "The rise of cloud computing is radically changing the way enterprises manage their information technology (IT) assets. Considering the benefits of cloud computing to the information technology sector, we present a review of current research initiatives and applications of the cloud computing paradigm related to product design and manufacturing. In particular, we focus on exploring the potential of utilizing cloud computing for selected aspects of collaborative design, distributed manufacturing, collective innovation, data mining, semantic web technology, and virtualization. In addition, we propose to expand the paradigm of cloud computing to the field of computer-aided design and manufacturing and propose a new concept of cloud-based design and manufacturing (CBDM). Specifically, we (1) propose a comprehensive definition of CBDM; (2) discuss its key characteristics; (3) relate current research in design and manufacture to CBDM; and (4) identify key research issues and future trends. 1",
"title": ""
},
{
"docid": "6a282fbc6ee9baea673c2f9f15955a18",
"text": "A 34-year-old woman suffered from significant chronic pain, depression, non-restorative sleep, chronic fatigue, severe morning stiffness, leg cramps, irritable bowel syndrome, hypersensitivity to cold, concentration difficulties, and forgetfulness. Blood tests were negative for rheumatic disorders. The patient was diagnosed with Fibromyalgia syndrome (FMS). Due to the lack of effectiveness of pharmacological therapies in FMS, she approached a novel metabolic proposal for the symptomatic remission. Its core idea is supporting serotonin synthesis by allowing a proper absorption of tryptophan assumed with food, while avoiding, or at least minimizing the presence of interfering non-absorbed molecules, such as fructose and sorbitol. Such a strategy resulted in a rapid improvement of symptoms after only few days on diet, up to the remission of most symptoms in 2 months. Depression, widespread chronic pain, chronic fatigue, non-restorative sleep, morning stiffness, and the majority of the comorbidities remitted. Energy and vitality were recovered by the patient as prior to the onset of the disease, reverting the occupational and social disabilities. The patient episodically challenged herself breaking the dietary protocol leading to its negative test and to the evaluation of its benefit. These breaks correlated with the recurrence of the symptoms, supporting the correctness of the biochemical hypothesis underlying the diet design toward remission of symptoms, but not as a final cure. We propose this as a low risk and accessible therapeutic protocol for the symptomatic remission in FMS with virtually no costs other than those related to vitamin and mineral salt supplements in case of deficiencies. A pilot study is required to further ground this metabolic approach, and to finally evaluate its inclusion in the guidelines for clinical management of FMS.",
"title": ""
},
{
"docid": "9493b44f845bb7d37bf68a96a8ff96f6",
"text": "This paper focuses on services and applications provided to mobile users using airborne computing infrastructure. We present concepts such as drones-as-a-service and fl yin,fly-out infrastructure, and note data management and sys tem design issues that arise in these scenarios. Issues of Big Da ta arising from such applications, optimising the configuration of airborne and ground infrastructure to provide the best QoS and QoE, situation-awareness, scalability, reliability,scheduling for efficiency, interaction with users and drones using phys ical annotations are outlined.",
"title": ""
},
{
"docid": "a2082f1b4154cd11e94eff18a016e91e",
"text": "1 During the summer of 2005, I discovered that there was not a copy of my dissertation available from the library at McGill University. I was, however, able to obtain a copy of it on microfilm from another university that had initially obtained it on interlibrary loan. I am most grateful to Vicki Galbraith who typed this version from that copy, which except for some minor variations due to differences in type size and margins (plus this footnote, of course) is identical to that on the microfilm. ACKNOWLEDGEMENTS 1 The writer is grateful to Dr. J. T. McIlhone, Associate General Director in Charge of English Classes of the Montreal Catholic School Board, for his kind cooperation in making subjects available, and to the Principals and French teachers of each high school for their assistance and cooperation during the testing programs. advice on the statistical analysis. In addition, the writer would like to express his appreciation to Mr. K. Tunstall for his assistance in the difficult task of interviewing the parents of each student. Finally, the writer would like to express his gratitude to Janet W. Gardner for her invaluable assistance in all phases of the research program.",
"title": ""
},
{
"docid": "02770bf28a64851bf773c56736efa537",
"text": "Wearable robotics is strongly oriented to humans. New applications for wearable robots are encouraged by the lightness and portability of new devices and the progress in human-robot cooperation strategies. In this paper, we propose the different design guidelines to realize a robotic extra-finger for human grasping enhancement. Such guidelines were followed for the realization of three prototypes obtained using rapid prototyping techniques, i.e., a 3D printer and an open hardware development platform. Both fully actuated and under-actuated solutions have been explored. In the proposed wearable design, the robotic extra-finger can be worn as a bracelet in its rest position. The availability of a supplementary finger in the human hand allows to enlarge its workspace, improving grasping and manipulation capabilities. This preliminary work is a first step towards the development of robotic extra-limbs able to increase human workspace and dexterity.",
"title": ""
},
{
"docid": "9186f1998d2c836fb1f9b95fd9122911",
"text": "We introduce inScent, a wearable olfactory display that can be worn in mobile everyday situations and allows the user to receive personal scented notifications, i.e. scentifications. Olfaction, i.e. the sense of smell, is used by humans as a sensorial information channel as an element for experiencing the environment. Olfactory sensations are closely linked to emotions and memories, but also notify about personal dangers such as fire or foulness. We want to utilize the properties of smell as a notification channel by amplifying received mobile notifications with artificially emitted scents. We built a wearable olfactory display that can be worn as a pendant around the neck and contains up to eight different scent aromas that can be inserted and quickly exchanged via small scent cartridges. Upon emission, scent aroma is vaporized and blown towards the user. A hardware - and software framework is presented that allows developers to add scents to their mobile applications. In a qualitative user study, participants wore the inScent wearable in public. We used subsequent semi-structured interviews and grounded theory to build a common understanding of the experience and derived lessons learned for the use of scentifications in mobile situations.",
"title": ""
}
] |
scidocsrr
|
2a24868023e3e3792cb16af7531021fb
|
Aesthetics of Interaction Design: A Literature Review
|
[
{
"docid": "bc892fe2a369f701e0338085eaa0bdbd",
"text": "In his In the blink of an eye,Walter Murch, the Oscar-awarded editor of the English Patient, Apocalypse Now, and many other outstanding movies, devises the Rule of Six—six criteria for what makes a good cut. On top of his list is \"to be true to the emotion of the moment,\" a quality more important than advancing the story or being rhythmically interesting. The cut has to deliver a meaningful, compelling, and emotion-rich \"experience\" to the audience. Because, \"what they finally remember is not the editing, not the camerawork, not the performances, not even the story—it’s how they felt.\" Technology for all the right reasons applies this insight to the design of interactive products and technologies—the domain of Human-Computer Interaction,Usability Engineering,and Interaction Design. It takes an experiential approach, putting experience before functionality and leaving behind oversimplified calls for ease, efficiency, and automation or shallow beautification. Instead, it explores what really matters to humans and what it needs to make technology more meaningful. The book clarifies what experience is, and highlights five crucial aspects and their implications for the design of interactive products. It provides reasons why we should bother with an experiential approach, and presents a detailed working model of experience useful for practitioners and academics alike. It closes with the particular challenges of an experiential approach for design. The book presents its view as a comprehensive, yet entertaining blend of scientific findings, design examples, and personal anecdotes.",
"title": ""
}
] |
[
{
"docid": "da6a74341c8b12658aea2a267b7a0389",
"text": "An experiment demonstrated that false incriminating evidence can lead people to accept guilt for a crime they did not commit. Subjects in a fastor slow-paced reaction time task were accused of damaging a computer by pressing the wrong key. All were truly innocent and initially denied the charge. A confederate then said she saw the subject hit the key or did not see the subject hit the key. Compared with subjects in the slowpacelno-witness group, those in the fast-pace/witness group were more likely to sign a confession, internalize guilt for the event, and confabulate details in memory consistent with that belief Both legal and conceptual implications are discussed. In criminal law, confession evidence is a potent weapon for the prosecution and a recurring source of controversy. Whether a suspect's self-incriminating statement was voluntary or coerced and whether a suspect was of sound mind are just two of the issues that trial judges and juries consider on a routine basis. To guard citizens against violations of due process and to minimize the risk that the innocent would confess to crimes they did not commit, the courts have erected guidelines for the admissibility of confession evidence. Although there is no simple litmus test, confessions are typically excluded from triai if elicited by physical violence, a threat of harm or punishment, or a promise of immunity or leniency, or without the suspect being notified of his or her Miranda rights. To understand the psychology of criminal confessions, three questions need to be addressed: First, how do police interrogators elicit self-incriminating statements (i.e., what means of social influence do they use)? Second, what effects do these methods have (i.e., do innocent suspects ever confess to crimes they did not commit)? Third, when a coerced confession is retracted and later presented at trial, do juries sufficiently discount the evidence in accordance with the law? General reviews of relevant case law and research are available elsewhere (Gudjonsson, 1992; Wrightsman & Kassin, 1993). The present research addresses the first two questions. Informed by developments in case law, the police use various methods of interrogation—including the presentation of false evidence (e.g., fake polygraph, fingerprints, or other forensic test results; staged eyewitness identifications), appeals to God and religion, feigned friendship, and the use of prison informants. A number of manuals are available to advise detectives on how to extract confessions from reluctant crime suspects (Aubry & Caputo, 1965; O'Hara & O'Hara, 1981). The most popular manual is Inbau, Reid, and Buckley's (1986) Criminal Interrogation and Confessions, originally published in 1%2, and now in its third edition. Address correspondence to Saul Kassin, Department of Psychology, Williams College, WllUamstown, MA 01267. After advising interrogators to set aside a bare, soundproof room absent of social support and distraction, Inbau et al, (1986) describe in detail a nine-step procedure consisting of various specific ploys. In general, two types of approaches can be distinguished. One is minimization, a technique in which the detective lulls Che suspect into a false sense of security by providing face-saving excuses, citing mitigating circumstances, blaming the victim, and underplaying the charges. 
The second approach is one of maximization, in which the interrogator uses scare tactics by exaggerating or falsifying the characterization of evidence, the seriousness of the offense, and the magnitude of the charges. In a recent study (Kassin & McNall, 1991), subjects read interrogation transcripts in which these ploys were used and estimated the severity of the sentence likely to be received. The results indicated that minimization communicated an implicit offer of leniency, comparable to that estimated in an explicit-promise condition, whereas maximization implied a threat of harsh punishment, comparable to that found in an explicit-threat condition. Yet although American courts routinely exclude confessions elicited by explicit threats and promises, they admit those produced by contingencies that are pragmatically implied. Although police often use coercive methods of interrogation, research suggests that juries are prone to convict defendants who confess in these situations. In the case of Arizona v. Fulminante (1991), the U.S. Supreme Court ruled that under certain conditions, an improperly admitted coerced confession may be considered upon appeal to have been nonprejudicial, or \"harmless error.\" Yet mock-jury research shows that people find it hard to believe that anyone would confess to a crime that he or she did not commit (Kassin & Wrightsman, 1980, 1981; Sukel & Kassin, 1994). Still, it happens. One cannot estimate the prevalence of the problem, which has never been systematically examined, but there are numerous documented instances on record (Bedau & Radelet, 1987; Borchard, 1932; Rattner, 1988). Indeed, one can distinguish three types of false confession (Kassin & Wrightsman, 1985): voluntary (in which a subject confesses in the absence of external pressure), coerced-compliant (in which a suspect confesses only to escape an aversive interrogation, secure a promised benefit, or avoid a threatened harm), and coerced-internalized (in which a suspect actually comes to believe that he or she is guilty of the crime). This last type of false confession seems most unlikely, but a number of recent cases have come to light in which the police had seized a suspect who was vulnerable (by virtue of his or her youth, intelligence, personality, stress, or mental state) and used false evidence to convince the beleaguered suspect that he or she was guilty. In one case that received a great deal of attention, for example, Paul Ingram was charged with rape and a host of Satanic cult crimes that included the slaughter of newborn babies. During 6 months of interrogation, he was hypnotized.",
"title": ""
},
{
"docid": "67b2b896af777731615ac010f688bb9c",
"text": "Referring expressions are natural language constructions used to identify particular objects within a scene. In this paper, we propose a unified framework for the tasks of referring expression comprehension and generation. Our model is composed of three modules: speaker, listener, and reinforcer. The speaker generates referring expressions, the listener comprehends referring expressions, and the reinforcer introduces a reward function to guide sampling of more discriminative expressions. The listener-speaker modules are trained jointly in an end-to-end learning framework, allowing the modules to be aware of one another during learning while also benefiting from the discriminative reinforcer’s feedback. We demonstrate that this unified framework and training achieves state-of-the-art results for both comprehension and generation on three referring expression datasets.",
"title": ""
},
{
"docid": "238adc0417c167aeb64c23b576f434d0",
"text": "This paper studies the problem of matching images captured from an unmanned ground vehicle (UGV) to those from a satellite or high-flying vehicle. We focus on situations where the UGV navigates in remote areas with few man-made structures. This is a difficult problem due to the drastic change in perspective between the ground and aerial imagery and the lack of environmental features for image comparison. We do not rely on GPS, which may be jammed or uncertain. We propose a two-step approach: (1) the UGV images are warped to obtain a bird's eye view of the ground, and (2) this view is compared to a grid of satellite locations using whole-image descriptors. We analyze the performance of a variety of descriptors for different satellite map sizes and various terrain and environment types. We incorporate the air-ground matching into a particle-filter framework for localization using the best-performing descriptor. The results show that vision-based UGV localization from satellite maps is not only possible, but often provides better position estimates than GPS estimates, enabling us to improve the location estimates of Google Street View.",
"title": ""
},
{
"docid": "71467b5ba3ef8706dc8eea80ca7d0d4e",
"text": "The DiscovEHR collaboration between the Regeneron Genetics Center and Geisinger Health System couples high-throughput sequencing to an integrated health care system using longitudinal electronic health records (EHRs). We sequenced the exomes of 50,726 adult participants in the DiscovEHR study to identify ~4.2 million rare single-nucleotide variants and insertion/deletion events, of which ~176,000 are predicted to result in a loss of gene function. Linking these data to EHR-derived clinical phenotypes, we find clinical associations supporting therapeutic targets, including genes encoding drug targets for lipid lowering, and identify previously unidentified rare alleles associated with lipid levels and other blood level traits. About 3.5% of individuals harbor deleterious variants in 76 clinically actionable genes. The DiscovEHR data set provides a blueprint for large-scale precision medicine initiatives and genomics-guided therapeutic discovery.",
"title": ""
},
{
"docid": "0f6183057c6b61cefe90e4fa048ab47f",
"text": "This paper investigates the use of Deep Bidirectional Long Short-Term Memory based Recurrent Neural Networks (DBLSTM-RNNs) for voice conversion. Temporal correlations across speech frames are not directly modeled in frame-based methods using conventional Deep Neural Networks (DNNs), which results in a limited quality of the converted speech. To improve the naturalness and continuity of the speech output in voice conversion, we propose a sequence-based conversion method using DBLSTM-RNNs to model not only the frame-wised relationship between the source and the target voice, but also the long-range context-dependencies in the acoustic trajectory. Experiments show that DBLSTM-RNNs outperform DNNs where Mean Opinion Scores are 3.2 and 2.3 respectively. Also, DBLSTM-RNNs without dynamic features have better performance than DNNs with dynamic features.",
"title": ""
},
{
"docid": "2a225a33dc4d8cd08d0ae4a18d8b267c",
"text": "Support Vector Machines is a powerful methodology for solving problems in nonlinear classification, function estimation and density estimation which has also led recently to many new developments in kernel based learning in general. In these methods one solves convex optimization problems, typically quadratic programs. We focus on Least Squares Support Vector Machines which are reformulations to standard SVMs that lead to solving linear KKT systems. Least squares support vector machines are closely related to regularization networks and Gaussian processes but additionally emphasize and exploit primaldual interpretations from optimization theory. In view of interior point algorithms such LS-SVM KKT systems can be considered as a core problem. Where needed the obtained solutions can be robustified and/or sparsified. As an alternative to a top-down choice of the cost function, methods from robust statistics are employed in a bottom-up fashion for further improving the estimates. We explain the natural links between LS-SVM classifiers and kernel Fisher discriminant analysis. The framework is further extended towards unsupervised learning by considering PCA analysis and its kernel version as a one-class modelling problem. This leads to new primal-dual support vector machine formulations for kernel PCA and kernel canonical correlation analysis. Furthermore, LS-SVM formulations are mentioned towards recurrent networks and control, thereby extending the methods from static to dynamic problems. In general, support vector machines may pose heavy computational challenges for large data sets. For this purpose, we propose a method of Fixed Size LS-SVM where the estimation is done in the primal space in relation to a Nyström sampling with active selection of support vectors and we discuss extensions to committee networks. The methods will be illustrated by several benchmark and real-life applications.",
"title": ""
},
{
"docid": "7a3b5ab64e9ef5cd0f0b89391bb8bee2",
"text": "Quality enhancement of humanitarian assistance is far from a technical task. It is interwoven with debates on politics of principles and people are intensely committed to the various outcomes these debates might have. It is a field of strongly competing truths, each with their own rationale and appeal. The last few years have seen a rapid increase in discussions, policy paper and organisational initiatives regarding the quality of humanitarian assistance. This paper takes stock of the present initiatives and of the questions raised with regard to the quality of humanitarian assistance.",
"title": ""
},
{
"docid": "c0f958c7bb692f8a405901796445605a",
"text": "Thickening is the first step in the design of sustainable (cost effective, environmentally friendly, and socially viable) tailings management solutions for surface deposition, mine backfilling, and sub-aqueous discharge. The high water content slurries are converted to materials with superior dewatering properties by adding long-chain synthetic polymers. Given the solid and liquid composition of a slurry, a high settling rate alongside a high solids content can be achieved by optimizing the various polymers parameters: ionic type (T), charge density (C), molecular weight (M), and dosage (D). This paper developed a statistical model to predict field performance of a selected metal mine slurry using laboratory test data. Results of sedimentationconsolidation tests were fitted using the method of least squares. A newly devised polymer characteristic coefficient (Cp) that combined the various polymer parameters correlated well with the observed dewatering behavior as the R equalled 0.95 for void ratio and 0.84 for hydraulic conductivity. The various combinations of polymer parameters resulted in variable slurry performance during sedimentation and were found to converge during consolidation. Further, the void ratio-effective stress and the hydraulic conductivity-void ratio relationships were found to be e = a σ′ b and k = 10 (c + e , respectively.",
"title": ""
},
{
"docid": "bebd0ea7946bbe44335b951c9c917d0b",
"text": "Increasing hospital re-admission rates due to Hospital Acquired Infections (HAIs) are a concern at many healthcare facilities. To prevent the spread of HAIs, caregivers should comply with hand hygiene guidelines, which require reliable and timely hand hygiene compliance monitoring systems. The current standard practice of monitoring compliance involves the direct observation of caregivers' hand cleaning as they enter or exit a patient room by a trained observer, which can be time-consuming, resource-intensive, and subject to bias. To alleviate tedious manual effort and reduce errors, this paper describes how we applied machine learning to study the characteristics of compliance that can later be used to (1) assist direct observation by deciding when and where to station manual auditors and (2) improve compliance by providing just-in-time alerts or recommending training materials to non-compliant staff. The paper analyzes location and handwashing station activation data from a 30-bed intensive care unit study and uses machine learning to assess if location, time-based factors, or other behavior data can determine what characteristics are predictive of handwashing non-compliance events. The results of this study show that a care provider's entry compliance is highly indicative of the same provider's exit compliance. Moreover, compliance of the most recent patient room visit can also predict entry compliance of a provider's current patient room visit.",
"title": ""
},
{
"docid": "932dc0c02047cd701e41530c42d830bc",
"text": "The concept of \"extra-cortical organization of higher mental functions\" proposed by Lev Vygotsky and expanded by Alexander Luria extends cultural-historical psychology regarding the interplay of natural and cultural factors in the development of the human mind. Using the example of self-regulation, the authors explore the evolution of this idea from its origins to recent findings on the neuropsychological trajectories of the development of executive functions. Empirical data derived from the Tools of the Mind project are used to discuss the idea of using classroom intervention to study the development of self-regulation in early childhood.",
"title": ""
},
{
"docid": "de298bb631dd0ca515c161b6e6426a85",
"text": "We address the problem of sharpness enhancement of images. Existing hierarchical techniques that decompose an image into a smooth image and high frequency components based on Gaussian filter and bilateral filter suffer from halo effects, whereas techniques based on weighted least squares extract low contrast features as detail. Other techniques require multiple images and are not tolerant to noise.",
"title": ""
},
{
"docid": "d63946a096b9e8a99be6d5ddfe4097da",
"text": "While the first open comparative challenges in the field of paralinguistics targeted more ‘conventional’ phenomena such as emotion, age, and gender, there still exists a multiplicity of not yet covered, but highly relevant speaker states and traits. The INTERSPEECH 2011 Speaker State Challenge thus addresses two new sub-challenges to overcome the usually low compatibility of results: In the Intoxication Sub-Challenge, alcoholisation of speakers has to be determined in two classes; in the Sleepiness Sub-Challenge, another two-class classification task has to be solved. This paper introduces the conditions, the Challenge corpora “Alcohol Language Corpus” and “Sleepy Language Corpus”, and a standard feature set that may be used. Further, baseline results are given.",
"title": ""
},
{
"docid": "244116ffa1ed424fc8519eedc7062277",
"text": "This paper describes a method of automatic placement for standard cells (polycells) that yields areas within 10-20 percent of careful hand placements. The method is based on graph partitioning to identify groups of modules that ought to be close to each other, and a technique for properly accounting for external connections at each level of partitioning. The placement procedure is in production use as part of an automated design system; it has been used in the design of more than 40 chips, in CMOS, NMOS, and bipolar technologies.",
"title": ""
},
{
"docid": "314e10ba42a13a84b40a1b0367bd556e",
"text": "How do users behave in online chatrooms, where they instantaneously read and write posts? We analyzed about 2.5 million posts covering various topics in Internet relay channels, and found that user activity patterns follow known power-law and stretched exponential distributions, indicating that online chat activity is not different from other forms of communication. Analysing the emotional expressions (positive, negative, neutral) of users, we revealed a remarkable persistence both for individual users and channels. I.e. despite their anonymity, users tend to follow social norms in repeated interactions in online chats, which results in a specific emotional \"tone\" of the channels. We provide an agent-based model of emotional interaction, which recovers qualitatively both the activity patterns in chatrooms and the emotional persistence of users and channels. While our assumptions about agent's emotional expressions are rooted in psychology, the model allows to test different hypothesis regarding their emotional impact in online communication.",
"title": ""
},
{
"docid": "aa1ce09a8ad407ce413d9e56e13e79d4",
"text": "A boost-flyback converter was investigated for its ability to charge separate battery stacks from a low-voltage high-current renewable energy source. A low voltage (12V) battery was connected in the boost configuration, and a high voltage (330V) battery stack was connected in the flyback configuration. This converter works extremely well for this application because it gives charging priority to the low voltage battery and dumps the reserve energy to the high voltage stack. As the low-voltage battery approaches full charge, more power is adaptively directed to the high-voltage stack, until finally the charging of the low voltage battery stops. A two-secondary flyback is also capable of this adaptive charging, but the boost-flyback does it with much higher conversion efficiency, and with a simpler (less expensive) transformer design.",
"title": ""
},
{
"docid": "a7cc7076d324f33d5e9b40756c5e1631",
"text": "Social learning analytics introduces tools and methods that help improving the learning process by providing useful information about the actors and their activity in the learning system. This study examines the relation between SNA parameters and student outcomes, between network parameters and global course performance, and it shows how visualizations of social learning analytics can help observing the visible and invisible interactions occurring in online distance education. The findings from our empirical study show that future research should further investigate whether there are conditions under which social network parameters are reliable predictors of academic performance, but also advises against relying exclusively in social network parameters for predictive purposes. The findings also show that data visualization is a useful tool for social learning analytics, and how it may provide additional information about actors and their behaviors for decision making in online distance",
"title": ""
},
{
"docid": "23d9479a38afa6e8061fe431047bed4e",
"text": "We introduce cMix, a new approach to anonymous communications. Through a precomputation, the core cMix protocol eliminates all expensive realtime public-key operations—at the senders, recipients and mixnodes—thereby decreasing real-time cryptographic latency and lowering computational costs for clients. The core real-time phase performs only a few fast modular multiplications. In these times of surveillance and extensive profiling there is a great need for an anonymous communication system that resists global attackers. One widely recognized solution to the challenge of traffic analysis is a mixnet, which anonymizes a batch of messages by sending the batch through a fixed cascade of mixnodes. Mixnets can offer excellent privacy guarantees, including unlinkability of sender and receiver, and resistance to many traffic-analysis attacks that undermine many other approaches including onion routing. Existing mixnet designs, however, suffer from high latency in part because of the need for real-time public-key operations. Precomputation greatly improves the real-time performance of cMix, while its fixed cascade of mixnodes yields the strong anonymity guarantees of mixnets. cMix is unique in not requiring any real-time public-key operations by users. Consequently, cMix is the first mixing suitable for low latency chat for lightweight devices. Our presentation includes a specification of cMix, security arguments, anonymity analysis, and a performance comparison with selected other approaches. We also give benchmarks from our prototype.",
"title": ""
},
{
"docid": "d1aa525575e33c587d86e89566c21a49",
"text": "This paper investigates the problem of fault detection for nonlinear discrete-time networked systems under an event-triggered scheme. A polynomial fuzzy fault detection filter is designed to generate a residual signal and detect faults in the system. A novel polynomial event-triggered scheme is proposed to determine the transmission of the signal. A fault detection filter is designed to guarantee that the residual system is asymptotically stable and satisfies the desired performance. Polynomial approximated membership functions obtained by Taylor series are employed for filtering analysis. Furthermore, sufficient conditions are represented in terms of sum of squares (SOSs) and can be solved by SOS tools in MATLAB environment. A numerical example is provided to demonstrate the effectiveness of the proposed results.",
"title": ""
},
{
"docid": "73a2b8479bb57d4e94a7fc629ee4528a",
"text": "OBJECTIVES\nQuantitative olfactory assessment is often neglected in clinical practice, although olfactory loss can assist to diagnosis and may lead to significant morbidity. \"Sniffin' Sticks\" is a modern test of nasal chemosensory performance that is based on penlike odor-dispensing devices. It consists of three tests of olfactory function: odor threshold, odor discrimination, and odor identification. The results of this test may be presented as a composite threshold-discrimination-identification (TDI) score. The aim of this study was first to develop normative data of olfactory function for the Greek population using this test and second to relate olfactory performance to age, sex, and side examined.\n\n\nSTUDY DESIGN\nThe authors conducted a prospective clinical trial.\n\n\nMETHODS\nA total of 93 healthy subjects were included in the study, 48 males and 45 females, mean age of 44.5 years (range, 6-84 years).\n\n\nRESULTS\nA database of normal values for olfactory testing was established for the Greek population. Females performed better than males and older subjects performed less efficiently in all tests. We also found a right nostril advantage compared with the left. Additionally, scores obtained from bilateral presentation were similar with scores obtained from the nostril with the better performance.\n\n\nCONCLUSIONS\nThe \"Sniffin' Sticks\" can be used effectively in the Greek population to evaluate olfactory performance. Mean values of olfactory tests obtained were better in comparison with data from settings located in central and northern Europe.",
"title": ""
},
{
"docid": "14baf30e1bdf7e31082fc2f1be8ea01c",
"text": "Different concentrations (3, 30, 300, and 3000 mg/L of culture fluid) of garlic oil (GAR), diallyl sulfide (DAS), diallyl disulfide (DAD), allicin (ALL), and allyl mercaptan (ALM) were incubated for 24 h in diluted ruminal fluid with a 50:50 forage:concentrate diet (17.7% crude protein; 30.7% neutral detergent fiber) to evaluate their effects on rumen microbial fermentation. Garlic oil (30 and 300 mg/L), DAD (30 and 300 mg/L), and ALM (300 mg/L) resulted in lower molar proportion of acetate and higher proportions of propionate and butyrate. In contrast, at 300 mg/L, DAS only increased the proportion of butyrate, and ALL had no effects on volatile fatty acid proportions. In a dual-flow continuous culture of rumen fluid fed the same 50:50 forage:concentrate diet, addition of GAR (312 mg/L), DAD (31.2 and 312 mg/L), and ALM (31.2 and 312 mg/L) resulted in similar changes to those observed in batch culture, with the exception of the lack of effect of DAD on the proportion of propionate. In a third in vitro study, the potential of GAR (300 mg/L), DAD (300 mg/L), and ALM (300 mg/L) to decrease methane production was evaluated. Treatments GAR, DAD, and ALM resulted in a decrease in methane production of 73.6, 68.5, and 19.5%, respectively, compared with the control. These results confirm the ability of GAR, DAD, and ALM to decrease methane production, which may help to improve the efficiency of energy use in the rumen.",
"title": ""
}
] |
scidocsrr
|
81057324736ea87689acdea7bd8296cf
|
Crowd-sourcing NLG Data: Pictures Elicit Better Data
|
[
{
"docid": "675a865e7335b2c9bd0cccf1317a5d27",
"text": "The relationship between financial incentives and performance, long of interest to social scientists, has gained new relevance with the advent of web-based \"crowd-sourcing\" models of production. Here we investigate the effect of compensation on performance in the context of two experiments, conducted on Amazon's Mechanical Turk (AMT). We find that increased financial incentives increase the quantity, but not the quality, of work performed by participants, where the difference appears to be due to an \"anchoring\" effect: workers who were paid more also perceived the value of their work to be greater, and thus were no more motivated than workers paid less. In contrast with compensation levels, we find the details of the compensation scheme do matter--specifically, a \"quota\" system results in better work for less pay than an equivalent \"piece rate\" system. Although counterintuitive, these findings are consistent with previous laboratory studies, and may have real-world analogs as well.",
"title": ""
},
{
"docid": "ca0f2b3565b6479c5c3b883325bf3296",
"text": "We present a simple, robust generation system which performs content selection and surface realization in a unified, domain-independent framework. In our approach, we break up the end-to-end generation process into a sequence of local decisions, arranged hierarchically and each trained discriminatively. We deployed our system in three different domains—Robocup sportscasting, technical weather forecasts, and common weather forecasts, obtaining results comparable to state-ofthe-art domain-specific systems both in terms of BLEU scores and human evaluation.",
"title": ""
}
] |
[
{
"docid": "c47525f2456de0b9b87a5ebbb5a972fb",
"text": "This article reviews the potential use of visual feedback, focusing on mirror visual feedback, introduced over 15 years ago, for the treatment of many chronic neurological disorders that have long been regarded as intractable such as phantom pain, hemiparesis from stroke and complex regional pain syndrome. Apart from its clinical importance, mirror visual feedback paves the way for a paradigm shift in the way we approach neurological disorders. Instead of resulting entirely from irreversible damage to specialized brain modules, some of them may arise from short-term functional shifts that are potentially reversible. If so, relatively simple therapies can be devised--of which mirror visual feedback is an example--to restore function.",
"title": ""
},
{
"docid": "ced8cc9329777cc01cdb3e91772a29c2",
"text": "Manually annotating clinical document corpora to generate reference standards for Natural Language Processing (NLP) systems or Machine Learning (ML) is a timeconsuming and labor-intensive endeavor. Although a variety of open source annotation tools currently exist, there is a clear opportunity to develop new tools and assess functionalities that introduce efficiencies into the process of generating reference standards. These features include: management of document corpora and batch assignment, integration of machine-assisted verification functions, semi-automated curation of annotated information, and support of machine-assisted pre-annotation. The goals of reducing annotator workload and improving the quality of reference standards are important considerations for development of new tools. An infrastructure is also needed that will support largescale but secure annotation of sensitive clinical data as well as crowdsourcing which has proven successful for a variety of annotation tasks. We introduce the Extensible Human Oracle Suite of Tools (eHOST) http://code.google.com/p/ehost that provides such functionalities that when coupled with server integration offer an end-to-end solution to carry out small or large scale as well as crowd sourced annotation projects.",
"title": ""
},
{
"docid": "5b84008df77e2ff8929cd759ae92de7d",
"text": "Purpose – Organizations invest in enterprise systems (ESs) with an expectation to share digital information from disparate sources to improve organizational effectiveness. This study aims to examine how organizations realize digital business strategies using an ES. It does so by evaluating the ES data support activities for knowledge creation, particularly how ES data are transformed into corporate knowledge in relevance to business strategies sought. Further, how this knowledge leads to realization of the business benefits. The linkage between establishing digital business strategy, utilization of ES data in decision-making processes, and realized or unrealized benefits provides the reason for this study. Design/methodology/approach – This study develops and utilizes a transformational model of how ES data are transformed into knowledge and results to evaluate the role of digital business strategies in achieving benefits using an ES. Semi-structured interviews are first conducted with ES vendors, consultants and IT research firms to understand the process of ES data transformation for realizing business strategies from their perspective. This is followed by three in-depth cases (two large and one medium-sized organization) who have implemented ESs. The empirical data are analyzed using the condensation approach. This method condenses the data into multiple groups according to pre-defined categories, which follow the scope of the research questions. Findings – The key findings emphasize that strategic benefit realization from an ES implementation is a holistic process that not only includes the essential data and technology factors, but also includes factors such as digital business strategy deployment, people and process management, and skills and competency development. Although many companies are mature with their ES implementation, these firms have only recently started aligning their ES capabilities with digital business strategies correlating data, decisions, and actions to maximize business value from their ES investment. Research limitations/implications – The findings reflect the views of two large and one mediumsized organization in the manufacturing sector. Although the evidence of the benefit realization process success and its results is more prominent in larger organizations than medium-sized, it may not be generalized that smaller firms cannot achieve these results. Exploration of these aspects in smaller firms or a different industry sector such as retail/service would be of value. Practical implications – The paper highlights the importance of tools and practices for accessing relevant information through an integrated ES so that competent decisions can be established towards achieving digital business strategies, and optimizing organizational performance. Knowledge is a key factor in this process. Originality/value – The paper evaluates a holistic framework for utilization of ES data in realizing digital business strategies. Thus, it develops an enhanced transformational cycle model for ES data transformation into knowledge and results, which maintains to build up the transformational process success in the long term.",
"title": ""
},
{
"docid": "a479d5f8313bc7aabe8154071706fb40",
"text": "Test-Driven Development (TDD) [Beck 2002] is one of the most referenced, yet least used agile practices in industry. Its neglect is due mostly to our lack of understanding of its effects on people, processes, and products. Although most people agree that writing a test case before code promotes more robust implementation and a better design, the unknown costs associated with TDD’s effects and the inversion of the ubiquitous programmer “code-then-test” paradigm has impeded TDD’s adoption.",
"title": ""
},
{
"docid": "2b3929da96949056bc473e8da947cebe",
"text": "This paper presents “Value-Difference Based Exploration” (VDBE), a method for balancing the exploration/exploitation dilemma inherent to reinforcement learning. The proposed method adapts the exploration parameter of ε-greedy in dependence of the temporal-difference error observed from value-function backups, which is considered as a measure of the agent’s uncertainty about the environment. VDBE is evaluated on a multi-armed bandit task, which allows for insight into the behavior of the method. Preliminary results indicate that VDBE seems to be more parameter robust than commonly used ad hoc approaches such as ε-greedy or softmax.",
"title": ""
},
{
"docid": "d40e565a2ed22af998ae60f670210f57",
"text": "Research on human infants has begun to shed light on early-develpping processes for segmenting perceptual arrays into objects. Infants appear to perceive objects by analyzing three-dimensional surface arrangements and motions. Their perception does not accord with a general tendency to maximize figural goodness or to attend-to nonaccidental geometric relations in visual arrays. Object perception does accord with principles governing the motions of material bodies: Infants divide perceptual arrays into units that move as connected wholes, that move separately from one another, that tend to maintain their size and shape over motion, and that tend to act upon each other only on contact. These findings suggest that o general representation of object unity and boundaries is interposed between representations of surfaces and representations of obiects of familiar kinds. The processes that construct this representation may be related to processes of physical reasoning. This article is animated by two proposals about perception and perceptual development. One proposal is substantive: In situations where perception develops through experience, but without instruction or deliberate reflection , development tends to enrich perceptual abilities but not to change them fundamentally. The second proposal is methodological: In the above situations , studies of the origins and early development of perception can shed light on perception in its mature state. These proposals will arise from a discussion of the early development of one perceptual ability: the ability to organize arrays of surfaces into unitary, bounded, and persisting objects. PERCEIVING OBJECTS In recent years, my colleagues and I have been studying young infants' perception of objects in complex displays in which objects are adjacent to other objects, objects are partly hidden behind other objects, of objects move fully",
"title": ""
},
{
"docid": "b96a3320940344dea37f5deccf0e16b2",
"text": "This paper proposes a modulated hysteretic current control (MHCC) technique to improve the transient response of a DC-DC boost converter, which suffers from low bandwidth due to the existence of the right-half-plane (RHP) zero. The MHCC technique can automatically adjust the on-time value to rapidly increase the inductor current, as well as to shorten the transient response time. In addition, based on the characteristic of the RHP zero, the compensation poles and zero are deliberately adjusted to achieve fast transient response in case of load transient condition and adequate phase margin in steady state. Experimental results show the improvement of transient recovery time over 7.2 times in the load transient response compared with the conventional boost converter design when the load current changes from light to heavy or vice versa. The power consumption overhead is merely 1%.",
"title": ""
},
{
"docid": "2cbae69bfb5d1379383cd1cf3e1237ef",
"text": "TerraSAR-X, the first civil German synthetic aperture radar (SAR) satellite has been successfully launched in 2007, June 15th. After 4.5 days the first processed image has been obtained. The overall quality of the image was outstanding, however, suspicious features could be identified which showed precipitation related signatures. These rain-cell signatures motivated a further in-depth study of the physical background of the related propagation effects. During the commissioning phase, a total of 12000 scenes have been investigated for potential propagation effects and about 100 scenes have revealed atmospheric effects to a visible extent. An interesting case of a data acquisition over New York will be presented which shows typical rain-cell signatures and the SAR image will be compared with weather-radar data acquired nearly simultaneously (within the same minute). Furthermore, in this contribution we discuss the influence of the atmosphere (troposphere) on the external calibration (XCAL) of TerraSAR-X. By acquiring simultaneous weather-radar data over the test-site and the SAR-acquisition it was possibleto improve the absolute calibration constant by 0.15 dB.",
"title": ""
},
{
"docid": "2d0c28d1c23ecee1f1a08be11a49aaa2",
"text": "Dictionary learning has became an increasingly important task in machine learning, as it is fundamental to the representation problem. A number of emerging techniques specifically include a codebook learning step, in which a critical knowledge abstraction process is carried out. Existing approaches in dictionary (codebook) learning are either generative (unsupervised e.g. k-means) or discriminative (supervised e.g. extremely randomized forests). In this paper, we propose a multiple instance learning (MIL) strategy (along the line of weakly supervised learning) for dictionary learning. Each code is represented by a classifier, such as a linear SVM, which naturally performs metric fusion for multi-channel features. We design a formulation to simultaneously learn mixtures of codes by maximizing classification margins in MIL. State-of-the-art results are observed in image classification benchmarks based on the learned codebooks, which observe both compactness and effectiveness.",
"title": ""
},
{
"docid": "2d6225b20cf13d2974ce78877642a2f7",
"text": "Low rank and sparse representation based methods, which make few specific assumptions about the background, have recently attracted wide attention in background modeling. With these methods, moving objects in the scene are modeled as pixel-wised sparse outliers. However, in many practical scenarios, the distributions of these moving parts are not truly pixel-wised sparse but structurally sparse. Meanwhile a robust analysis mechanism is required to handle background regions or foreground movements with varying scales. Based on these two observations, we first introduce a class of structured sparsity-inducing norms to model moving objects in videos. In our approach, we regard the observed sequence as being constituted of two terms, a low-rank matrix (background) and a structured sparse outlier matrix (foreground). Next, in virtue of adaptive parameters for dynamic videos, we propose a saliency measurement to dynamically estimate the support of the foreground. Experiments on challenging well known data sets demonstrate that the proposed approach outperforms the state-of-the-art methods and works effectively on a wide range of complex videos.",
"title": ""
},
{
"docid": "fac5744d86f96344fe1ad9c06e354a81",
"text": "Biocontrol fungi (BCF) are agents that control plant diseases. These include the well-known Trichoderma spp. and the recently described Sebacinales spp. They have the ability to control numerous foliar, root, and fruit pathogens and even invertebrates such as nematodes. However, this is only a subset of their abilities. We now know that they also have the ability to ameliorate a wide range of abiotic stresses, and some of them can also alleviate physiological stresses such as seed aging. They can also enhance nutrient uptake in plants and can substantially increase nitrogen use efficiency in crops. These abilities may be more important to agriculture than disease control. Some strains also have abilities to improve photosynthetic efficiency and probably respiratory activities of plants. All of these capabilities are a consequence of their abilities to reprogram plant gene expression, probably through activation of a limited number of general plant pathways.",
"title": ""
},
{
"docid": "8e794530be184686a49e5ced6ac6521d",
"text": "A key feature of the immune system is its ability to induce protective immunity against pathogens while maintaining tolerance towards self and innocuous environmental antigens. Recent evidence suggests that by guiding cells to and within lymphoid organs, CC-chemokine receptor 7 (CCR7) essentially contributes to both immunity and tolerance. This receptor is involved in organizing thymic architecture and function, lymph-node homing of naive and regulatory T cells via high endothelial venules, as well as steady state and inflammation-induced lymph-node-bound migration of dendritic cells via afferent lymphatics. Here, we focus on the cellular and molecular mechanisms that enable CCR7 and its two ligands, CCL19 and CCL21, to balance immunity and tolerance.",
"title": ""
},
{
"docid": "58331d0d42452d615b5a20da473ef5e2",
"text": "This paper introduces a new neural structure called FusionNet, which extends existing attention approaches from three perspectives. First, it puts forward a novel concept of “history of word” to characterize attention information from the lowest word-level embedding up to the highest semantic-level representation. Second, it introduces an improved attention scoring function that better utilizes the “history of word” concept. Third, it proposes a fully-aware multi-level attention mechanism to capture the complete information in one text (such as a question) and exploit it in its counterpart (such as context or passage) layer by layer. We apply FusionNet to the Stanford Question Answering Dataset (SQuAD) and it achieves the first position for both single and ensemble model on the official SQuAD leaderboard at the time of writing (Oct. 4th, 2017). Meanwhile, we verify the generalization of FusionNet with two adversarial SQuAD datasets and it sets up the new state-of-the-art on both datasets: on AddSent, FusionNet increases the best F1 metric from 46.6% to 51.4%; on AddOneSent, FusionNet boosts the best F1 metric from 56.0% to 60.7%.",
"title": ""
},
{
"docid": "c736258623c7f977ebc00f5555d13e02",
"text": "We present an important step towards the solution of the problem of inverse procedural modeling by generating parametric context-free L-systems that represent an input 2D model. The L-system rules efficiently code the regular structures and the parameters represent the properties of the structure transformations. The algorithm takes as input a 2D vector image that is composed of atomic elements, such as curves and poly-lines. Similar elements are recognized and assigned terminal symbols of an L-system alphabet. The terminal symbols’ position and orientation are pair-wise compared and the transformations are stored as points in multiple 4D transformation spaces. By careful analysis of the clusters in the transformation spaces, we detect sequences of elements and code them as L-system rules. The coded elements are then removed from the clusters, the clusters are updated, and then the analysis attempts to code groups of elements in (hierarchies) the same way. The analysis ends with a single group of elements that is coded as an L-system axiom. We recognize and code branching sequences of linearly translated, scaled, and rotated elements and their hierarchies. The L-system not only represents the input image, but it can also be used for various editing operations. By changing the L-system parameters, the image can be randomized, symmetrized, and groups of elements and regular structures can be edited. By changing the terminal and non-terminal symbols, elements or groups of elements can be replaced.",
"title": ""
},
{
"docid": "56ced0e34c82f085eeba595753d423d1",
"text": "The correctness of software is affected by its constant changes. For that reason, developers use change-impact analysis to identify early the potential consequences of changing their software. Dynamic impact analysis is a practical technique that identifies potential impacts of changes for representative executions. However, it is unknown how reliable its results are because their accuracy has not been studied. This paper presents the first comprehensive study of the predictive accuracy of dynamic impact analysis in two complementary ways. First, we use massive numbers of random changes across numerous Java applications to cover all possible change locations. Then, we study more than 100 changes from software repositories, which are representative of developer practices. Our experimental approach uses sensitivity analysis and execution differencing to systematically measure the precision and recall of dynamic impact analysis with respect to the actual impacts observed for these changes. Our results for both types of changes show that the most cost-effective dynamic impact analysis known is surprisingly inaccurate with an average precision of 38-50% and average recall of 50-56% in most cases. This comprehensive study offers insights on the effectiveness of existing dynamic impact analyses and motivates the future development of more accurate impact analyses.",
"title": ""
},
{
"docid": "ea3dfb0ea22c01b670a7b11f21aa06f2",
"text": "One of the classical goals of research in artificial intelligence is to construct systems that automatically recover the meaning of natural language text. Machine learning methods hold significant potential for addressing many of the challenges involved with these systems. This thesis presents new techniques for learning to map sentences to logical form — lambda-calculus representations of their meanings. We first describe an approach to the context-independent learning problem, where sentences are analyzed in isolation. We describe a learning algorithm that takes as input a training set of sentences labeled with expressions in the lambda calculus. The algorithm induces a Combinatory Categorial Grammar (CCG) for the problem, along with a log-linear model that represents a distribution over syntactic and semantic analyses conditioned on the input sentence. Next, we present an extension that addresses challenges that arise when learning to analyze spontaneous, unedited natural language input, as is commonly seen in natural language interface applications. A key idea is to introduce non-standard CCG combinators that relax certain parts of the grammar — for example allowing flexible word order, or insertion of lexical items — with learned costs. We also present a new, online algorithm for inducing a weighted CCG. Finally, we describe how to extend this learning approach to the context-dependent analysis setting, where the meaning of a sentence can depend on the context in which it appears. The training examples are sequences of sentences annotated with lambdacalculus meaning representations. We develop an algorithm that maintains explicit, lambda-calculus representations of discourse entities and uses a context-dependent analysis pipeline to recover logical forms. The method uses a hidden-variable variant of the perception algorithm to learn a linear model used to select the best analysis. Experiments demonstrate that the learning techniques we develop induce accurate models for semantic analysis while requiring less data annotate effort than previous approaches. Thesis Supervisor: Michael Collins Title: Associate Professor",
"title": ""
},
{
"docid": "3fa16d5e442bc4a2398ba746d6aaddfe",
"text": "Although many users create predictable passwords, the extent to which users realize these passwords are predictable is not well understood. We investigate the relationship between users' perceptions of the strength of specific passwords and their actual strength. In this 165-participant online study, we ask participants to rate the comparative security of carefully juxtaposed pairs of passwords, as well as the security and memorability of both existing passwords and common password-creation strategies. Participants had serious misconceptions about the impact of basing passwords on common phrases and including digits and keyboard patterns in passwords. However, in most other cases, participants' perceptions of what characteristics make a password secure were consistent with the performance of current password-cracking tools. We find large variance in participants' understanding of how passwords may be attacked, potentially explaining why users nonetheless make predictable passwords. We conclude with design directions for helping users make better passwords.",
"title": ""
},
{
"docid": "0e94af8b40bfac3d2ebb1dfced65eadc",
"text": "SimPy is a Python-based, interpreted simulation tool that offers the power and convenience of Python. It is able to launch processes and sub-processes using generators, which act autonomously and may interact using interrupts. SimPy offers other advantages over competing commercial codes in that it allows for modular development, use of a version control system such as CVS, can be made self-documenting with PyDoc, and is completely extensible. The convenience of an interpreted language, however, is offset for large models by slower than desired run times. This disadvantage can be compensated for by parallelizing the system using PyMPI, from the Lawrence Livermore National Laboratory. This work was performed under the auspices of the U.S. Department of Energy by University of California, Lawrence Livermore National Laboratory under Contract W-7405-Eng-48.",
"title": ""
},
{
"docid": "65af21566422d9f0a11f07d43d7ead13",
"text": "Scene labeling is a challenging computer vision task. It requires the use of both local discriminative features and global context information. We adopt a deep recurrent convolutional neural network (RCNN) for this task, which is originally proposed for object recognition. Different from traditional convolutional neural networks (CNN), this model has intra-layer recurrent connections in the convolutional layers. Therefore each convolutional layer becomes a two-dimensional recurrent neural network. The units receive constant feed-forward inputs from the previous layer and recurrent inputs from their neighborhoods. While recurrent iterations proceed, the region of context captured by each unit expands. In this way, feature extraction and context modulation are seamlessly integrated, which is different from typical methods that entail separate modules for the two steps. To further utilize the context, a multi-scale RCNN is proposed. Over two benchmark datasets, Standford Background and Sift Flow, the model outperforms many state-of-the-art models in accuracy and efficiency.",
"title": ""
}
] |
scidocsrr
|
9ed7b32594457fb2694f1f96731a15bd
|
Switched flux permanent magnet machines — Innovation continues
|
[
{
"docid": "a2b60ffe1ed8f8bd79363f4c5cff364b",
"text": "The flux-switching permanent-magnet (FSPM) machine is a relatively novel brushless machine having magnets and concentrated windings in the stator instead of rotor, which exhibits inherently sinusoidal back-EMF waveform and high torque capability. However, due to the high airgap flux density produced by magnets and the salient poles in both stator and rotor, the resultant torque ripple is relatively large, which is unfavorable for high performance drive system. In this paper, instead of conventional optimization on the machine itself, a new torque ripple suppression approach is proposed in which a series of specific harmonic currents are added into q-axis reference current, resulting in additional torque components to counteract the fundamental and second-order harmonic components of cogging torque. Both the simulation and experimental results confirm that the proposed approach can effectively suppress the torque ripple. It should be emphasized that this method is applicable to all PM machines having relatively large cogging torque.",
"title": ""
}
] |
[
{
"docid": "cb5d0498db49c8421fef279aea69c367",
"text": "The growing commoditization of the underground economy has given rise to malware delivery networks, which charge fees for quickly delivering malware or unwanted software to a large number of hosts. A key method to provide this service is through the orchestration of silent delivery campaigns. These campaigns involve a group of downloaders that receive remote commands and then deliver their payloads without any user interaction. These campaigns can evade detection by relying on inconspicuous downloaders on the client side and on disposable domain names on the server side. We describe Beewolf, a system for detecting silent delivery campaigns from Internet-wide records of download events. The key observation behind our system is that the downloaders involved in these campaigns frequently retrieve payloads in lockstep. Beewolf identifies such locksteps in an unsupervised and deterministic manner, and can operate on streaming data. We utilize Beewolf to study silent delivery campaigns at scale, on a data set of 33.3 million download events. This investigation yields novel findings, e.g. malware distributed through compromised software update channels, a substantial overlap between the delivery ecosystems for malware and unwanted software, and several types of business relationships within these ecosystems. Beewolf achieves over 92% true positives and fewer than 5% false positives. Moreover, Beewolf can detect suspicious downloaders a median of 165 days ahead of existing anti-virus products and payload-hosting domains a median of 196 days ahead of existing blacklists.",
"title": ""
},
{
"docid": "d12a47e1b72532a3c2c028620eba44d6",
"text": "Mel-filter banks are commonly used in speech recognition, as they are motivated from theory related to speech production and perception. While features derived from mel-filter banks are quite popular, we argue that this filter bank is not really an appropriate choice as it is not learned for the objective at hand, i.e. speech recognition. In this paper, we explore replacing the filter bank with a filter bank layer that is learned jointly with the rest of a deep neural network. Thus, the filter bank is learned to minimize cross-entropy, which is more closely tied to the speech recognition objective. On a 50-hour English Broadcast News task, we show that we can achieve a 5% relative improvement in word error rate (WER) using the filter bank learning approach, compared to having a fixed set of filters.",
"title": ""
},
{
"docid": "cd82eb636078b633060a857a4eb2b47b",
"text": "The importance of mobile application specific testing techniques and methods has been attracting much attention of software engineers over the past few years. This is due to the fact that mobile applications are different than traditional web and desktop applications, and more and more they are moving to being used in critical domains. Mobile applications require a different approach to application quality and dependability and require an effective testing approach to build high quality and more reliable software. We performed a systematic mapping study to categorize and to structure the research evidence that has been published in the area of mobile application testing techniques and challenges that they have reported. Seventy nine (79) empirical studies are mapped to a classification schema. Several research gaps are identified and specific key testing issues for practitioners are identified: there is a need for eliciting testing requirements early during development process; the need to conduct research in real-world development environments; specific testing techniques targeting application life-cycle conformance and mobile services testing; and comparative studies for security and usability testing.",
"title": ""
},
{
"docid": "c059d43c51ec35ec7949b0a10d718b6f",
"text": "The problem of signal recovery from its Fourier transform magnitude is of paramount importance in various fields of engineering and has been around for more than 100 years. Due to the absence of phase information, some form of additional information is required in order to be able to uniquely identify the signal of interest. In this paper, we focus our attention on discrete-time sparse signals (of length <inline-formula><tex-math notation=\"LaTeX\">$n$ </tex-math></inline-formula>). We first show that if the discrete Fourier transform dimension is greater than or equal to <inline-formula><tex-math notation=\"LaTeX\">$2n$</tex-math></inline-formula>, then almost all signals with <italic> aperiodic</italic> support can be uniquely identified by their Fourier transform magnitude (up to time shift, conjugate flip, and global phase). Then, we develop an efficient two-stage sparse-phase retrieval algorithm (TSPR), which involves: identifying the support, i.e., the locations of the nonzero components, of the signal using a combinatorial algorithm; and identifying the signal values in the support using a convex algorithm. We show that TSPR can <italic> provably</italic> recover most <inline-formula><tex-math notation=\"LaTeX\">$O(n^{1/2-{\\epsilon }})$</tex-math> </inline-formula>-sparse signals (up to a time shift, conjugate flip, and global phase). We also show that, for most <inline-formula><tex-math notation=\"LaTeX\">$O(n^{1/4-{\\epsilon }})$</tex-math></inline-formula>-sparse signals, the recovery is <italic>robust</italic> in the presence of measurement noise. These recovery guarantees are asymptotic in nature. Numerical experiments complement our theoretical analysis and verify the effectiveness of TSPR.",
"title": ""
},
{
"docid": "ac740402c3e733af4d690e34e567fabe",
"text": "We address the problem of semantic segmentation: classifying each pixel in an image according to the semantic class it belongs to (e.g. dog, road, car). Most existing methods train from fully supervised images, where each pixel is annotated by a class label. To reduce the annotation effort, recently a few weakly supervised approaches emerged. These require only image labels indicating which classes are present. Although their performance reaches a satisfactory level, there is still a substantial gap between the accuracy of fully and weakly supervised methods. We address this gap with a novel active learning method specifically suited for this setting. We model the problem as a pairwise CRF and cast active learning as finding its most informative nodes. These nodes induce the largest expected change in the overall CRF state, after revealing their true label. Our criterion is equivalent to maximizing an upper-bound on accuracy gain. Experiments on two data-sets show that our method achieves 97% percent of the accuracy of the corresponding fully supervised model, while querying less than 17% of the (super-)pixel labels.",
"title": ""
},
{
"docid": "65d84bb6907a34f8bc8c4b3d46706e53",
"text": "This study analyzes the correlation between video game usage and academic performance. Scholastic Aptitude Test (SAT) and grade-point average (GPA) scores were used to gauge academic performance. The amount of time a student spends playing video games has a negative correlation with students' GPA and SAT scores. As video game usage increases, GPA and SAT scores decrease. A chi-squared analysis found a p value for video game usage and GPA was greater than a 95% confidence level (0.005 < p < 0.01). This finding suggests that dependence exists. SAT score and video game usage also returned a p value that was significant (0.01 < p < 0.05). Chi-squared results were not significant when comparing time spent studying and an individual's SAT score. This research suggests that video games may have a detrimental effect on an individual's GPA and possibly on SAT scores. Although these results show statistical dependence, proving cause and effect remains difficult, since SAT scores represent a single test on a given day. The effects of video games maybe be cumulative; however, drawing a conclusion is difficult because SAT scores represent a measure of general knowledge. GPA versus video games is more reliable because both involve a continuous measurement of engaged activity and performance. The connection remains difficult because of the complex nature of student life and academic performance. Also, video game usage may simply be a function of specific personality types and characteristics.",
"title": ""
},
{
"docid": "8020c67dd790bcff7aea0e103ea672f1",
"text": "Recent efforts in satellite communication research considered the exploitation of higher frequency bands as a valuable alternative to conventional spectrum portions. An example of this can be provided by the W-band (70-110 GHz). Recently, a scientific experiment carried on by the Italian Space Agency (ASI), namely the DAVID-DCE experiment, was aimed at exploring the technical feasibility of the exploitation of the W-band for broadband networking applications. Some preliminary results of DAVID research activities pointed out that phase noise and high Doppler-shift can severely compromise the efficiency of the modulation system, particularly for what concerns the aspects related to the carrier recovery. This problem becomes very critical when the use of spectrally efficient M-ary modulations is considered in order to profitably exploit the large amount of bandwidth available in the W-band. In this work, a novel carrier recovery algorithm has been proposed for a 16-QAM modulation and tested, considering the presence of phase noise and other kinds of non-ideal behaviors of the communication devices typical of W-band satellite transmission. Simulation results demonstrated the effectiveness the proposed solution for carrier recovery and pointed out the achievable spectral efficiency of the transmission system, considering some constraints about transmitted power, data BER and receiver bandwidth",
"title": ""
},
{
"docid": "0ca476ed89607680399604b39d76185b",
"text": "Honeybee swarms and complex brains show many parallels in how they make decisions. In both, separate populations of units (bees or neurons) integrate noisy evidence for alternatives, and, when one population exceeds a threshold, the alternative it represents is chosen. We show that a key feature of a brain--cross inhibition between the evidence-accumulating populations--also exists in a swarm as it chooses its nesting site. Nest-site scouts send inhibitory stop signals to other scouts producing waggle dances, causing them to cease dancing, and each scout targets scouts' reporting sites other than her own. An analytic model shows that cross inhibition between populations of scout bees increases the reliability of swarm decision-making by solving the problem of deadlock over equal sites.",
"title": ""
},
{
"docid": "bd820eea00766190675cd3e8b89477f2",
"text": "Mobile Edge Computing (MEC), a new concept that emerged about a year ago, integrating the IT and the Telecom worlds will have a great impact on the openness of the Telecom market. Furthermore, the virtualization revolution that has enabled the Cloud computing success will benefit the Telecom domain, which in turn will be able to support the IaaS (Infrastructure as a Service). The main objective of MEC solution is the export of some Cloud capabilities to the user's proximity decreasing the latency, augmenting the available bandwidth and decreasing the load on the core network. On the other hand, the Internet of Things (IoT), the Internet of the future, has benefited from the proliferation in the mobile phones' usage. Many mobile applications have been developed to connect a world of things (wearables, home automation systems, sensors, RFID tags etc.) to the Internet. Even if it is not a complete solution for a scalable IoT architecture but the time sensitive IoT applications (e-healthcare, real time monitoring, etc.) will profit from the MEC architecture. Furthermore, IoT can extend this paradigm to other areas (e.g. Vehicular Ad-hoc NETworks) with the use of Software Defined Network (SDN) orchestration to cope with the challenges hindering the IoT real deployment, as we will illustrate in this paper.",
"title": ""
},
{
"docid": "e4dd72a52d4961f8d4d8ee9b5b40d821",
"text": "Social media users spend several hours a day to read, post and search for news on microblogging platforms. Social media is becoming a key means for discovering news. However, verifying the trustworthiness of this information is becoming even more challenging. In this study, we attempt to address the problem of rumor detection and belief investigation on Twitter. Our definition of rumor is an unverifiable statement, which spreads misinformation or disinformation. We adopt a supervised rumors classification task using the standard dataset. By employing the Tweet Latent Vector (TLV) feature, which creates a 100-d vector representative of each tweet, we increased the rumor retrieval task precision up to 0.972. We also introduce the belief score and study the belief change among the rumor posters between 2010 and 2016.",
"title": ""
},
{
"docid": "b9720d1350bf89c8a94bb30276329ce2",
"text": "Generative concept representations have three major advantages over discriminative ones: they can represent uncertainty, they support integration of learning and reasoning, and they are good for unsupervised and semi-supervised learning. We discuss probabilistic and generative deep learning, which generative concept representations are based on, and the use of variational autoencoders and generative adversarial networks for learning generative concept representations, particularly for concepts whose data are sequences, structured data or graphs.",
"title": ""
},
{
"docid": "1eecc45f35f693cddc2b4fe972493396",
"text": "In this paper, we reformulate the conventional 2-D Frangi vesselness measure into a pre-weighted neural network (“Frangi-Net”), and illustrate that the Frangi-Net is equivalent to the original Frangi filter. Furthermore, we show that, as a neural network, Frangi-Net is trainable. We evaluate the proposed method on a set of 45 high resolution fundus images. After fine-tuning, we observe both qualitative and quantitative improvements in the segmentation quality compared to the original Frangi measure, with an increase up to 17% in F1 score.",
"title": ""
},
{
"docid": "e471e41553bf7c229a38f3d226ff8a28",
"text": "Large AC machines are sometimes fed by multiple inverters. This paper presents the complete steady-state analysis of the PM synchronous machine with multiplex windings, suitable for driving by multiple independent inverters. Machines with 4, 6 and 9 phases are covered in detail. Particular attention is given to the magnetic interactions not only between individual phases, but between channels or groups of phases. This is of interest not only for determining performance and designing control systems, but also for analysing fault tolerance. It is shown how to calculate the necessary self- and mutual inductances and how to reduce them to a compact dq-axis model without loss of detail.",
"title": ""
},
{
"docid": "d48430f65d844c92661d3eb389cdb2f2",
"text": "In organizations that use DevOps practices, software changes can be deployed as fast as 500 times or more per day. Without adequate involvement of the security team, rapidly deployed software changes are more likely to contain vulnerabilities due to lack of adequate reviews. The goal of this paper is to aid software practitioners in integrating security and DevOps by summarizing experiences in utilizing security practices in a DevOps environment. We analyzed a selected set of Internet artifacts and surveyed representatives of nine organizations that are using DevOps to systematically explore experiences in utilizing security practices. We observe that the majority of the software practitioners have expressed the potential of common DevOps activities, such as automated monitoring, to improve the security of a system. Furthermore, organizations that integrate DevOps and security utilize additional security activities, such as security requirements analysis and performing security configurations. Additionally, these teams also have established collaboration between the security team and the development and operations teams.",
"title": ""
},
{
"docid": "186c2180e7b681a350126225cd15ece0",
"text": "Two lactose-fermenting Salmonella typhi strains were isolated from bile and blood specimens of a typhoid fever patient who underwent a cholecystectomy due to cholelithiasis. One lactose-fermenting S. typhi strain was also isolated from a pus specimen which was obtained at the tip of the T-shaped tube withdrawn from the operative wound of the common bile duct of the patient. These three lactose-fermenting isolates: GIFU 11924 from bile, GIFU 11926 from pus, and GIFU 11927 from blood, were phenotypically identical to the type strain (GIFU 11801 = ATCC 19430 = NCTC 8385) of S. typhi, except that the three strains fermented lactose and failed to blacken the butt of Kligler iron agar or triple sugar iron agar medium. All three lactose-fermenting strains were resistant to chloramphenicol, ampicillin, sulfomethoxazole, trimethoprim, gentamicin, cephaloridine, and four other antimicrobial agents. The type strain was uniformly susceptible to these 10 drugs. The strain GIFU 11925, a lactose-negative dissociant from strain GIFU 11926, was also susceptible to these drugs, with the sole exception of chloramphenicol (minimal inhibitory concentration, 100 micrograms/ml).",
"title": ""
},
{
"docid": "5cfc2b3a740d0434cf0b3c2812bd6e7a",
"text": "Well, someone can decide by themselves what they want to do and need to do but sometimes, that kind of person will need some a logical approach to discrete math references. People with open minded will always try to seek for the new things and information from many sources. On the contrary, people with closed mind will always think that they can do it by their principals. So, what kind of person are you?",
"title": ""
},
{
"docid": "5ff7a82ec704c8fb5c1aa975aec0507c",
"text": "With the increase of an ageing population and chronic diseases, society becomes more health conscious and patients become “health consumers” looking for better health management. People’s perception is shifting towards patient-centered, rather than the classical, hospital–centered health services which has been propelling the evolution of telemedicine research from the classic e-Health to m-Health and now is to ubiquitous healthcare (u-Health). It is expected that mobile & ubiquitous Telemedicine, integrated with Wireless Body Area Network (WBAN), have a great potential in fostering the provision of next-generation u-Health. Despite the recent efforts and achievements, current u-Health proposed solutions still suffer from shortcomings hampering their adoption today. This paper presents a comprehensive review of up-to-date requirements in hardware, communication, and computing for next-generation u-Health systems. It compares new technological and technical trends and discusses how they address expected u-Health requirements. A thorough survey on various worldwide recent system implementations is presented in an attempt to identify shortcomings in state-of-the art solutions. In particular, challenges in WBAN and ubiquitous computing were emphasized. The purpose of this survey is not only to help beginners with a holistic approach toward understanding u-Health systems but also present to researchers new technological trends and design challenges they have to cope with, while designing such systems.",
"title": ""
},
{
"docid": "cb561e56e60ba0e5eef2034158c544c2",
"text": "Android is a modern and popular software platform for smartphones. Among its predominant features is an advanced security model which is based on application-oriented mandatory access control and sandboxing. This allows developers and users to restrict the execution of an application to the privileges it has (mandatorily) assigned at installation time. The exploitation of vulnerabilities in program code is hence believed to be confined within the privilege boundaries of an application’s sandbox. However, in this paper we show that a privilege escalation attack is possible. We show that a genuine application exploited at runtime or a malicious application can escalate granted permissions. Our results immediately imply that Android’s security model cannot deal with a transitive permission usage attack and Android’s sandbox model fails as a last resort against malware and sophisticated runtime attacks.",
"title": ""
},
{
"docid": "3fd551696803695056dd759d8f172779",
"text": "The aim of this research essay is to examine the structural nature of theory in Information Systems. Despite the impor tance of theory, questions relating to its form and structure are neglected in comparison with questions relating to episte mology. The essay addresses issues of causality, explanation, prediction, and generalization that underlie an understanding of theory. A taxonomy is proposed that classifies information systems theories with respect to the manner in which four central goals are addressed: analysis, explanation, predic tion, and prescription. Five interrelated types of theory are distinguished: (I) theory for analyzing, (2) theory for ex plaining, (3) theory for predicting, (4) theory for explaining and predicting, and (5) theory for design and action. Examples illustrate the nature of each theory type. The appli cability of the taxonomy is demonstrated by classifying a sample of journal articles. The paper contributes by showing that multiple views of theory exist and by exposing the assumptions underlying different viewpoints. In addition, it is suggested that the type of theory under development can influence the choice of an epistemological approach. Support Allen Lee was the accepting senior editor for this paper. M. Lynne Markus, Michael D. Myers, and Robert W. Zmud served as reviewers. is given for the legitimacy and value of each theory type. The building of integrated bodies of theory that encompass all theory types is advocated.",
"title": ""
},
{
"docid": "9f34152d5dd13619d889b9f6e3dfd5c3",
"text": "Nichols, M. (2003). A theory for eLearning. Educational Technology & Society, 6(2), 1-10, Available at http://ifets.ieee.org/periodical/6-2/1.html ISSN 1436-4522. © International Forum of Educational Technology & Society (IFETS). The authors and the forum jointly retain the copyright of the articles. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear the full citation on the first page. Copyrights for components of this work owned by others than IFETS must be honoured. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from the editors at kinshuk@massey.ac.nz. A theory for eLearning",
"title": ""
}
] |
scidocsrr
|
02f515ef921a0680dee230afc579ab4c
|
Small-Size LTE/WWAN Tablet Device Antenna With Two Hybrid Feeds
|
[
{
"docid": "8e84a474e5b7f6451a6073a3e68b1c34",
"text": "A small-size tablet device antenna with three wide operating bands to cover the Long Term Evolution/Wireless Wide Area Network (LTE/WWAN) operation in the 698 ~ 960-, 1710 ~ 2690-, and 3400 ~ 3800-MHz bands is presented. The antenna has a planar structure and is easy to fabricate on one surface of a thin FR4 substrate of size 10×45×0.8 mm3. The antenna is formed by adding a first branch (branch 1, an inductively coupled strip) and a second branch (branch 2, a simple branch strip) to a coupled-fed shorted strip antenna (main portion), and the two branches are configured with the main portion to achieve a compact antenna structure. The three widebands are easy to adjust and can cover the LTE/WWAN operation, which includes the most commonly commercial LTE bands and WWAN bands (698 ~ 960 and 1710 ~ 2690 MHz) and the LTE 3.5-GHz band (3400 ~ 3800 MHz).",
"title": ""
}
] |
[
{
"docid": "d4fc45837d85f3a03fa4bd76b45921a1",
"text": "The importance of the road infrastructure for the society could be compared with importance of blood vessels for humans. To ensure road surface quality it should be monitored continuously and repaired as necessary. The optimal distribution of resources for road repairs is possible providing the availability of comprehensive and objective real time data about the state of the roads. Participatory sensing is a promising approach for such data collection. The paper is describing a mobile sensing system for road irregularity detection using Android OS based smart-phones. Selected data processing algorithms are discussed and their evaluation presented with true positive rate as high as 90% using real world data. The optimal parameters for the algorithms are determined as well as recommendations for their application.",
"title": ""
},
{
"docid": "b5372d4cad87aab69356ebd72aed0e0b",
"text": "Web content nowadays can also be accessed through new generation of Internet connected TVs. However, these products failed to change users’ behavior when consuming online content. Users still prefer personal computers to access Web content. Certainly, most of the online content is still designed to be accessed by personal computers or mobile devices. In order to overcome the usability problem of Web content consumption on TVs, this paper presents a knowledge graph based video generation system that automatically converts textual Web content into videos using semantic Web and computer graphics based technologies. As a use case, Wikipedia articles are automatically converted into videos. The effectiveness of the proposed system is validated empirically via opinion surveys. Fifty percent of survey users indicated that they found generated videos enjoyable and 42 % of them indicated that they would like to use our system to consume Web content on their TVs.",
"title": ""
},
{
"docid": "72f6f6484499ccaa0188d2a795daa74c",
"text": "Road detection is one of the most important research areas in driver assistance and automated driving field. However, the performance of existing methods is still unsatisfactory, especially in severe shadow conditions. To overcome those difficulties, first we propose a novel shadow-free feature extractor based on the color distribution of road surface pixels. Then we present a road detection framework based on the extractor, whose performance is more accurate and robust than that of existing extractors. Also, the proposed framework has much low-complexity, which is suitable for usage in practical systems.",
"title": ""
},
{
"docid": "7c287295e022480314d8a2627cd12cef",
"text": "The causal role of human papillomavirus infections in cervical cancer has been documented beyond reasonable doubt. The association is present in virtually all cervical cancer cases worldwide. It is the right time for medical societies and public health regulators to consider this evidence and to define its preventive and clinical implications. A comprehensive review of key studies and results is presented.",
"title": ""
},
{
"docid": "e82459841d697a538f3ab77817ed45e7",
"text": "A mm-wave digital transmitter based on a 60 GHz all-digital phase-locked loop (ADPLL) with wideband frequency modulation (FM) for FMCW radar applications is proposed. The fractional-N ADPLL employs a high-resolution 60 GHz digitally-controlled oscillator (DCO) and is capable of multi-rate two-point FM. It achieves a measured rms jitter of 590.2 fs, while the loop settles within 3 μs. The measured reference spur is only -74 dBc, the fractional spurs are below -62 dBc, with no other significant spurs. A closed-loop DCO gain linearization scheme realizes a GHz-level triangular chirp across multiple DCO tuning banks with a measured frequency error (i.e., nonlinearity) in the FMCW ramp of only 117 kHz rms for a 62 GHz carrier with 1.22 GHz bandwidth. The synthesizer is transformer-coupled to a 3-stage neutralized power amplifier (PA) that delivers +5 dBm to a 50 Ω load. Implemented in 65 nm CMOS, the transmitter prototype (including PA) consumes 89 mW from a 1.2 V supply.",
"title": ""
},
{
"docid": "dd8194c7f8e28e55fbc45f0d71336112",
"text": "Followers' identification with the leader and the organizational unit, dependence on the leader, and empowerment by the leader are often attributed to transformational leadership in organizations. However, these hypothesized outcomes have received very little attention in empirical studies. Using a sample of 888 bank employees working under 76 branch manages, the authors tested the relationships between transformational leadership and these outcomes. They found that transformational leadership was positively related to both followers' dependence and their empowerment and that personal identification mediated the relationship between transformational leadership and followers' dependence on the leader, whereas social identification mediated the relationship between transformational leadership and followers' empowerment. The authors discuss the implications of these findings to both theory and practice.",
"title": ""
},
{
"docid": "659818e97cd3329d603097c122541815",
"text": "A large-scale content analysis of characters in video games was employed to answer questions about their representations of gender, race and age in comparison to the US population. The sample included 150 games from a year across nine platforms, with the results weighted according to game sales. This innovation enabled the results to be analyzed in proportion to the games that were actually played by the public, and thus allowed the first statements able to be generalized about the content of popular video games. The results show a systematic over-representation of males, white and adults and a systematic under-representation of females, Hispanics, Native Americans, children and the elderly. Overall, the results are similar to those found in television research. The implications for identity, cognitive models, cultivation and game research are discussed. new media & society Copyright © 2009 SAGE Publications Los Angeles, London, New Delhi, Singapore and Washington DC Vol 11(5): 815–834 [DOI: 10.1177/1461444809105354]",
"title": ""
},
{
"docid": "26b67fe7ee89c941d313187672b1d514",
"text": "Since permanent magnet linear synchronous motor (PMLSM) has a bright future in electromagnetic launch (EML), moving-magnet PMLSM with multisegment primary is a potential choice. To overcome the end effect in the junctions of armature units, three different ring windings are proposed for the multisegment primary of PMLSM: slotted ring windings, slotless ring windings, and quasi-sinusoidal ring windings. They are designed for various demands of EML, regarding the load levels and force fluctuations. Auxiliary iron yokes are designed to reduce the mover weights, and also help restrain the end effect. PMLSM with slotted ring windings has a higher thrust for heavy load EML. PMLSM with slotless ring windings eliminates the cogging effect, while PMLSM with quasi-sinusoidal ring windings has very low thrust ripple; they aim to launch the light aircraft and run smooth. Structure designs of these motors are introduced; motor models and parameter optimizations are accomplished by finite-element method (FEM). Then, performance advantages of the proposed motors are investigated by comparisons of common PMLSMs. At last, the prototypes are manufactured and tested to validate the feasibilities of ring winding motors with auxiliary iron yokes. The results prove that the proposed motors can effectively satisfy the requirements of EML.",
"title": ""
},
{
"docid": "15205e074804764a6df0bdb7186c0d8c",
"text": "Lactose (milk sugar) is a fermentable substrate. It can be fermented outside of the body to produce cheeses, yoghurts and acidified milks. It can be fermented within the large intestine in those people who have insufficient expression of lactase enzyme on the intestinal mucosa to ferment this disaccharide to its absorbable, simple hexose sugars: glucose and galactose. In this way, the issues of lactose intolerance and of fermented foods are joined. It is only at the extremes of life, in infancy and old age, in which severe and life-threatening consequences from lactose maldigestion may occur. Fermentation as part of food processing can be used for preservation, for liberation of pre-digested nutrients, or to create ethanolic beverages. Almost all cultures and ethnic groups have developed some typical forms of fermented foods. Lessons from fermentation of non-dairy items may be applicable to fermentation of milk, and vice versa.",
"title": ""
},
{
"docid": "365cadf5f980e7c99cc3c2416ca36ba1",
"text": "Epidemiologic studies from numerous disparate populations reveal that individuals with the habit of daily moderate wine consumption enjoy significant reductions in all-cause and particularly cardiovascular mortality when compared with individuals who abstain or who drink alcohol to excess. Researchers are working to explain this observation in molecular and nutritional terms. Moderate ethanol intake from any type of beverage improves lipoprotein metabolism and lowers cardiovascular mortality risk. The question now is whether wine, particularly red wine with its abundant content of phenolic acids and polyphenols, confers additional health benefits. Discovering the nutritional properties of wine is a challenging task, which requires that the biological actions and bioavailability of the >200 individual phenolic compounds be documented and interpreted within the societal factors that stratify wine consumption and the myriad effects of alcohol alone. Further challenge arises because the health benefits of wine address the prevention of slowly developing diseases for which validated biomarkers are rare. Thus, although the benefits of the polyphenols from fruits and vegetables are increasingly accepted, consensus on wine is developing more slowly. Scientific research has demonstrated that the molecules present in grapes and in wine alter cellular metabolism and signaling, which is consistent mechanistically with reducing arterial disease. Future research must address specific mechanisms both of alcohol and of polyphenolic action and develop biomarkers of their role in disease prevention in individuals.",
"title": ""
},
{
"docid": "ba959139c1fc6324f3c32a4e4b9bb16c",
"text": "The short-term unit commitment problem is traditionally solved as a single-objective optimization problem with system operation cost as the only objective. This paper presents multi-objectivization of the short-term unit commitment problem in uncertain environment by considering reliability as an additional objective along with the economic objective. The uncertainties occurring due to unit outage and load forecast error are incorporated using loss of load probability (LOLP) and expected unserved energy (EUE) reliability indices. The multi-objectivized unit commitment problem in uncertain environment is solved using our earlier proposed multi-objective evolutionary algorithm [1]. Simulations are performed on a test system of 26 thermal generating units and the results obtained are benchmarked against the study [2] where the unit commitment problem was solved as a reliability-constrained single-objective optimization problem. The simulation results demonstrate that the proposed multi-objectivized approach can find solutions with considerably lower cost than those obtained in the benchmark. Further, the efficiency and consistency of the proposed algorithm for multi-objectivized unit commitment problem is demonstrated by quantitative performance assessment using hypervolume indicator.",
"title": ""
},
{
"docid": "bb7bd1a00239a0b8b875ca03ccf218c3",
"text": "Objectives: To assess the effect of milk with honey in childre n undergoing tonsillectomy on bleeding, pain and wound healing. Methods: The experimental study wit contol group was conduct ed out ear, nose and throat clinic and outpatient clinic in a public hospital. In the study, it were studied with children undergoing tonsillectomy who are 6-17 years of age (N=68). The standardized natural flowe r honey was applied to children in the experimental group after tonsillectomy, every day, in addition to the standard diet in clinical routine. The children wer e assigned randomly the experimental and control groups accord ing to the operation sequence. In collecting the da ta, questionnaire, pain, wound healing and visual analo g scales was used. The data were analyzed by percen tage distributions, means, chi-square test, variance ana lysis, and correlation analysis. It was depended on ethical principles. Results: In the study, it was determined that not bleeding, is significant less pain and the level of wound he aling of children in group milk with honey than children i milk group (p<.001). It has been found that a st rong negative correlation between the level of pain and wound healing of children in milk with honey and mi lk groups (p<.001). Conclusions: It has been determined that milk with honey was ef fective in prevent bleeding, reducing pain, and accelerate wound healing. Honey, which is a natural nutrient is a safe care tool that can be applied i n children undergoing tonsillectomy without diabetes and aller gic to honey and oral feeding in addition to routin e clinical the diet.",
"title": ""
},
{
"docid": "20373fff73f01977417e9aaf1d88a53f",
"text": "In recent years, Deep Learning has been successfully applied to multimodal learning problems, with the aim of learning useful joint representations in data fusion applications. When the available modalities consist of time series data such as video, audio and sensor signals, it becomes imperative to consider their temporal structure during the fusion process. In this paper, we propose the Correlational Recurrent Neural Network (CorrRNN), a novel temporal fusion model for fusing multiple input modalities that are inherently temporal in nature. Key features of our proposed model include: (i) simultaneous learning of the joint representation and temporal dependencies between modalities, (ii) use of multiple loss terms in the objective function, including a maximum correlation loss term to enhance learning of cross-modal information, and (iii) the use of an attention model to dynamically adjust the contribution of different input modalities to the joint representation. We validate our model via experimentation on two different tasks: video-and sensor-based activity classification, and audio-visual speech recognition. We empirically analyze the contributions of different components of the proposed CorrRNN model, and demonstrate its robustness, effectiveness and state-of-the-art performance on multiple datasets.",
"title": ""
},
{
"docid": "131f119361582f0d538413680dfafd9d",
"text": "In this paper, the problems of current web search engines are analyzed, and the need for a new design is justified. Some ideas on how to improve current web search engines are presented, and then an adaptive method for web meta-search engines with a multi-agent specially the mobile agents is presented to make search engines work more efficiently. In the method, the cooperation between stationary and mobile agents is used to make more efficiency. The meta-search engine gives the user needed documents based on the multi-stage mechanism. The merge of the results obtained from the search engines in the network is done in parallel. Using a reduction parallel algorithm, the efficiency of this method is increased. Furthermore, a feedback mechanism gives the meta-search engine the user’s suggestions about the found documents, which leads to a new query using a genetic algorithm. In the new search stage, more relevant documents are given to the user. The practical experiments were performed in Aglets programming environment. The results achieved from these experiments confirm the efficiency and adaptability of the method.",
"title": ""
},
{
"docid": "63d301040ccb7051de18af0e2d6d93ba",
"text": "Image inpainting refers to the process of restoring missing or damaged areas in an image. This field of research has been very active over recent years, boosted by numerous applications: restoring images from scratches or text overlays, loss concealment in a context of impaired image transmission, object removal in a context of editing, or disocclusion in image-based rendering (IBR) of viewpoints different from those captured by the cameras. Although earlier work dealing with disocclusion has been published in [1], the term inpainting first appeared in [2] by analogy with a process used in art restoration.",
"title": ""
},
{
"docid": "c393b4afc1348e88edaa9eff07fdbe45",
"text": "The majority of the research related to visual recognition has so far focused on bottom-up analysis, where the input is processed in a cascade of cortical regions that analyze increasingly complex information. Gradually more studies emphasize the role of top-down facilitation in cortical analysis, but it remains something of a mystery how such processing would be initiated. After all, top-down facilitation implies that high-level information is activated earlier than some relevant lower-level information. Building on previous studies, I propose a specific mechanism for the activation of top-down facilitation during visual object recognition. The gist of this hypothesis is that a partially analyzed version of the input image (i.e., a blurred image) is projected rapidly from early visual areas directly to the prefrontal cortex (PFC). This coarse representation activates in the PFC expectations about the most likely interpretations of the input image, which are then back-projected as an initial guess to the temporal cortex to be integrated with the bottom-up analysis. The top-down process facilitates recognition by substantially limiting the number of object representations that need to be considered. Furthermore, such a rapid mechanism may provide critical information when a quick response is necessary.",
"title": ""
},
{
"docid": "603a4d4037ce9fc653d46473f9085d67",
"text": "In different applications like Complex document image processing, Advertisement and Intelligent transportation logo recognition is an important issue. Logo Recognition is an essential sub process although there are many approaches to study logos in these fields. In this paper a robust method for recognition of a logo is proposed, which involves K-nearest neighbors distance classifier and Support Vector Machine classifier to evaluate the similarity between images under test and trained images. For test images eight set of logo image with a rotation angle of 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315° are considered. A Dual Tree Complex Wavelet Transform features were used for determining features. Final result is obtained by measuring the similarity obtained from the feature vectors of the trained image and image under test. Total of 31 classes of logo images of different organizations are considered for experimental results. An accuracy of 87.49% is obtained using KNN classifier and 92.33% from SVM classifier.",
"title": ""
},
{
"docid": "78e4a57eff6ffc7ad012639933f8ebcc",
"text": "In this paper, we describe active and semi-supervised learning methods for reducing the labeling effort for spoken language understanding. In a goal-oriented call routing system, understanding the intent of the user can be framed as a classification problem. State of the art statistical classification systems are trained using a large number of human-labeled utterances, preparation of which is labor intensive and time consuming. Active learning aims to minimize the number of labeled utterances by automatically selecting the utterances that are likely to be most informative for labeling. The method for active learning we propose, inspired by certainty-based active learning, selects the examples that the classifier is the least confident about. The examples that are classified with higher confidence scores (hence not selected by active learning) are exploited using two semi-supervised learning methods. The first method augments the training data by using the machine-labeled classes for the unlabeled utterances. The second method instead augments the classification model trained using the human-labeled utterances with the machine-labeled ones in a weighted manner. We then combine active and semi-supervised learning using selectively sampled and automatically labeled data. This enables us to exploit all collected data and alleviates the data imbalance problem caused by employing only active or semi-supervised learning. We have evaluated these active and semi-supervised learning methods with a call classification system used for AT&T customer care. Our results indicate that it is possible to reduce human labeling effort significantly. 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "4eaa8c1af7a4f6f6c9de1e6de3f2495f",
"text": "Technologies to support the Internet of Things are becoming more important as the need to better understand our environments and make them smart increases. As a result it is predicted that intelligent devices and networks, such as WSNs, will not be isolated, but connected and integrated, composing computer networks. So far, the IP-based Internet is the largest network in the world; therefore, there are great strides to connect WSNs with the Internet. To this end, the IETF has developed a suite of protocols and open standards for accessing applications and services for wireless resource constrained networks. However, many open challenges remain, mostly due to the complex deployment characteristics of such systems and the stringent requirements imposed by various services wishing to make use of such complex systems. Thus, it becomes critically important to study how the current approaches to standardization in this area can be improved, and at the same time better understand the opportunities for the research community to contribute to the IoT field. To this end, this article presents an overview of current standards and research activities in both industry and academia.",
"title": ""
},
{
"docid": "08bef09a01414bafcbc778fea85a7c0a",
"text": "The use.of energy-minimizing curves, known as “snakes,” to extract features of interest in images has been introduced by Kass, Witkhr & Terzopoulos (Znt. J. Comput. Vision 1, 1987,321-331). We present a model of deformation which solves some of the problems encountered with the original method. The external forces that push the curve to the edges are modified to give more stable results. The original snake, when it is not close enough to contours, is not attracted by them and straightens to a line. Our model makes the curve behave like a balloon which is inflated by an additional force. The initial curve need no longer be close to the solution to converge. The curve passes over weak edges and is stopped only if the edge is strong. We give examples of extracting a ventricle in medical images. We have also made a first step toward 3D object reconstruction, by tracking the extracted contour on a series of successive cross sections.",
"title": ""
}
] |
scidocsrr
|
bde7b86f912c0b9f51107f1cdafd9552
|
Unsupervised Random Walk Sentence Embeddings: A Strong but Simple Baseline
|
[
{
"docid": "f5f56d680fbecb94a08d9b8e5925228f",
"text": "Semantic word embeddings represent the meaning of a word via a vector, and are created by diverse methods. Many use nonlinear operations on co-occurrence statistics, and have hand-tuned hyperparameters and reweighting methods. This paper proposes a new generative model, a dynamic version of the log-linear topic model of Mnih and Hinton (2007). The methodological novelty is to use the prior to compute closed form expressions for word statistics. This provides a theoretical justification for nonlinear models like PMI, word2vec, and GloVe, as well as some hyperparameter choices. It also helps explain why low-dimensional semantic embeddings contain linear algebraic structure that allows solution of word analogies, as shown by Mikolov et al. (2013a) and many subsequent papers. Experimental support is provided for the generative model assumptions, the most important of which is that latent word vectors are fairly uniformly dispersed in space.",
"title": ""
},
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
}
] |
[
{
"docid": "beb1c8ba8809d1ac409584bea1495654",
"text": "Multimodal information processing has received considerable attention in recent years. The focus of existing research in this area has been predominantly on the use of fusion technology. In this paper, we suggest that cross-modal association can provide a new set of powerful solutions in this area. We investigate different cross-modal association methods using the linear correlation model. We also introduce a novel method for cross-modal association called Cross-modal Factor Analysis (CFA). Our earlier work on Latent Semantic Indexing (LSI) is extended for applications that use off-line supervised training. As a promising research direction and practical application of cross-modal association, cross-modal information retrieval where queries from one modality are used to search for content in another modality using low-level features is then discussed in detail. Different association methods are tested and compared using the proposed cross-modal retrieval system. All these methods achieve significant dimensionality reduction. Among them CFA gives the best retrieval performance. Finally, this paper addresses the use of cross-modal association to detect talking heads. The CFA method achieves 91.1% detection accuracy, while LSI and Canonical Correlation Analysis (CCA) achieve 66.1% and 73.9% accuracy, respectively. As shown by experiments, cross-modal association provides many useful benefits, such as robust noise resistance and effective feature selection. Compared to CCA and LSI, the proposed CFA shows several advantages in analysis performance and feature usage. Its capability in feature selection and noise resistance also makes CFA a promising tool for many multimedia analysis applications.",
"title": ""
},
{
"docid": "c6b1ad47687dbd86b28a098160f406bb",
"text": "The development of a 10-item self-report scale (EPDS) to screen for Postnatal Depression in the community is described. After extensive pilot interviews a validation study was carried out on 84 mothers using the Research Diagnostic Criteria for depressive illness obtained from Goldberg's Standardised Psychiatric Interview. The EPDS was found to have satisfactory sensitivity and specificity, and was also sensitive to change in the severity of depression over time. The scale can be completed in about 5 minutes and has a simple method of scoring. The use of the EPDS in the secondary prevention of Postnatal Depression is discussed.",
"title": ""
},
{
"docid": "246bbb92bc968d20866b8c92a10f8ac7",
"text": "This survey paper provides an overview of content-based music information retrieval systems, both for audio and for symbolic music notation. Matching algorithms and indexing methods are briefly presented. The need for a TREC-like comparison of matching algorithms such as MIREX at ISMIR becomes clear from the high number of quite different methods which so far only have been used on different data collections. We placed the systems on a map showing the tasks and users for which they are suitable, and we find that existing content-based retrieval systems fail to cover a gap between the very general and the very specific retrieval tasks.",
"title": ""
},
{
"docid": "8518dc45e3b0accfc551111489842359",
"text": "PURPOSE\nRobot-assisted surgery has been rapidly adopted in the U.S. for prostate cancer. Its adoption has been driven by market forces and patient preference, and debate continues regarding whether it offers improved outcomes to justify the higher cost relative to open surgery. We examined the comparative effectiveness of robot-assisted vs open radical prostatectomy in cancer control and survival in a nationally representative population.\n\n\nMATERIALS AND METHODS\nThis population based observational cohort study of patients with prostate cancer undergoing robot-assisted radical prostatectomy and open radical prostatectomy during 2003 to 2012 used data captured in the SEER (Surveillance, Epidemiology, and End Results)-Medicare linked database. Propensity score matching and time to event analysis were used to compare all cause mortality, prostate cancer specific mortality and use of additional treatment after surgery.\n\n\nRESULTS\nA total of 6,430 robot-assisted radical prostatectomies and 9,161 open radical prostatectomies performed during 2003 to 2012 were identified. The use of robot-assisted radical prostatectomy increased from 13.6% in 2003 to 2004 to 72.6% in 2011 to 2012. After a median followup of 6.5 years (IQR 5.2-7.9) robot-assisted radical prostatectomy was associated with an equivalent risk of all cause mortality (HR 0.85, 0.72-1.01) and similar cancer specific mortality (HR 0.85, 0.50-1.43) vs open radical prostatectomy. Robot-assisted radical prostatectomy was also associated with less use of additional treatment (HR 0.78, 0.70-0.86).\n\n\nCONCLUSIONS\nRobot-assisted radical prostatectomy has comparable intermediate cancer control as evidenced by less use of additional postoperative cancer therapies and equivalent cancer specific and overall survival. Longer term followup is needed to assess for differences in prostate cancer specific survival, which was similar during intermediate followup. Our findings have significant quality and cost implications, and provide reassurance regarding the adoption of more expensive technology in the absence of randomized controlled trials.",
"title": ""
},
{
"docid": "a63f9b27e27393bb432198f18c3d89e1",
"text": "Accounting information system had been widely used by many organizations to automate and integrate their business operations .The main objective s of many businesses to adopt this system are to improve their business efficiency and increase competitiveness. The qualitative characteristic of any Accounting Information System can be maintained if there is a sound internal control system. Internal control is run to ensure the achievement of operational goals and performance. Therefore the purpose of this study is to examine the efficiency of Accounting Information System on performance measures using the secondary data in which it was found that accounting information system is of great importance to both businesses and organization in which it helps in facilitating management decision making, internal controls ,quality of the financial report ,and it facilitates the company’s transaction and it also plays an important role in economic system, and the study recommends that businesses, firms and organization should adopt the use of AIS because adequate accounting information is essential for every effective decision making process and adequate information is possible if accounting information systems are run efficiently also, efficient Accounting Information Systems ensures that all levels of management get sufficient, adequate, relevant and true information for planning and controlling activities of the business organization.",
"title": ""
},
{
"docid": "7963adab39b58ab0334b8eef4149c59c",
"text": "The aim of the present study was to gain a better understanding of the content characteristics that make online consumer reviews a useful source of consumer information. To this end, we content analyzed reviews of experience and search products posted on Amazon.com (N = 400). The insights derived from this content analysis were linked with the proportion of ‘useful’ votes that reviews received from fellow consumers. The results show that content characteristics are paramount to understanding the perceived usefulness of reviews. Specifically, argumentation (density and diversity) served as a significant predictor of perceived usefulness, as did review valence although this latter effect was contingent on the type of product (search or experience) being evaluated in reviews. The presence of expertise claims appeared to be weakly related to the perceived usefulness of reviews. The broader theoretical, methodological and practical implications of these findings are discussed.",
"title": ""
},
{
"docid": "df0381c129339b1131897708fc00a96c",
"text": "We present a novel congestion control algorithm suitable for use with cumulative, layered data streams in the MBone. Our algorithm behaves similarly to TCP congestion control algorithms, and shares bandwidth fairly with other instances of the protocol and with TCP flows. It is entirely receiver driven and requires no per-receiver status at the sender, in order to scale to large numbers of receivers. It relies on standard functionalities of multicast routers, and is suitable for continuous stream and reliable bulk data transfer. In the paper we illustrate the algorithm, characterize its response to losses both analytically and by simulations, and analyse its behaviour using simulations and experiments in real networks. We also show how error recovery can be dealt with independently from congestion control by using FEC techniques, so as to provide reliable bulk data transfer.",
"title": ""
},
{
"docid": "65a7e691f8bb6831c269cf5770271325",
"text": "Seven types of evidence are reviewed that indicate that high subjective wellbeing (such as life satisfaction, absence of negative emotions, optimism, and positive emotions) causes better health and longevity. For example, prospective longitudinal studies of normal populations provide evidence that various types of subjective well-being such as positive affect predict health and longevity, controlling for health and socioeconomic status at baseline. Combined with experimental human and animal research, as well as naturalistic studies of changes of subjective well-being and physiological processes over time, the case that subjective well-being influences health and longevity in healthy populations is compelling. However, the claim that subjective well-being lengthens the lives of those with certain diseases such as cancer remains controversial. Positive feelings predict longevity and health beyond negative feelings. However, intensely aroused or manic positive affect may be detrimental to health. Issues such as causality, effect size, types of subjective well-being, and statistical controls are discussed.",
"title": ""
},
{
"docid": "c64d9727c98e8c5cdbb3445918eb32c7",
"text": "This paper describes an industrial project aimed at migrating legacy COBOL programs running on an IBM-AS400 to Java for running in an open environment. The unique aspect of this migration is the reengineering of the COBOL code prior to migration. The programs were in their previous form hardwired to the AS400 screens as well as to the AS400 file system. The goal of the reengineering project was to free the code from these proprietary dependencies and to reduce them to the pure business logic. Disentangling legacy code from it's physical environment is a major prerequisite to converting that code to another environment. The goal is the virtualization of program interfaces. That was accomplished here in a multistep automated process which led to small, environment independent COBOL modules which could be readily converted over into Java packages. The pilot project has been completed for a sample subset of the production planning and control system. The conversion to Java is pending the test of the reengineered COBOL modules.",
"title": ""
},
{
"docid": "6a1f1345a390ff886c95a57519535c40",
"text": "BACKGROUND\nThe goal of this pilot study was to evaluate the effects of the cognitive-restructuring technique 'lucid dreaming treatment' (LDT) on chronic nightmares. Becoming lucid (realizing that one is dreaming) during a nightmare allows one to alter the nightmare storyline during the nightmare itself.\n\n\nMETHODS\nAfter having filled out a sleep and a posttraumatic stress disorder questionnaire, 23 nightmare sufferers were randomly divided into 3 groups; 8 participants received one 2-hour individual LDT session, 8 participants received one 2-hour group LDT session, and 7 participants were placed on the waiting list. LDT consisted of exposure, mastery, and lucidity exercises. Participants filled out the same questionnaires 12 weeks after the intervention (follow-up).\n\n\nRESULTS\nAt follow-up the nightmare frequency of both treatment groups had decreased. There were no significant changes in sleep quality and posttraumatic stress disorder symptom severity. Lucidity was not necessary for a reduction in nightmare frequency.\n\n\nCONCLUSIONS\nLDT seems effective in reducing nightmare frequency, although the primary therapeutic component (i.e. exposure, mastery, or lucidity) remains unclear.",
"title": ""
},
{
"docid": "092239f41a6e216411174e5ed9dceee2",
"text": "In this paper, we propose a simple but effective specular highlight removal method using a single input image. Our method is based on a key observation the maximum fraction of the diffuse color component (so called maximum diffuse chromaticity in the literature) in local patches in color images changes smoothly. Using this property, we can estimate the maximum diffuse chromaticity values of the specular pixels by directly applying low-pass filter to the maximum fraction of the color components of the original image, such that the maximum diffuse chromaticity values can be propagated from the diffuse pixels to the specular pixels. The diffuse color at each pixel can then be computed as a nonlinear function of the estimated maximum diffuse chromaticity. Our method can be directly extended for multi-color surfaces if edge-preserving filters (e.g., bilateral filter) are used such that the smoothing can be guided by the maximum diffuse chromaticity. But maximum diffuse chromaticity is to be estimated. We thus present an approximation and demonstrate its effectiveness. Recent development in fast bilateral filtering techniques enables our method to run over 200× faster than the state-of-the-art on a standard CPU and differentiates our method from previous work.",
"title": ""
},
{
"docid": "a49c8e6f222b661447d1de32e29d0f16",
"text": "The discovery of ammonia oxidation by mesophilic and thermophilic Crenarchaeota and the widespread distribution of these organisms in marine and terrestrial environments indicated an important role for them in the global nitrogen cycle. However, very little is known about their physiology or their contribution to nitrification. Here we report oligotrophic ammonia oxidation kinetics and cellular characteristics of the mesophilic crenarchaeon ‘Candidatus Nitrosopumilus maritimus’ strain SCM1. Unlike characterized ammonia-oxidizing bacteria, SCM1 is adapted to life under extreme nutrient limitation, sustaining high specific oxidation rates at ammonium concentrations found in open oceans. Its half-saturation constant (Km = 133 nM total ammonium) and substrate threshold (≤10 nM) closely resemble kinetics of in situ nitrification in marine systems and directly link ammonia-oxidizing Archaea to oligotrophic nitrification. The remarkably high specific affinity for reduced nitrogen (68,700 l per g cells per h) of SCM1 suggests that Nitrosopumilus-like ammonia-oxidizing Archaea could successfully compete with heterotrophic bacterioplankton and phytoplankton. Together these findings support the hypothesis that nitrification is more prevalent in the marine nitrogen cycle than accounted for in current biogeochemical models.",
"title": ""
},
{
"docid": "b32286014bb7105e62fba85a9aab9019",
"text": "PURPOSE\nSystemic thrombolysis for the treatment of acute pulmonary embolism (PE) carries an estimated 20% risk of major hemorrhage, including a 3%-5% risk of hemorrhagic stroke. The authors used evidence-based methods to evaluate the safety and effectiveness of modern catheter-directed therapy (CDT) as an alternative treatment for massive PE.\n\n\nMATERIALS AND METHODS\nThe systematic review was initiated by electronic literature searches (MEDLINE, EMBASE) for studies published from January 1990 through September 2008. Inclusion criteria were applied to select patients with acute massive PE treated with modern CDT. Modern techniques were defined as the use of low-profile devices (< or =10 F), mechanical fragmentation and/or aspiration of emboli including rheolytic thrombectomy, and intraclot thrombolytic injection if a local drug was infused. Relevant non-English language articles were translated into English. Paired reviewers assessed study quality and abstracted data. Meta-analysis was performed by using random effects models to calculate pooled estimates for complications and clinical success rates across studies. Clinical success was defined as stabilization of hemodynamics, resolution of hypoxia, and survival to hospital discharge.\n\n\nRESULTS\nFive hundred ninety-four patients from 35 studies (six prospective, 29 retrospective) met the criteria for inclusion. The pooled clinical success rate from CDT was 86.5% (95% confidence interval [CI]: 82.1%, 90.2%). Pooled risks of minor and major procedural complications were 7.9% (95% CI: 5.0%, 11.3%) and 2.4% (95% CI: 1.9%, 4.3%), respectively. Data on the use of systemic thrombolysis before CDT were available in 571 patients; 546 of those patients (95%) were treated with CDT as the first adjunct to heparin without previous intravenous thrombolysis.\n\n\nCONCLUSIONS\nModern CDT is a relatively safe and effective treatment for acute massive PE. At experienced centers, CDT should be considered as a first-line treatment for patients with massive PE.",
"title": ""
},
{
"docid": "f1131f6f25601c32fefc09c38c7ad84b",
"text": "We create a new online reduction of multiclass classification to binary classification for which training and prediction time scale logarithmically with the number of classes. We show that several simple techniques give rise to an algorithm which is superior to previous logarithmic time classification approaches while competing with one-against-all in space. The core construction is based on using a tree to select a small subset of labels with high recall, which are then scored using a one-against-some structure with high precision.",
"title": ""
},
{
"docid": "1c90adf8ec68ff52e777b2041f8bf4c4",
"text": "In many situations we have some measurement of confidence on “positiveness” for a binary label. The “positiveness” is a continuous value whose range is a bounded interval. It quantifies the affiliation of each training data to the positive class. We propose a novel learning algorithm called expectation loss SVM (eSVM) that is devoted to the problems where only the “positiveness” instead of a binary label of each training sample is available. Our e-SVM algorithm can also be readily extended to learn segment classifiers under weak supervision where the exact positiveness value of each training example is unobserved. In experiments, we show that the e-SVM algorithm can effectively address the segment proposal classification task under both strong supervision (e.g. the pixel-level annotations are available) and the weak supervision (e.g. only bounding-box annotations are available), and outperforms the alternative approaches. Besides, we further validate this method on two major tasks of computer vision: semantic segmentation and object detection. Our method achieves the state-of-the-art object detection performance on PASCAL VOC 2007 dataset.",
"title": ""
},
{
"docid": "c955e63d5c5a30e18c008dcc51d1194b",
"text": "We report, for the first time, the identification of fatty acid particles in formulations containing the surfactant polysorbate 20. These fatty acid particles were observed in multiple mAb formulations during their expected shelf life under recommended storage conditions. The fatty acid particles were granular or sand-like in morphology and were several microns in size. They could be identified by distinct IR bands, with additional confirmation from energy-dispersive X-ray spectroscopy analysis. The particles were readily distinguishable from protein particles by these methods. In addition, particles containing a mixture of protein and fatty acids were also identified, suggesting that the particulation pathways for the two particle types may not be distinct. The techniques and observations described will be useful for the correct identification of proteinaceous versus nonproteinaceous particles in pharmaceutical products.",
"title": ""
},
{
"docid": "02469f669769f5c9e2a9dc49cee20862",
"text": "In this work we study the use of 3D hand poses to recognize first-person dynamic hand actions interacting with 3D objects. Towards this goal, we collected RGB-D video sequences comprised of more than 100K frames of 45 daily hand action categories, involving 26 different objects in several hand configurations. To obtain hand pose annotations, we used our own mo-cap system that automatically infers the 3D location of each of the 21 joints of a hand model via 6 magnetic sensors and inverse kinematics. Additionally, we recorded the 6D object poses and provide 3D object models for a subset of hand-object interaction sequences. To the best of our knowledge, this is the first benchmark that enables the study of first-person hand actions with the use of 3D hand poses. We present an extensive experimental evaluation of RGB-D and pose-based action recognition by 18 baselines/state-of-the-art approaches. The impact of using appearance features, poses, and their combinations are measured, and the different training/testing protocols are evaluated. Finally, we assess how ready the 3D hand pose estimation field is when hands are severely occluded by objects in egocentric views and its influence on action recognition. From the results, we see clear benefits of using hand pose as a cue for action recognition compared to other data modalities. Our dataset and experiments can be of interest to communities of 3D hand pose estimation, 6D object pose, and robotics as well as action recognition.",
"title": ""
},
{
"docid": "96471eda3162fa5bdac40220646e7697",
"text": "A key step in mass spectrometry (MS)-based proteomics is the identification of peptides in sequence databases by their fragmentation spectra. Here we describe Andromeda, a novel peptide search engine using a probabilistic scoring model. On proteome data, Andromeda performs as well as Mascot, a widely used commercial search engine, as judged by sensitivity and specificity analysis based on target decoy searches. Furthermore, it can handle data with arbitrarily high fragment mass accuracy, is able to assign and score complex patterns of post-translational modifications, such as highly phosphorylated peptides, and accommodates extremely large databases. The algorithms of Andromeda are provided. Andromeda can function independently or as an integrated search engine of the widely used MaxQuant computational proteomics platform and both are freely available at www.maxquant.org. The combination enables analysis of large data sets in a simple analysis workflow on a desktop computer. For searching individual spectra Andromeda is also accessible via a web server. We demonstrate the flexibility of the system by implementing the capability to identify cofragmented peptides, significantly improving the total number of identified peptides.",
"title": ""
},
{
"docid": "595e68cfcf7b2606f42f2ad5afb9713a",
"text": "Mammalian hibernators undergo a remarkable phenotypic switch that involves profound changes in physiology, morphology, and behavior in response to periods of unfavorable environmental conditions. The ability to hibernate is found throughout the class Mammalia and appears to involve differential expression of genes common to all mammals, rather than the induction of novel gene products unique to the hibernating state. The hibernation season is characterized by extended bouts of torpor, during which minimal body temperature (Tb) can fall as low as -2.9 degrees C and metabolism can be reduced to 1% of euthermic rates. Many global biochemical and physiological processes exploit low temperatures to lower reaction rates but retain the ability to resume full activity upon rewarming. Other critical functions must continue at physiologically relevant levels during torpor and be precisely regulated even at Tb values near 0 degrees C. Research using new tools of molecular and cellular biology is beginning to reveal how hibernators survive repeated cycles of torpor and arousal during the hibernation season. Comprehensive approaches that exploit advances in genomic and proteomic technologies are needed to further define the differentially expressed genes that distinguish the summer euthermic from winter hibernating states. Detailed understanding of hibernation from the molecular to organismal levels should enable the translation of this information to the development of a variety of hypothermic and hypometabolic strategies to improve outcomes for human and animal health.",
"title": ""
},
{
"docid": "8869e69647a16278d7a2ac26316ec5d0",
"text": "Despite significant progress, most existing visual dictionary learning methods rely on image descriptors alone or together with class labels. However, Web images are often associated with text data which may carry substantial information regarding image semantics, and may be exploited for visual dictionary learning. This paper explores this idea by leveraging relational information between image descriptors and textual words via co-clustering, in addition to information of image descriptors. Existing co-clustering methods are not optimal for this problem because they ignore the structure of image descriptors in the continuous space, which is crucial for capturing visual characteristics of images. We propose a novel Bayesian co-clustering model to jointly estimate the underlying distributions of the continuous image descriptors as well as the relationship between such distributions and the textual words through a unified Bayesian inference. Extensive experiments on image categorization and retrieval have validated the substantial value of the proposed joint modeling in improving visual dictionary learning, where our model shows superior performance over several recent methods.",
"title": ""
}
] |
scidocsrr
|
9a6cf37a84603190818d14ce86bde4ed
|
A Knowledge-Intensive Model for Prepositional Phrase Attachment
|
[
{
"docid": "3ac2f2916614a4e8f6afa1c31d9f704d",
"text": "This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%.",
"title": ""
}
] |
[
{
"docid": "e342178b5c8ee8a48add15fefa0ef5f8",
"text": "A new scheme is proposed for the dual-band operation of the Wilkinson power divider/combiner. The dual band operation is achieved by attaching two central transmission line stubs to the conventional Wilkinson divider. It has simple structure and is suitable for distributed circuit implementation.",
"title": ""
},
{
"docid": "5e2b8d3ed227b71869550d739c61a297",
"text": "Dairy cattle experience a remarkable shift in metabolism after calving, after which milk production typically increases so rapidly that feed intake alone cannot meet energy requirements (Bauman and Currie, 1980; Baird, 1982). Cows with a poor adaptive response to negative energy balance may develop hyperketonemia (ketosis) in early lactation. Cows that develop ketosis in early lactation lose milk yield and are at higher risk for other postpartum diseases and early removal from the herd.",
"title": ""
},
{
"docid": "91c0bd1c3faabc260277c407b7c6af59",
"text": "In this paper, we consider the Direct Perception approach for autonomous driving. Previous efforts in this field focused more on feature extraction of the road markings and other vehicles in the scene rather than on the autonomous driving algorithm and its performance under realistic assumptions. Our main contribution in this paper is introducing a new, more robust, and more realistic Direct Perception framework and corresponding algorithm for autonomous driving. First, we compare the top 3 Convolutional Neural Networks (CNN) models in the feature extraction competitions and test their performance for autonomous driving. The experimental results showed that GoogLeNet performs the best in this application. Subsequently, we propose a deep learning based algorithm for autonomous driving, and we refer to our algorithm as GoogLenet for Autonomous Driving (GLAD). Unlike previous efforts, GLAD makes no unrealistic assumptions about the autonomous vehicle or its surroundings, and it uses only five affordance parameters to control the vehicle as compared to the 14 parameters used by prior efforts. Our simulation results show that the proposed GLAD algorithm outperforms previous Direct Perception algorithms both on empty roads and while driving with other surrounding vehicles.",
"title": ""
},
{
"docid": "bfdfac980d1629f85f5bd57705b11b19",
"text": "Deduplication is an approach of avoiding storing data blocks with identical content, and has been shown to effectively reduce the disk space for storing multi-gigabyte virtual machine (VM) images. However, it remains challenging to deploy deduplication in a real system, such as a cloud platform, where VM images are regularly inserted and retrieved. We propose LiveDFS, a live deduplication file system that enables deduplication storage of VM images in an open-source cloud that is deployed under low-cost commodity hardware settings with limited memory footprints. LiveDFS has several distinct features, including spatial locality, prefetching of metadata, and journaling. LiveDFS is POSIXcompliant and is implemented as a Linux kernel-space file system. We deploy our LiveDFS prototype as a storage layer in a cloud platform based on OpenStack, and conduct extensive experiments. Compared to an ordinary file system without deduplication, we show that LiveDFS can save at least 40% of space for storing VM images, while achieving reasonable performance in importing and retrieving VM images. Our work justifies the feasibility of deploying LiveDFS in an open-source cloud.",
"title": ""
},
{
"docid": "774690eaef2d293320df0c162f44af95",
"text": "Having a long historical past in traditional Chinese medicine, Ganoderma Lucidum (G. Lucidum) is a type of mushroom believed to extend life and promote health. Due to the increasing consumption pattern, it has been cultivated and marketed intensively since the 1970s. It is claimed to be effective in the prevention and treatment of many diseases, and in addition, it exerts anticancer properties. Almost all the data on the benefits of G. Lucidum are based on laboratory and preclinical studies. The few clinical studies conducted are questionable. Nevertheless, when the findings obtained from laboratory studies are considered, it turns that G. Lucidum is likely to have some benefits for cancer patients. What is important at this point is to determine the components that will provide these benefits, and use them in drug development, after testing their reliability. In conclusion, it would be the right approach to abstain from using and incentivizing this product, until its benefits and harms are set out clearly, by considering its potential side effects.",
"title": ""
},
{
"docid": "9563b47a73e41292599c368e1dfcd40a",
"text": "Non-functional requirements are an important, and often critical, aspect of any software system. However, determining the degree to which any particular software system meets such requirements and incorporating such considerations into the software design process is a difficult challenge. This paper presents a modification of the NFR framework that allows for the discovery of a set of system functionalities that optimally satisfice a given set of non-functional requirements. This new technique introduces an adaptation of softgoal interdependency graphs, denoted softgoal interdependency ruleset graphs, in which label propagation can be done consistently. This facilitates the use of optimisation algorithms to determine the best set of bottom-level operationalizing softgoals that optimally satisfice the highest-level NFR softgoals. The proposed method also introduces the capacity to incorporate both qualitative and quantitative information.",
"title": ""
},
{
"docid": "242977c8b2a5768b18fc276309407d60",
"text": "We present a parser that relies primarily on extracting information directly from surface spans rather than on propagating information through enriched grammar structure. For example, instead of creating separate grammar symbols to mark the definiteness of an NP, our parser might instead capture the same information from the first word of the NP. Moving context out of the grammar and onto surface features can greatly simplify the structural component of the parser: because so many deep syntactic cues have surface reflexes, our system can still parse accurately with context-free backbones as minimal as Xbar grammars. Keeping the structural backbone simple and moving features to the surface also allows easy adaptation to new languages and even to new tasks. On the SPMRL 2013 multilingual constituency parsing shared task (Seddah et al., 2013), our system outperforms the top single parser system of Björkelund et al. (2013) on a range of languages. In addition, despite being designed for syntactic analysis, our system also achieves stateof-the-art numbers on the structural sentiment task of Socher et al. (2013). Finally, we show that, in both syntactic parsing and sentiment analysis, many broad linguistic trends can be captured via surface features.",
"title": ""
},
{
"docid": "9180fe4fc7020bee9a52aa13de3adf54",
"text": "A new Depth Image Layers Separation (DILS) algorithm for synthesizing inter-view images based on disparity depth map layers representation is presented. The approach is to separate the depth map into several layers identified through histogram-based clustering. Each layer is extracted using inter-view interpolation to create objects based on location and depth. DILS is a new paradigm in selecting interesting image locations based on depth, but also in producing new image representations that allow objects or parts of an image to be described without the need of segmentation and identification. The image view synthesis can reduce the configuration complexity of multi-camera arrays in 3D imagery and free-viewpoint applications. The simulation results show that depth layer separation is able to create inter-view images that may be integrated with other techniques such as occlusion handling processes. The DILS algorithm can be implemented using both simple as well as sophisticated stereo matching methods to synthesize inter-view images.",
"title": ""
},
{
"docid": "9e9c55a81d6fe980515c9b93dfe0d265",
"text": "Single-cell RNA-seq has become routine for discovering cell types and revealing cellular diversity, but archived human brain samples still pose a challenge to current high-throughput platforms. We present STRT-seq-2i, an addressable 9600-microwell array platform, combining sampling by limiting dilution or FACS, with imaging and high throughput at competitive cost. We applied the platform to fresh single mouse cortical cells and to frozen post-mortem human cortical nuclei, matching the performance of a previous lower-throughput platform while retaining a high degree of flexibility, potentially also for other high-throughput applications.",
"title": ""
},
{
"docid": "4f509a4fdc6bbffa45c214bc9267ea79",
"text": "Memory units have been widely used to enrich the capabilities of deep networks on capturing long-term dependencies in reasoning and prediction tasks, but little investigation exists on deep generative models (DGMs) which are good at inferring high-level invariant representations from unlabeled data. This paper presents a deep generative model with a possibly large external memory and an attention mechanism to capture the local detail information that is often lost in the bottom-up abstraction process in representation learning. By adopting a smooth attention model, the whole network is trained end-to-end by optimizing a variational bound of data likelihood via auto-encoding variational Bayesian methods, where an asymmetric recognition network is learnt jointly to infer high-level invariant representations. The asymmetric architecture can reduce the competition between bottom-up invariant feature extraction and top-down generation of instance details. Our experiments on several datasets demonstrate that memory can significantly boost the performance of DGMs on various tasks, including density estimation, image generation, and missing value imputation, and DGMs with memory can achieve state-ofthe-art quantitative results.",
"title": ""
},
{
"docid": "67d8680a41939c58a866f684caa514a3",
"text": "Triboelectric effect works on the principle of triboelectrification and electrostatic induction. This principle is used to generate voltage by converting mechanical energy into electrical energy. This paper presents the charging behavior of different capacitors by rubbing of two different materials using mechanical motion. The numerical and simulation modeling, describes the charging performance of a TENG with a bridge rectifier. It is also demonstrated that a 10 μF capacitor can be charged to a maximum of 24.04 volt in 300 seconds and it is also provide 2800 μJ/cm3 maximum energy density. Such system can be used for ultralow power electronic devices, biomedical devices and self-powered appliances etc.",
"title": ""
},
{
"docid": "6e016c311f77963be6ba7ec6e29a44f0",
"text": "Unmanned Aerial Vehicles (UAVs) have been recently considered as means to provide enhanced coverage or relaying services to mobile users (MUs) in wireless systems with limited or no infrastructure. In this paper, a UAV-based mobile cloud computing system is studied in which a moving UAV is endowed with computing capabilities to offer computation offloading opportunities to MUs with limited local processing capabilities. The system aims at minimizing the total mobile energy consumption while satisfying quality of service requirements of the offloaded mobile application. Offloading is enabled by uplink and downlink communications between the mobile devices and the UAV that take place by means of frequency division duplex (FDD) via orthogonal or non-orthogonal multiple access (NOMA) schemes. The problem of jointly optimizing the bit allocation for uplink and downlink communication as well as for computing at the UAV, along with the cloudlet’s trajectory under latency and UAV’s energy budget constraints is formulated and addressed by leveraging successive convex approximation (SCA) strategies. Numerical results demonstrate the significant energy savings that can be accrued by means of the proposed joint optimization of bit allocation and cloudlet’s trajectory as compared to local mobile execution as well as to partial optimization approaches that design only the bit allocation or the cloudlet’s trajectory.",
"title": ""
},
{
"docid": "df9d74df931a596b7025150d11a18364",
"text": "In recent years, ''gamification'' has been proposed as a solution for engaging people in individually and socially sustainable behaviors, such as exercise, sustainable consumption, and education. This paper studies demographic differences in perceived benefits from gamification in the context of exercise. On the basis of data gathered via an online survey (N = 195) from an exercise gamification service Fitocracy, we examine the effects of gender, age, and time using the service on social, hedonic, and utilitarian benefits and facilitating features of gamifying exercise. The results indicate that perceived enjoyment and usefulness of the gamification decline with use, suggesting that users might experience novelty effects from the service. The findings show that women report greater social benefits from the use of gamification. Further, ease of use of gamification is shown to decline with age. The implications of the findings are discussed. The question of how we understand gamer demographics and gaming behaviors, along with use cultures of different demographic groups, has loomed over the last decade as games became one of the main veins of entertainment and consumer culture (Yi, 2004). The deeply established perception of games being a field of entertainment dominated by young males has been challenged. Nowadays, digital gaming is a mainstream activity with broad demographics. The gender divide has been diminishing, the age span has been widening, and the average age is higher than An illustrative study commissioned by PopCap (Information Solutions Group, 2011) reveals that it is actually women in their 30s and 40s who play the popular social games on social networking services (see e.g. most – outplaying men and younger people. It is clear that age and gender perspectives on gaming activities and motivations require further scrutiny. The expansion of the game industry and the increased competition within the field has also led to two parallel developments: (1) using game design as marketing (Hamari & Lehdonvirta, 2010) and (2) gamification – going beyond what traditionally are regarded as games and implementing game design there often for the benefit of users. For example, services such as Mindbloom, Fitocracy, Zombies, Run!, and Nike+ are aimed at assisting the user toward beneficial behavior related to lifestyle and health choices. However, it is unclear whether we can see age and gender discrepancies in use of gamified services similar to those in other digital gaming contexts. The main difference between games and gamifica-tion is that gamification is commonly …",
"title": ""
},
{
"docid": "5dde43ab080f516c0b485fcd951bf9e1",
"text": "Differential privacy is a framework to quantify to what extent individual privacy in a statistical database is preserved while releasing useful aggregate information about the database. In this paper, within the classes of mechanisms oblivious of the database and the queriesqueries beyond the global sensitivity, we characterize the fundamental tradeoff between privacy and utility in differential privacy, and derive the optimal ϵ-differentially private mechanism for a single realvalued query function under a very general utility-maximization (or cost-minimization) framework. The class of noise probability distributions in the optimal mechanism has staircase-shaped probability density functions which are symmetric (around the origin), monotonically decreasing and geometrically decaying. The staircase mechanism can be viewed as a geometric mixture of uniform probability distributions, providing a simple algorithmic description for the mechanism. Furthermore, the staircase mechanism naturally generalizes to discrete query output settings as well as more abstract settings. We explicitly derive the parameter of the optimal staircase mechanism for ℓ<sup>1</sup> and ℓ<sup>2</sup> cost functions. Comparing the optimal performances with those of the usual Laplacian mechanism, we show that in the high privacy regime (ϵ is small), the Laplacian mechanism is asymptotically optimal as ϵ → 0; in the low privacy regime (ϵ is large), the minimum magnitude and second moment of noise are Θ(Δe<sup>(-ϵ/2)</sup>) and Θ(Δ<sup>2</sup>e<sup>(-2ϵ/3)</sup>) as ϵ → +∞, respectively, while the corresponding figures when using the Laplacian mechanism are Δ/ϵ and 2Δ<sup>2</sup>/ϵ<sup>2</sup>, where Δ is the sensitivity of the query function. We conclude that the gains of the staircase mechanism are more pronounced in the moderate-low privacy regime.",
"title": ""
},
{
"docid": "a7bbf188c7219ff48af391a5f8b140b8",
"text": "The paper presents the results of studies concerning the designation of COD fraction in raw wastewater. The research was conducted in three mechanical-biological sewage treatment plants. The results were compared with data assumed in the ASM models. During the investigation, the following fractions of COD were determined: dissolved non-biodegradable SI, dissolved easily biodegradable SS, in organic suspension slowly degradable XS, and in organic suspension non-biodegradable XI. The methodology for determining the COD fraction was based on the ATVA 131guidelines. The real concentration of fractions in raw wastewater and the percentage of each fraction in total COD are different from data reported in the literature.",
"title": ""
},
{
"docid": "e56276ed066369ffce7fe882dfde70f8",
"text": "In this paper we present a deep learning architecture for extracting word embeddings for visual speech recognition. The embeddings summarize the information of the mouth region that is relevant to the problem of word recognition, while suppressing other types of variability such as speaker, pose and illumination. The system is comprised of a spatiotemporal convolutional layer, a Residual Network and bidirectional LSTMs and is trained on the Lipreading in-the-wild database. We first show that the proposed architecture goes beyond state-of-the-art on closed-set word identification, by attaining 11.92% error rate on a vocabulary of 500 words. We then examine the capacity of the embeddings in modelling words unseen during training. We deploy Probabilistic Linear Discriminant Analysis (PLDA) to model the embeddings and perform low-shot learning experiments on words unseen during training. The experiments demonstrate that word-level visual speech recognition is feasible even in cases where the target words are not included in the training set.",
"title": ""
},
{
"docid": "d5758c68110a604c7af4a68faba32d1d",
"text": "Two experiments explore the validity of conceptualizing musical beats as auditory structural features and the potential for increases in tempo to lead to greater sympathetic arousal, measured using skin conductance. In the first experiment, fastand slow-paced rock and classical music excerpts were compared to silence. As expected, skin conductance response (SCR) frequency was greater during music processing than during silence. Skin conductance level (SCL) data showed that fast-paced music elicits greater activation than slow-paced music. Genre significantly interacted with tempo in SCR frequency, with faster tempo increasing activation for classical music while decreasing it for rock music. A second experiment was conducted to explore the possibility that the presumed familiarity of the genre led to this interaction. Although further evidence was found for conceptualizing musical beat onsets as auditory structure, the familiarity explanation was not supported. Music Effects on Arousal 2 Effects of Music Genre and Tempo on Physiological Arousal Music communicates many different types of messages through the combination of sound and lyric (Sellnow & Sellnow, 2001). For example, music can be used to exchange political information (e.g., Frith, 1981; Stewart, Smith, & Denton, 1989). Music can also establish and portray a selfor group-image (Arnett, 1991, 1992; Dehyle, 1998; Kendall & Carterette, 1990; Dillman Carpentier, Knobloch & Zillmann, 2003; Manuel, 1991; McLeod, 1999; see also Hansen & Hansen, 2000). Pertinent to this investigation, music can communicate emotional information (e.g., Juslin & Sloboda, 2001). In short, music is a form of “interhuman communication in which humanly organized, non-verbal sound is perceived as vehiculating primarily affective (emotional) and/or gestural (corporeal) patterns of cognition” (Tagg, 2002, p. 5). This idea of music as communication reaches the likes of audio production students, who are taught the concept of musical underscoring, or adding music to “enhance information or emotional content” in a wide variety of ways from establishing a specific locale to intensifying action (Alten, 2005, p. 360). In this realm, music becomes a key instrument in augmenting or punctuating a given message. Given the importance of arousal and/or activation in most theories of persuasion and information processing, an understanding of how music can be harnessed to instill arousal is arguably of benefit to media producers looking to utilize every possible tool when creating messages, whether the messages are commercial appeals, promotional announcements or disease-prevention messages. It is with the motivation of harnessing the psychological response to music for practical application that two experiments were conducted to test whether message creators can rely on musical tempo as a way to increase sympathetic nervous system Music Effects on Arousal 3 activation in a manner similar to other structural features of media (i.e., cuts, edits, sound effects, voice changes). Before explaining the original work, a brief description of the current state of the literature on music and emotion is offered. Different Approaches in Music Psychology Although there is little doubt that music ‘vehiculates’ emotion, several debates exist within the music psychology literature about exactly how that process is best conceptualized and empirically approached (e.g., Bever, 1988; Gaver & Mandler, 1987; Juslin & Sloboda, 2001; Lundin, 1985; Sloboda, 1991). 
The primary conceptual issue revolves around two different schools of thought (Scherer & Zentner, 2001). The first, the cognitivist approach, describes emotional response to music as resulting from the listener’s cognitive recognition of cues within the composition itself. Emotivists, on the other hand, eliminate the cognitive calculus required by cue recognition in the score, instead describing emotional response to music as a feeling of emotion. Although both approaches acknowledge a cultural or social influence in how the music is interpreted (e.g., Krumhansl, 1997; Peretz, 2001), the conceptual chasm between emotion as being either expressed or elicited by a piece of music is wide indeed. A second issue in the area of music psychology concerns a difference in the empirical approach present among emotion scholars writ large. Some focus their explorations on specific, discrete affective states (i.e., joy, fear, disgust, etc.), often labeled as the experience of basic emotions (Ortony et al., 1988; Thayer, 1989; Zajonc, 1980). Communication scholars such as Nabi (1999, 2003) and Newhagen (1998) have also found it fruitful to explore unique affective states resulting from mediated messages, driven by the understanding that “each emotion expresses a different relational meaning Music Effects on Arousal 4 that motivates the use of mental and/or physical resources in ways consistent with the emotion’s action tendency” (Nabi, 2003, p. 226; also see Wirth & Schramm, 2005 for review). This approach is also well represented by studies exploring human reactions to music (see Juslin & Laukka, 2003 for review). Other emotion scholars design studies where the focus is placed not on the discrete identifier assigned to a certain feeling-state by a listener, but rather the extent to which different feeling-states share common factors or dimensions. The two most commonly studied dimensions are valence—a term given to the relative positive/negative hedonic value, and arousal—the intensity or level to which that hedonic value is experienced. The centrality of these two dimensions in the published literature is due to the consistency with which they account for the largest amount of predictive variance across a wide variety of dependent variables (Osgood, Suci & Tannenbuam, 1957; Bradley, 1994; Reisenzein, 1994). This dimensional approach to emotional experience is well-represented by articles in the communication literature exploring the combined impact of valence and arousal on memory (Lang, Bolls, Potter & Kawahara, 1999; Sundar & Kalyanaraman, 2004), liking (Yoon, Bolls, & Lang, 1998), and persuasive appeal (Yoon et al., 1998; Potter, LaTour, Braun-LaTour & Reichert, 2006). When surveying the music psychology literature for studies utilizing the dimensional emotions approach, however, results show that the impact of music on hedonic valence are difficult to consistently predict—arguably due to contextual, experiential or mood-state influences of the listener combined with interpretational differences of the song composers and performers (Bigand, Filipic, & Lalitte, 2005; Cantor & Zillmann, 1973; Gabrielsson & Lindström, 2001; Kendall & Carterette, 1990; Leman, 2003; Lundin, 1985). Music Effects on Arousal 5 On the other hand, the measured effects of music on the arousal dimension, while not uniform, are more consistent across studies (see Scherer & Zentner, 2001). 
For example, numerous experiments have noted the relaxation potential of music—either using compositions pre-tested as relaxing or self-identified by research participants as such. In Bartlett’s (1996) review of music studies using physiological measures, a majority of studies measuring muscle tension found relaxing music to reduce it. Interestingly, slightly more than half of the studies that measured skin temperature found relaxing music to increase it. Pelletier (2004) went beyond reviewing studies individually, conducting a statistical meta-analysis of 22 experiments. Conclusions showed that music alone, as well as used in tandem with relaxation techniques, significantly decreased perceived arousal and physiological activation. However, the amount of decrease significantly varied by age, stressor, musical preference, and previous music experience of the participant. These caveats provide possible explanations for the few inconsistent findings across individual studies that show either little or no effects of relaxing music (e.g., Davis-Rollans & Cunningham, 1987; Robb, Nichols, Rutan, & Bishop, et al., 1995; Strauser, 1997; see Standley, 1991 for review) or that show listening to relaxing music yields higher perceived arousal compared to the absence of music (Davis & Thaut, 1989). Burns, Labbé, Williams, and McCall (1999) relied on both self-report and physiological responses to the musical selections to explore music’s ability to generate states of relaxation. The researchers used a predetermined classical excerpt, a predetermined rock excerpt, an excerpt from a “relaxing” selection chosen by each participant, and a condition of sitting in silence. Burns et al. (1999) found that, within Music Effects on Arousal 6 groups, both finger temperature and skin conductance decreased over time. Across emotional conditions, self-reported relaxation was lowest for rock listeners and highest for participants in the self-selection and silence conditions. However, no significant between-group physiological differences were found. Rickard (2004) also combined self-reports of emotional impact, enjoyment, and familiarity with psychophysiological measures in evaluating arousal effects of music. Psychophysiological measures included skin conductance responses, chills, skin temperature, and muscle tension. Stimuli included relaxing music, music predetermined to be arousing but not emotionally powerful, self-selected emotionally-powerful music, and an emotionally-powerful film scene. Rickard found that music participants had selfidentified as emotionally powerful led to the greatest increases in skin conductance and chills, in addition to higher ratings on the self-reported measures. No correlation was found between these effects and participant gender or musical training. Krumhansl (1997) explored how music affects the peripheral nervous system in eliciting emotions in college-aged music students. Classical music selections approximately 180-seconds long were chosen which expressed sadness, happiness or fear. While listening, ha",
"title": ""
},
{
"docid": "46fb354d3c85325312fe4e03d998632c",
"text": "Driver distraction has been identified as a highpriority topic by the National Highway Traffic Safety Administration, reflecting concerns about the compatibility of certain in-vehicle technologies with the driving task, whether drivers are making potentially dangerous decisions about when to interact with invehicle technologies while driving, and that these trends may accelerate as new technologies continue to become available. Since 1991, NHTSA has conducted research to understand the factors that contribute to driver distraction and to develop methods to assess the extent to which in-vehicle technologies may contribute to crashes. This paper summarizes significant findings from past NHTSA research in the area of driver distraction and workload, provides an overview of current ongoing research, and describes upcoming research that will be conducted, including research using the National Advanced Driving Simulator and work to be conducted at NHTSA’s Vehicle Research and Test Center. Preliminary results of the ongoing research are also presented.",
"title": ""
},
{
"docid": "27c125643ffc8f1fee7ed5ee22025c01",
"text": "In this paper we establish rigorous benchmarks for image classifier robustness. Our first benchmark, IMAGENET-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications. Then we propose a new dataset called IMAGENET-P which enables researchers to benchmark a classifier’s robustness to common perturbations. Unlike recent robustness research, this benchmark evaluates performance on common corruptions and perturbations not worst-case adversarial perturbations. We find that there are negligible changes in relative corruption robustness from AlexNet classifiers to ResNet classifiers. Afterward we discover ways to enhance corruption and perturbation robustness. We even find that a bypassed adversarial defense provides substantial common perturbation robustness. Together our benchmarks may aid future work toward networks that robustly generalize.",
"title": ""
},
{
"docid": "96029f6daa55fff7a76ab9bd48ebe7b9",
"text": "According to the principle of compositionality, the meaning of a sentence is computed from the meaning of its parts and the way they are syntactically combined. In practice, however, the syntactic structure is computed by automatic parsers which are far-from-perfect and not tuned to the specifics of the task. Current recursive neural network (RNN) approaches for computing sentence meaning therefore run into a number of practical difficulties, including the need to carefully select a parser appropriate for the task, deciding how and to what extent syntactic context modifies the semantic composition function, as well as on how to transform parse trees to conform to the branching settings (typically, binary branching) of the RNN. This paper introduces a new model, the Forest Convolutional Network, that avoids all of these challenges, by taking a parse forest as input, rather than a single tree, and by allowing arbitrary branching factors. We report improvements over the state-of-the-art in sentiment analysis and question classification.",
"title": ""
}
] |
scidocsrr
|
6482a8af53ac20d4bd6148d63200ed3c
|
Design a novel electronic medical record system for regional clinics and health centers in China
|
[
{
"docid": "8ae8cb422f0f79031b8e19e49b857356",
"text": "CSCW as a field has been concerned since its early days with healthcare, studying how healthcare work is collaboratively and practically achieved and designing systems to support that work. Reviewing literature from the CSCW Journal and related conferences where CSCW work is published, we reflect on the contributions that have emerged from this work. The analysis illustrates a rich range of concepts and findings towards understanding the work of healthcare but the work on the larger policy level is lacking. We argue that this presents a number of challenges for CSCW research moving forward: in having a greater impact on larger-scale health IT projects; broadening the scope of settings and perspectives that are studied; and reflecting on the relevance of the traditional methods in this field - namely workplace studies - to meet these challenges.",
"title": ""
}
] |
[
{
"docid": "94784bc9f04dbe5b83c2a9f02e005825",
"text": "The optical code division multiple access (OCDMA), the most advanced multiple access technology in optical communication has become significant and gaining popularity because of its asynchronous access capability, faster speed, efficiency, security and unlimited bandwidth. Many codes are developed in spectral amplitude coding optical code division multiple access (SAC-OCDMA) with zero or minimum cross-correlation properties to reduce the multiple access interference (MAI) and Phase Induced Intensity Noise (PIIN). This paper compares two novel SAC-OCDMA codes in terms of their performances such as bit error rate (BER), number of active users that is accommodated with minimum cross-correlation property, high data rate that is achievable and the minimum power that the OCDMA system supports to achieve a minimum BER value. One of the proposed novel codes referred in this work as modified random diagonal code (MRDC) possesses cross-correlation between zero to one and the second novel code referred in this work as modified new zero cross-correlation code (MNZCC) possesses cross-correlation zero to further minimize the multiple access interference, which are found to be more scalable compared to the other existing SAC-OCDMA codes. In this work, the proposed MRDC and MNZCC codes are implemented in an optical system using the optisystem version-12 software for the SAC-OCDMA scheme. Simulation results depict that the OCDMA system based on the proposed novel MNZCC code exhibits better performance compared to the MRDC code and former existing SAC-OCDMA codes. The proposed MNZCC code accommodates maximum number of simultaneous users with higher data rate transmission, lower BER and longer traveling distance without any signal quality degradation as compared to the former existing SAC-OCDMA codes.",
"title": ""
},
{
"docid": "3f88da8f70976c11bf5bab5f1d438d58",
"text": "The task of the Emotion Recognition in the Wild (EmotiW) Challenge is to assign one of seven emotions to short video clips extracted from Hollywood style movies. The videos depict acted-out emotions under realistic conditions with a large degree of variation in attributes such as pose and illumination, making it worthwhile to explore approaches which consider combinations of features from multiple modalities for label assignment. In this paper we present our approach to learning several specialist models using deep learning techniques, each focusing on one modality. Among these are a convolutional neural network, focusing on capturing visual information in detected faces, a deep belief net focusing on the representation of the audio stream, a K-Means based “bag-of-mouths” model, which extracts visual features around the mouth region and a relational autoencoder, which addresses spatio-temporal aspects of videos. We explore multiple methods for the combination of cues from these modalities into one common classifier. This achieves a considerably greater accuracy than predictions from our strongest single-modality classifier. Our method was the winning submission in the 2013 EmotiW challenge and achieved a test set accuracy of 47.67 % on the 2014 dataset.",
"title": ""
},
{
"docid": "57fd4b59ffb27c35faa6a5ee80001756",
"text": "This paper describes a novel method for motion generation and reactive collision avoidance. The algorithm performs arbitrary desired velocity profiles in absence of external disturbances and reacts if virtual or physical contact is made in a unified fashion with a clear physically interpretable behavior. The method uses physical analogies for defining attractor dynamics in order to generate smooth paths even in presence of virtual and physical objects. The proposed algorithm can, due to its low complexity, run in the inner most control loop of the robot, which is absolutely crucial for safe Human Robot Interaction. The method is thought as the locally reactive real-time motion generator connecting control, collision detection and reaction, and global path planning.",
"title": ""
},
{
"docid": "0923e899e5d7091a6da240db21eefad2",
"text": "A new method was developed to acquire images automatically at a series of specimen tilts, as required for tomographic reconstruction. The method uses changes in specimen position at previous tilt angles to predict the position at the current tilt angle. Actual measurement of the position or focus is skipped if the statistical error of the prediction is low enough. This method allows a tilt series to be acquired rapidly when conditions are good but falls back toward the traditional approach of taking focusing and tracking images when necessary. The method has been implemented in a program, SerialEM, that provides an efficient environment for data acquisition. This program includes control of an energy filter as well as a low-dose imaging mode, in which tracking and focusing occur away from the area of interest. The program can automatically acquire a montage of overlapping frames, allowing tomography of areas larger than the field of the CCD camera. It also includes tools for navigating between specimen positions and finding regions of interest.",
"title": ""
},
{
"docid": "ccfb2821c51a2fad5b34c3037497cb66",
"text": "The next decade will see a deep transformation of industrial applications by big data analytics, machine learning and the internet of things. Industrial applications have a number of unique features, setting them apart from other domains. Central for many industrial applications in the internet of things is time series data generated by often hundreds or thousands of sensors at a high rate, e.g. by a turbine or a smart grid. In a first wave of applications this data is centrally collected and analyzed in Map-Reduce or streaming systems for condition monitoring, root cause analysis, or predictive maintenance. The next step is to shift from centralized analysis to distributed in-field or in situ analytics, e.g in smart cities or smart grids. The final step will be a distributed, partially autonomous decision making and learning in massively distributed environments. In this talk I will give an overview on Siemens’ journey through this transformation, highlight early successes, products and prototypes and point out future challenges on the way towards machine intelligence. I will also discuss architectural challenges for such systems from a Big Data point of view. Bio.Michael May is Head of the Technology Field Business Analytics & Monitoring at Siemens Corporate Technology, Munich, and responsible for eleven research groups in Europe, US, and Asia. Michael is driving research at Siemens in data analytics, machine learning and big data architectures. In the last two years he was responsible for creating the Sinalytics platform for Big Data applications across Siemens’ business. Before joining Siemens in 2013, Michael was Head of the Knowledge Discovery Department at the Fraunhofer Institute for Intelligent Analysis and Information Systems in Bonn, Germany. In cooperation with industry he developed Big Data Analytics applications in sectors ranging from telecommunication, automotive, and retail to finance and advertising. Between 2002 and 2009 Michael coordinated two Europe-wide Data Mining Research Networks (KDNet, KDubiq). He was local chair of ICML 2005, ILP 2005 and program chair of the ECML PKDD Industrial Track 2015. Michael did his PhD on machine discovery of causal relationships at the Graduate Programme for Cognitive Science at the University of Hamburg. Machine Learning Challenges at Amazon",
"title": ""
},
{
"docid": "d07a10da23e0fc18b473f8a30adaebfb",
"text": "DATA FLOW IS A POPULAR COMPUTATIONAL MODEL for visual programming languages. Data flow provides a view of computation which shows the data flowing from one filter function to another, being transformed as it goes. In addition, the data flow model easily accomodates the insertion of viewing monitors at various points to show the data to the user. Consequently, many recent visual programming languages are based on the data flow model. This paper describes many of the data flow visual programming languages. The languages are grouped according to their application domain. For each language, pertinent aspects of its appearance, and the particular design alternatives it uses, are discussed. Next, some strengths of data flow visual programming languages are mentioned. Finally, unsolved problems in the design of such languages are discussed.",
"title": ""
},
{
"docid": "89263084f29469d1c363da55c600a971",
"text": "Today when there are more than 1 billion Android users all over the world, it shows that its popularity has no equal. These days mobile phones have become so intrusive in our daily lives that when they needed can give huge amount of information to forensic examiners. Till the date of writing this paper there are many papers citing the need of mobile device forensic and ways of getting the vital artifacts through mobile devices for different purposes. With vast options of popular and less popular forensic tools and techniques available today, this papers aims to bring them together under a comparative study so that this paper could serve as a starting point for several android users, future forensic examiners and investigators. During our survey we found scarcity for papers on tools for android forensic. In this paper we have analyzed different tools and techniques used in android forensic and at the end tabulated the results and findings.",
"title": ""
},
{
"docid": "762855af09c1f80ec85d6de63223bc53",
"text": "In this paper, we propose a framework for isolating text regions from natural scene images. The main algorithm has two functions: it generates text region candidates, and it verifies of the label of the candidates (text or non-text). The text region candidates are generated through a modified K-means clustering algorithm, which references texture features, edge information and color information. The candidate labels are then verified in a global sense by the Markov Random Field model where collinearity weight is added as long as most texts are aligned. The proposed method achieves reasonable accuracy for text extraction from moderately difficult examples from the ICDAR 2003 database.",
"title": ""
},
{
"docid": "8e3f8fca93ca3106b83cf85d20c061ca",
"text": "KeeLoq is a 528-round lightweight block cipher which has a 64-bit secret key and a 32-bit block length. The cube attack, proposed by Dinur and Shamir, is a new type of attacking method. In this paper, we investigate the security of KeeLoq against iterative side-channel cube attack which is an enhanced attack scheme. Based on structure of typical block ciphers, we give the model of iterative side-channel cube attack. Using the traditional single-bit leakage model, we assume that the attacker can exactly possess the information of one bit leakage after round 23. The new attack model costs a data complexity of 211.00 chosen plaintexts to recover the 23-bit key of KeeLoq. Our attack will reduce the key searching space to 241 by considering an error-free bit from internal states.",
"title": ""
},
{
"docid": "852c85ecbed639ea0bfe439f69fff337",
"text": "In information theory, Fisher information and Shannon information (entropy) are respectively used to quantify the uncertainty associated with the distribution modeling and the uncertainty in specifying the outcome of given variables. These two quantities are complementary and are jointly applied to information behavior analysis in most cases. The uncertainty property in information asserts a fundamental trade-off between Fisher information and Shannon information, which enlightens us the relationship between the encoder and the decoder in variational auto-encoders (VAEs). In this paper, we investigate VAEs in the FisherShannon plane, and demonstrate that the representation learning and the log-likelihood estimation are intrinsically related to these two information quantities. Through extensive qualitative and quantitative experiments, we provide with a better comprehension of VAEs in tasks such as highresolution reconstruction, and representation learning in the perspective of Fisher information and Shannon information. We further propose a variant of VAEs, termed as Fisher auto-encoder (FAE), for practical needs to balance Fisher information and Shannon information. Our experimental results have demonstrated its promise in improving the reconstruction accuracy and avoiding the non-informative latent code as occurred in previous works.",
"title": ""
},
{
"docid": "838701b64b27fe1d65bd23a124ebcef7",
"text": "OBJECTIVES\nInternet can accelerate information exchange. Social networks are the most accessed especially Facebook. This kind of networks might create dependency with several negative consequences in people's life. The aim of this study was to assess potential association between Facebook dependence and poor sleep quality.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nA cross sectional study was performed enrolling undergraduate students of the Universidad Peruana de Ciencias Aplicadas, Lima, Peru. The Internet Addiction Questionnaire, adapted to the Facebook case, and the Pittsburgh Sleep Quality Index, were used. A global score of 6 or greater was defined as the cutoff to determine poor sleep quality. Generalized linear model were used to determine prevalence ratios (PR) and 95% confidence intervals (95%CI). A total of 418 students were analyzed; of them, 322 (77.0%) were women, with a mean age of 20.1 (SD: 2.5) years. Facebook dependence was found in 8.6% (95% CI: 5.9%-11.3%), whereas poor sleep quality was present in 55.0% (95% CI: 50.2%-59.8%). A significant association between Facebook dependence and poor sleep quality mainly explained by daytime dysfunction was found (PR = 1.31; IC95%: 1.04-1.67) after adjusting for age, sex and years in the faculty.\n\n\nCONCLUSIONS\nThere is a relationship between Facebook dependence and poor quality of sleep. More than half of students reported poor sleep quality. Strategies to moderate the use of this social network and to improve sleep quality in this population are needed.",
"title": ""
},
{
"docid": "deed8b565b77f92d91170c001b512e96",
"text": "We introduce a novel humanoid robotic platform designed to jointly address three central goals of humanoid robotics: 1) study the role of morphology in biped locomotion; 2) study full-body compliant physical human-robot interaction; 3) be robust while easy and fast to duplicate to facilitate experimentation. The taken approach relies on functional modeling of certain aspects of human morphology, optimizing materials and geometry, as well as on the use of 3D printing techniques. In this article, we focus on the presentation of the design of specific morphological parts related to biped locomotion: the hip, the thigh, the limb mesh and the knee. We present initial experiments showing properties of the robot when walking with the physical guidance of a human.",
"title": ""
},
{
"docid": "122fe53f1e745480837a23b68e62540a",
"text": "The images degraded by fog suffer from poor contrast. In order to remove fog effect, a Contrast Limited Adaptive Histogram Equalization (CLAHE)-based method is presented in this paper. This method establishes a maximum value to clip the histogram and redistributes the clipped pixels equally to each gray-level. It can limit the noise while enhancing the image contrast. In our method, firstly, the original image is converted from RGB to HSI. Secondly, the intensity component of the HSI image is processed by CLAHE. Finally, the HSI image is converted back to RGB image. To evaluate the effectiveness of the proposed method, we experiment with a color image degraded by fog and apply the edge detection to the image. The results show that our method is effective in comparison with traditional methods. KeywordsCLAHE, fog, degraded, remove, color image, HSI, edge detection.",
"title": ""
},
{
"docid": "f060713abe9ada73c1c4521c5ca48ea9",
"text": "In this paper, we revisit the classical Bayesian face recognition method by Baback Moghaddam et al. and propose a new joint formulation. The classical Bayesian method models the appearance difference between two faces. We observe that this “difference” formulation may reduce the separability between classes. Instead, we model two faces jointly with an appropriate prior on the face representation. Our joint formulation leads to an EM-like model learning at the training time and an efficient, closed-formed computation at the test time. On extensive experimental evaluations, our method is superior to the classical Bayesian face and many other supervised approaches. Our method achieved 92.4% test accuracy on the challenging Labeled Face in Wild (LFW) dataset. Comparing with current best commercial system, we reduced the error rate by 10%.",
"title": ""
},
{
"docid": "391949a4c924c9f8e1986e4747e571c4",
"text": "In this paper, we present Auto-Tuned Models, or ATM, a distributed, collaborative, scalable system for automated machine learning. Users of ATM can simply upload a dataset, choose a subset of modeling methods, and choose to use ATM's hybrid Bayesian and multi-armed bandit optimization system. The distributed system works in a load-balanced fashion to quickly deliver results in the form of ready-to-predict models, confusion matrices, cross-validation results, and training timings. By automating hyperparameter tuning and model selection, ATM returns the emphasis of the machine learning workflow to its most irreducible part: feature engineering. We demonstrate the usefulness of ATM on 420 datasets from OpenML and train over 3 million classifiers. Our initial results show ATM can beat human-generated solutions for 30% of the datasets, and can do so in 1/100th of the time.",
"title": ""
},
{
"docid": "861f76c061b9eb52ed5033bdeb9a3ce5",
"text": "2007S. Robson Walton Chair in Accounting, University of Arkansas 2007-2014; 2015-2016 Accounting Department Chair, University of Arkansas 2014Distinguished Professor, University of Arkansas 2005-2014 Professor, University of Arkansas 2005-2008 Ralph L. McQueen Chair in Accounting, University of Arkansas 2002-2005 Associate Professor, University of Kansas 1997-2002 Assistant Professor, University of Kansas",
"title": ""
},
{
"docid": "76984b82e44f5790aa72f03f3804c588",
"text": "LANGUAGE ASSISTANT (NLA), a web-based natural language dialog system to help users find relevant products on electronic-commerce sites. The system brings together technologies in natural language processing and human-computer interaction to create a faster and more intuitive way of interacting with web sites. By combining statistical parsing techniques with traditional AI rule-based technology, we have created a dialog system that accommodates both customer needs and business requirements. The system is currently embedded in an application for recommending laptops and was deployed as a pilot on IBM’s web site.",
"title": ""
},
{
"docid": "ae9bc4e21d6e2524f09e5f5fbb9e4251",
"text": "Arvaniti, Ladd and Mennen (1998) reported a phenomenon of ‘segmental anchoring’: the beginning and end of a linguistically significant pitch movement are anchored to specific locations in segmental structure, which means that the slope and duration of the pitch movement vary according to the segmental material with which it is associated. This finding has since been replicated and extended in several languages. One possible analysis is that autosegmental tones corresponding to the beginning and end of the pitch movement show secondary association with points in structure; however, problems with this analysis have led some authors to cast doubt on the ‘hypothesis’ of segmental anchoring. I argue here that segmental anchoring is not a hypothesis expressed in terms of autosegmental phonology, but rather an empirical phonetic finding. The difficulty of describing segmental anchoring as secondary association does not disprove the ‘hypothesis’, but shows the error of using a symbolic phonological device (secondary association) to represent gradient differences of phonetic detail that should be expressed quantitatively. I propose that treating pitch movements as gestures (in the sense of Articulatory Phonology) goes some way to resolving some of the theoretical questions raised by segmental anchoring, but suggest that pitch gestures have a variety of ‘domains’ which are in need of empirical study before we can successfully integrate segmental anchoring into our understanding of speech production.",
"title": ""
},
{
"docid": "8eb62d4fdc1be402cd9216352cb7cfc3",
"text": "In an attempt to better understand generalization in deep learning, we study several possible explanations. We show that implicit regularization induced by the optimization method is playing a key role in generalization and success of deep learning models. Motivated by this view, we study how different complexity measures can ensure generalization and explain how optimization algorithms can implicitly regularize complexity measures. We empirically investigate the ability of these measures to explain different observed phenomena in deep learning. We further study the invariances in neural networks, suggest complexity measures and optimization algorithms that have similar invariances to those in neural networks and evaluate them on a number of learning tasks. Thesis Advisor: Nathan Srebro Title: Professor",
"title": ""
}
] |
scidocsrr
|
35f4230303b83f4c900b204e08f2b72b
|
SERS detection of arsenic in water: A review.
|
[
{
"docid": "908f862dea52cd9341d2127928baa7de",
"text": "Arsenic's history in science, medicine and technology has been overshadowed by its notoriety as a poison in homicides. Arsenic is viewed as being synonymous with toxicity. Dangerous arsenic concentrations in natural waters is now a worldwide problem and often referred to as a 20th-21st century calamity. High arsenic concentrations have been reported recently from the USA, China, Chile, Bangladesh, Taiwan, Mexico, Argentina, Poland, Canada, Hungary, Japan and India. Among 21 countries in different parts of the world affected by groundwater arsenic contamination, the largest population at risk is in Bangladesh followed by West Bengal in India. Existing overviews of arsenic removal include technologies that have traditionally been used (oxidation, precipitation/coagulation/membrane separation) with far less attention paid to adsorption. No previous review is available where readers can get an overview of the sorption capacities of both available and developed sorbents used for arsenic remediation together with the traditional remediation methods. We have incorporated most of the valuable available literature on arsenic remediation by adsorption ( approximately 600 references). Existing purification methods for drinking water; wastewater; industrial effluents, and technological solutions for arsenic have been listed. Arsenic sorption by commercially available carbons and other low-cost adsorbents are surveyed and critically reviewed and their sorption efficiencies are compared. Arsenic adsorption behavior in presence of other impurities has been discussed. Some commercially available adsorbents are also surveyed. An extensive table summarizes the sorption capacities of various adsorbents. Some low-cost adsorbents are superior including treated slags, carbons developed from agricultural waste (char carbons and coconut husk carbons), biosorbents (immobilized biomass, orange juice residue), goethite and some commercial adsorbents, which include resins, gels, silica, treated silica tested for arsenic removal come out to be superior. Immobilized biomass adsorbents offered outstanding performances. Desorption of arsenic followed by regeneration of sorbents has been discussed. Strong acids and bases seem to be the best desorbing agents to produce arsenic concentrates. Arsenic concentrate treatment and disposal obtained is briefly addressed. This issue is very important but much less discussed.",
"title": ""
}
] |
[
{
"docid": "62e900f89427e4b97f64919a3cb0d537",
"text": "This paper introduces the SpamBayes classification engine and outlines the most important features and techniques which contribute to its success. The importance of using the indeterminate ‘unsure’ classification produced by the chi-squared combining technique is explained. It outlines a Robinson/Woodhead/Peters technique of ‘tiling’ unigrams and bigrams to produce better results than relying solely on either or other methods of using both unigrams and bigrams. It discusses methods of training the classifier, and evaluates the success of different methods. The paper focuses on highlighting techniques that might aid other classification systems rather than attempting to demonstrate the effectiveness of the SpamBayes classification engine.",
"title": ""
},
{
"docid": "9a9fd442bc7353d9cd202e9ace6e6580",
"text": "The idea of developmental dyspraxia has been discussed in the research literature for almost 100 years. However, there continues to be a lack of consensus regarding both the definition and description of this disorder. This paper presents a neuropsychologically based operational definition of developmental dyspraxia that emphasizes that developmental dyspraxia is a disorder of gesture. Research that has investigated the development of praxis is discussed. Further, different types of gestural disorders displayed by children and different mechanisms that underlie developmental dyspraxia are compared to and contrasted with adult acquired apraxia. The impact of perceptual-motor, language, and cognitive impairments on children's gestural development and the possible associations between these developmental disorders and developmental dyspraxia are also examined. Also, the relationship among limb, orofacial, and verbal dyspraxia is discussed. Finally, problems that exist in the neuropsychological assessment of developmental dyspraxia are discussed and recommendations concerning what should be included in such an assessment are presented.",
"title": ""
},
{
"docid": "a5306ca9a50e82e07d487d1ac7603074",
"text": "Many modern visual recognition algorithms incorporate a step of spatial ‘pooling’, where the outputs of several nearby feature detectors are combined into a local or global ‘bag of features’, in a way that preserves task-related information while removing irrelevant details. Pooling is used to achieve invariance to image transformations, more compact representations, and better robustness to noise and clutter. Several papers have shown that the details of the pooling operation can greatly influence the performance, but studies have so far been purely empirical. In this paper, we show that the reasons underlying the performance of various pooling methods are obscured by several confounding factors, such as the link between the sample cardinality in a spatial pool and the resolution at which low-level features have been extracted. We provide a detailed theoretical analysis of max pooling and average pooling, and give extensive empirical comparisons for object recognition tasks.",
"title": ""
},
{
"docid": "842e7c5b825669855617133b0067efc9",
"text": "This research proposes a robust method for disc localization and cup segmentation that incorporates masking to avoid misclassifying areas as well as forming the structure of the cup based on edge detection. Our method has been evaluated using two fundus image datasets, namely: D-I and D-II comprising of 60 and 38 images, respectively. The proposed method of disc localization achieves an average Fscore of 0.96 and average boundary distance of 7.7 for D-I, and 0.96 and 9.1, respectively, for D-II. The cup segmentation method attains an average Fscore of 0.88 and average boundary distance of 13.8 for D-I, and 0.85 and 18.0, respectively, for D-II. The estimation errors (mean ± standard deviation) of our method for the value of vertical cup-to-disc diameter ratio against the result of the boundary by the expert of DI and D-II have similar value, namely 0.04 ± 0.04. Overall, the result of ourmethod indicates its robustness for glaucoma evaluation. B Anindita Septiarini anindita.septiarini@gmail.com Agus Harjoko aharjoko@ugm.ac.id Reza Pulungan pulungan@ugm.ac.id Retno Ekantini rekantini@ugm.ac.id 1 Department of Computer Science and Electronics, Faculty of Mathematics and Natural Sciences, Universitas Gadjah Mada, Yogyakarta 55281, Indonesia 2 Faculty of Medicine, Universitas Gadjah Mada, Yogyakarta 55281, Indonesia 3 Department of Computer Science, Mulawarman University, Samarinda 75123, Indonesia",
"title": ""
},
{
"docid": "abdd1406266d7290166eb16b8a5045a9",
"text": "Individualized manufacturing of cars requires kitting: the collection of individual sets of part variants for each car. This challenging logistic task is frequently performed manually by warehouseman. We propose a mobile manipulation robotic system for autonomous kitting, building on the Kuka Miiwa platform which consists of an omnidirectional base, a 7 DoF collaborative iiwa manipulator, cameras, and distance sensors. Software modules for detection and pose estimation of transport boxes, part segmentation in these containers, recognition of part variants, grasp generation, and arm trajectory optimization have been developed and integrated. Our system is designed for collaborative kitting, i.e. some parts are collected by warehouseman while other parts are picked by the robot. To address safe human-robot collaboration, fast arm trajectory replanning considering previously unforeseen obstacles is realized. The developed system was evaluated in the European Robotics Challenge 2, where the Miiwa robot demonstrated autonomous kitting, part variant recognition, and avoidance of unforeseen obstacles.",
"title": ""
},
{
"docid": "1a58f72cd0f6e979a72dbc233e8c4d4a",
"text": "The revolution of genome sequencing is continuing after the successful second-generation sequencing (SGS) technology. The third-generation sequencing (TGS) technology, led by Pacific Biosciences (PacBio), is progressing rapidly, moving from a technology once only capable of providing data for small genome analysis, or for performing targeted screening, to one that promises high quality de novo assembly and structural variation detection for human-sized genomes. In 2014, the MinION, the first commercial sequencer using nanopore technology, was released by Oxford Nanopore Technologies (ONT). MinION identifies DNA bases by measuring the changes in electrical conductivity generated as DNA strands pass through a biological pore. Its portability, affordability, and speed in data production makes it suitable for real-time applications, the release of the long read sequencer MinION has thus generated much excitement and interest in the genomics community. While de novo genome assemblies can be cheaply produced from SGS data, assembly continuity is often relatively poor, due to the limited ability of short reads to handle long repeats. Assembly quality can be greatly improved by using TGS long reads, since repetitive regions can be easily expanded into using longer sequencing lengths, despite having higher error rates at the base level. The potential of nanopore sequencing has been demonstrated by various studies in genome surveillance at locations where rapid and reliable sequencing is needed, but where resources are limited.",
"title": ""
},
{
"docid": "8f0630e009fdab34a77db9780850f0f0",
"text": "A wireless power transfer (WPT) using inductive coupling for mobile phone charger is studied. The project is offer to study and fabricate solar WPT using inductive coupling for mobile phone charger that will give more information about distance is effect for WPT performance and WPT is not much influenced by the presence of hands, books and types of plastics. The components used to build wireless power transfer can be divided into 3 parts components, the transceiver for power transmission, the inductive coils in this case as the antenna, receiver and the rectifier which act convert AC to DC. Experiments have been conducted and the wireless power transfer using inductive coupling is suitable to be implemented for mobile phone charger.",
"title": ""
},
{
"docid": "39cb45c62b83a40f8ea42cb872a7aa59",
"text": "Levy flights are employed in a lattice model of contaminant migration by bioturbation, the reworking of sediment by benthic organisms. The model couples burrowing, foraging, and conveyor-belt feeding with molecular diffusion. The model correctly predicts a square-root dependence on bioturbation rates over a wide range of biomass densities. The model is used to predict the effect of bioturbation on the redistribution of contaminants in laboratory microcosms containing pyrene-inoculated sediments and the tubificid oligochaete Limnodrilus hoffmeisteri. The model predicts the dynamic flux from the sediment and in-bed concentration profiles that are consistent with observations. The sensitivity of flux and concentration profiles to the specific mechanisms of bioturbation are explored with the model. The flux of pyrene to the overlying water was largely controlled by the simulated foraging activities.",
"title": ""
},
{
"docid": "ca5ad8301e3a37a6d2749bb27ede1d7a",
"text": "Data and connectivity between users form the core of social networks. Every status, post, friendship, tweet, re-tweet, tag or image generates a massive amount of structured and unstructured data. Deriving meaning from this data and, in particular, extracting behavior and emotions of individual users, as well as of user communities, is the goal of sentiment analysis and affective computing and represents a significant challenge. Social networks also represent a potentially infinite source of applications for both research and commercial purposes and are adaptable to many different areas, including life science. Nevertheless, collecting, sharing, storing and analyzing social networks data pose several challenges to computer scientists, such as the management of highly unstructured data, big data, and the need for real-time computation. In this paper we give a brief overview of some concrete examples of applying sentiment analysis to social networks for healthcare purposes, we present the current type of tools existing for sentiment analysis, and summarize the challenges involved in this process focusing on the role of high performance computing.",
"title": ""
},
{
"docid": "24ce878b5cb0c7ff62ea8e29cc7a237c",
"text": "The energy-efficient tracking and precise localization of continuous objects have long been key issues in research on wireless sensor networks (WSNs). Among various techniques, significant results are reported from applying a clustering-based object tracking technique, which benefits the energy-efficient and stable network in large-scale WSNs. As of now, during the consideration of large-scale WSNs, a continuous object is tracked by using a static clustering-based approach. However, due to the restriction of global information sharing among static clusters, tracking at the boundary region is a challenging issue. This paper presents a complete tracking and localization algorithm in WSNs. Considering the limitation of static clusters, an energy-efficient incremental clustering algorithm followed by Gaussian adaptive resonance theory is proposed at the boundary region. The proposed research is allowed to learn, create, update, and retain clusters incrementally through online learning to adapt to incessant motion patterns. Finally, the Trilateration algorithm is applied for the precise localization of dynamic objects throughout the sensor network. The performance of the proposed system is evaluated through simulation results, demonstrating its energy-efficient tracking and stable network.",
"title": ""
},
{
"docid": "54bf53b120f5fa1c0cdfad80e5e264c9",
"text": "To ensure safety in the construction of important metallic components for roadworthiness, it is necessary to check every component thoroughly using non-destructive testing. In last decades, X-ray testing has been adopted as the principal non-destructive testing method to identify defects within a component which are undetectable to the naked eye. Nowadays, modern computer vision techniques, such as deep learning and sparse representations, are opening new avenues in automatic object recognition in optical images. These techniques have been broadly used in object and texture recognition by the computer vision community with promising results in optical images. However, a comprehensive evaluation in X-ray testing is required. In this paper, we release a new dataset containing around 47.500 cropped X-ray images of 32 32 pixels with defects and no-defects in automotive components. Using this dataset, we evaluate and compare 24 computer vision techniques including deep learning, sparse representations, local descriptors and texture features, among others. We show in our experiments that the best performance was achieved by a simple LBP descriptor with a SVM-linear classifier obtaining 97% precision and 94% recall. We believe that the methodology presented could be used in similar projects that have to deal with automated detection of defects.",
"title": ""
},
{
"docid": "5a601e08824185bafeb94ac432b6e92e",
"text": "Transforming a natural language (NL) question into a corresponding logical form (LF) is central to the knowledge-based question answering (KB-QA) task. Unlike most previous methods that achieve this goal based on mappings between lexicalized phrases and logical predicates, this paper goes one step further and proposes a novel embedding-based approach that maps NL-questions into LFs for KBQA by leveraging semantic associations between lexical representations and KBproperties in the latent space. Experimental results demonstrate that our proposed method outperforms three KB-QA baseline methods on two publicly released QA data sets.",
"title": ""
},
{
"docid": "f7ed4fb9015dad13d47dec677c469c4b",
"text": "In this paper, a low-cost, power efficient and fast Differential Cascode Voltage-Switch-Logic (DCVSL) based delay cell (named DCVSL-R) is proposed. We use the DCVSL-R cell to implement high frequency and power-critical delay cells and flip-flops of ring oscillators and frequency dividers. When compared to TSPC, DCVSL circuits offer small input and clock capacitance and a symmetric differential loading for previous RF stages. When compared to CML, they offer low transistor count, no headroom limitation, rail-to-rail swing and no static current consumption. However, DCVSL circuits suffer from a large low-to-high propagation delay, which limits their speed and results in asymmetrical output waveforms. The proposed DCVSL-R circuit embodies the benefits of DCVSL while reducing the total propagation delay, achieving faster operation. DCVSL-R also generates symmetrical output waveforms which are critical for differential circuits. Another contribution of this work is a closed-form delay model that predicts the speed of DCVSL circuits with 8% worst case accuracy. We implement two ring-oscillator-based VCOs in 0.13 μm technology with DCVSL and DCVSL-R delay cells. Measurements show that the proposed DCVSL-R based VCO consumes 30% less power than the DCVSL VCO for the same oscillation frequency (2.4 GHz) and same phase noise (-113 dBc/Hz at 10 MHz). DCVSL-R circuits are also used to implement the high frequency dual modulus prescaler (DMP) of a 2.4 GHz frequency synthesizer in 0.18 μm technology. The DMP consumes only 0.8 mW at 2.48 GHz, a 40% reduction in power when compared to other reported DMPs with similar division ratios and operating frequencies. The RF buffer that drives the DMP consumes only 0.27 mW, demonstrating the lowest combined DMP and buffer power consumption among similar synthesizers in literature.",
"title": ""
},
{
"docid": "828c54f29339e86107f1930ae2a5e77f",
"text": "Artificial bee colony (ABC) algorithm is an optimization algorithm based on a particular intelligent behaviour of honeybee swarms. This work compares the performance of ABC algorithm with that of differential evolution (DE), particle swarm optimization (PSO) and evolutionary algorithm (EA) for multi-dimensional numeric problems. The simulation results show that the performance of ABC algorithm is comparable to those of the mentioned algorithms and can be efficiently employed to solve engineering problems with high dimensionality. # 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "edf8d1bb84c0845dddad417a939e343b",
"text": "Suicides committed by intraorally placed firecrackers are rare events. Given to the use of more powerful components such as flash powder recently, some firecrackers may cause massive life-threatening injuries in case of such misuse. Innocuous black powder firecrackers are subject to national explosives legislation and only have the potential to cause harmless injuries restricted to the soft tissue. We here report two cases of suicide committed by an intraoral placement of firecrackers, resulting in similar patterns of skull injury. As it was first unknown whether black powder firecrackers can potentially cause serious skull injury, we compared the potential of destruction using black powder and flash powder firecrackers in a standardized skull simulant model (Synbone, Malans, Switzerland). This was the first experiment to date simulating the impacts resulting from an intraoral burst in a skull simulant model. The intraoral burst of a “D-Böller” (an example of one of the most powerful black powder firecrackers in Germany) did not lead to any injuries of the osseous skull. In contrast, the “La Bomba” (an example of the weakest known flash powder firecrackers) caused complex fractures of both the viscero- and neurocranium. The results obtained from this experimental study indicate that black powder firecrackers are less likely to cause severe injuries as a consequence of intraoral explosions, whereas flash powder-based crackers may lead to massive life-threatening craniofacial destructions and potentially death.",
"title": ""
},
{
"docid": "bf126b871718a5ee09f1e54ea5052d20",
"text": "Deep fully convolutional neural network (FCN) based architectures have shown great potential in medical image segmentation. However, such architectures usually have millions of parameters and inadequate number of training samples leading to over-fitting and poor generalization. In this paper, we present a novel DenseNet based FCN architecture for cardiac segmentation which is parameter and memory efficient. We propose a novel up-sampling path which incorporates long skip and short-cut connections to overcome the feature map explosion in conventional FCN based architectures. In order to process the input images at multiple scales and view points simultaneously, we propose to incorporate Inception module's parallel structures. We propose a novel dual loss function whose weighting scheme allows to combine advantages of cross-entropy and Dice loss leading to qualitative improvements in segmentation. We demonstrate computational efficacy of incorporating conventional computer vision techniques for region of interest detection in an end-to-end deep learning based segmentation framework. From the segmentation maps we extract clinically relevant cardiac parameters and hand-craft features which reflect the clinical diagnostic analysis and train an ensemble system for cardiac disease classification. We validate our proposed network architecture on three publicly available datasets, namely: (i) Automated Cardiac Diagnosis Challenge (ACDC-2017), (ii) Left Ventricular segmentation challenge (LV-2011), (iii) 2015 Kaggle Data Science Bowl cardiac challenge data. Our approach in ACDC-2017 challenge stood second place for segmentation and first place in automated cardiac disease diagnosis tasks with an accuracy of 100% on a limited testing set (n=50). In the LV-2011 challenge our approach attained 0.74 Jaccard index, which is so far the highest published result in fully automated algorithms. In the Kaggle challenge our approach for LV volume gave a Continuous Ranked Probability Score (CRPS) of 0.0127, which would have placed us tenth in the original challenge. Our approach combined both cardiac segmentation and disease diagnosis into a fully automated framework which is computationally efficient and hence has the potential to be incorporated in computer-aided diagnosis (CAD) tools for clinical application.",
"title": ""
},
{
"docid": "c6f4ff7072dcb55c0f86e253160479b7",
"text": "In this study we extracted websites' URL features and analyzed subset based feature selection methods and classification algorithms for phishing websites detection.",
"title": ""
},
{
"docid": "6cf4315ecce8a06d9354ca2f2684113c",
"text": "BACKGROUND\nNutritional supplementation may be used to treat muscle loss with aging (sarcopenia). However, if physical activity does not increase, the elderly tend to compensate for the increased energy delivered by the supplements with reduced food intake, which results in a calorie substitution rather than supplementation. Thus, an effective supplement should stimulate muscle anabolism more efficiently than food or common protein supplements. We have shown that balanced amino acids stimulate muscle protein anabolism in the elderly, but it is unknown whether all amino acids are necessary to achieve this effect.\n\n\nOBJECTIVE\nWe assessed whether nonessential amino acids are required in a nutritional supplement to stimulate muscle protein anabolism in the elderly.\n\n\nDESIGN\nWe compared the response of muscle protein metabolism to either 18 g essential amino acids (EAA group: n = 6, age 69 +/- 2 y; +/- SD) or 40 g balanced amino acids (18 g essential amino acids + 22 g nonessential amino acids, BAA group; n = 8, age 71 +/- 2 y) given orally in small boluses every 10 min for 3 h to healthy elderly volunteers. Muscle protein metabolism was measured in the basal state and during amino acid administration via L-[ring-(2)H(5)]phenylalanine infusion, femoral arterial and venous catheterization, and muscle biopsies.\n\n\nRESULTS\nPhenylalanine net balance (in nmol x min(-1). 100 mL leg volume(-1)) increased from the basal state (P < 0.01), with no differences between groups (BAA: from -16 +/- 5 to 16 +/- 4; EAA: from -18 +/- 5 to 14 +/- 13) because of an increase (P < 0.01) in muscle protein synthesis and no change in breakdown.\n\n\nCONCLUSION\nEssential amino acids are primarily responsible for the amino acid-induced stimulation of muscle protein anabolism in the elderly.",
"title": ""
},
{
"docid": "505e80ac2fe0ee1a34c60279b90d0ca7",
"text": "In an effective e-learning game, the learner’s enjoyment acts as a catalyst to encourage his/her learning initiative. Therefore, the availability of a scale that effectively measures the enjoyment offered by e-learning games assist the game designer to understanding the strength and flaw of the game efficiently from the learner’s points of view. E-learning games are aimed at the achievement of learning objectives via the creation of a flow effect. Thus, this study is based on Sweetser’s & Wyeth’s framework to develop a more rigorous scale that assesses user enjoyment of e-learning games. The scale developed in the present study consists of eight dimensions: Immersion, social interaction, challenge, goal clarity, feedback, concentration, control, and knowledge improvement. Four learning games employed in a university’s online learning course ‘‘Introduction to Software Application” were used as the instruments of scale verification. Survey questionnaires were distributed to students taking the course and 166 valid samples were subsequently collected. The results showed that the validity and reliability of the scale, EGameFlow, were satisfactory. Thus, the measurement is an effective tool for evaluating the level of enjoyment provided by elearning games to their users. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "bb6314a8e6ec728d09aa37bfffe5c835",
"text": "In recent years, Convolutional Neural Network (CNN) has been extensively applied in the field of computer vision, which has also made remarkable achievements. However, the CNN models are computation-intensive and memory-consuming, which hinders the deployment of CNN-based methods on resource-limited embedded platforms. Therefore, this paper gives insight into low numerical precision Convolutional Neural Networks. At first, an image classification CNN model is quantized into 8-bit dynamic fixed-point with no more than 1% accuracy drop and then the method of conducting inference on low-cost ARM processor has been proposed. Experimental results verified the effectiveness of this method. Besides, our proof-of-concept prototype implementation can obtain a frame rate of 4.54fps when running on single Cortex-A72 core under 1.8GHz working frequency and 6.48 watts of gross power consumption.",
"title": ""
}
] |
scidocsrr
|
70eeaacf0ac2e76bb4ba1adfe4684a1a
|
Platform-tolerant PIFA-type UHF RFID tag antenna
|
[
{
"docid": "9cef8aa700cefcbfcc6a79d530987018",
"text": "This paper presents a wideband UHF RFID tag designed for operating on multiple materials including metal. We describe the antenna structure and present the comparison of modeling and simulation results with experimental data.",
"title": ""
}
] |
[
{
"docid": "bc07015b2a2624a75a656ae50d3b4e07",
"text": "Current NAC technologies implement a pre-connect phase whe re t status of a device is checked against a set of policies before being granted access to a network, an d a post-connect phase that examines whether the device complies with the policies that correspond to its rol e in the network. In order to enhance current NAC technologies, we propose a new architecture based on behaviorsrather thanrolesor identity, where the policies are automatically learned and updated over time by the membe rs of the network in order to adapt to behavioral changes of the devices. Behavior profiles may be presented as identity cards that can change over time. By incorporating an Anomaly Detector (AD) to the NAC server or t each of the hosts, their behavior profile is modeled and used to determine the type of behaviors that shou ld be accepted within the network. These models constitute behavior-based policies. In our enhanced NAC ar chitecture, global decisions are made using a group voting process. Each host’s behavior profile is used to compu te a partial decision for or against the acceptance of a new profile or traffic. The aggregation of these partial vote s amounts to the model-group decision. This voting process makes the architecture more resilient to attacks. E ven after accepting a certain percentage of malicious devices, the enhanced NAC is able to compute an adequate deci sion. We provide proof-of-concept experiments of our architecture using web traffic from our department netwo rk. Our results show that the model-group decision approach based on behavior profiles has a 99% detection rate o f nomalous traffic with a false positive rate of only 0.005%. Furthermore, the architecture achieves short latencies for both the preand post-connect phases.",
"title": ""
},
{
"docid": "1514bae0c1b47f5aaf0bfca6a63d9ce9",
"text": "The persistence of racial inequality in the U.S. labor market against a general backdrop of formal equality of opportunity is a troubling phenomenon that has significant ramifications on the design of hiring policies. In this paper, we show that current group disparate outcomes may be immovable even when hiring decisions are bound by an input-output notion of “individual fairness.” Instead, we construct a dynamic reputational model of the labor market that illustrates the reinforcing nature of asymmetric outcomes resulting from groups’ divergent accesses to resources and as a result, investment choices. To address these disparities, we adopt a dual labor market composed of a Temporary Labor Market (TLM), in which firms’ hiring strategies are constrained to ensure statistical parity of workers granted entry into the pipeline, and a Permanent Labor Market (PLM), in which firms hire top performers as desired. Individual worker reputations produce externalities for their group; the corresponding feedback loop raises the collective reputation of the initially disadvantaged group via a TLM fairness intervention that need not be permanent. We show that such a restriction on hiring practices induces an equilibrium that, under particular market conditions, Pareto-dominates those arising from strategies that statistically discriminate or employ a “group-blind” criterion. The enduring nature of equilibria that are both inequitable and Pareto suboptimal suggests that fairness interventions beyond procedural checks of hiring decisions will be of critical importance in a world where machines play a greater role in the employment process. ACM Reference Format: Lily Hu and Yiling Chen. 2018. A Short-term Intervention for Long-term Fairness in the Labor Market. In WWW 2018: The 2018 Web Conference, April 23–27, 2018, Lyon, France. ACM, New York, NY, USA, 10 pages. https: //doi.org/10.1145/3178876.3186044",
"title": ""
},
{
"docid": "d142ad76c2c5bb1565ef539188ce7d43",
"text": "The recent discovery of new classes of small RNAs has opened unknown territories to explore new regulations of physiopathological events. We have recently demonstrated that RNY (or Y RNA)-derived small RNAs (referred to as s-RNYs) are an independent class of clinical biomarkers to detect coronary artery lesions and are associated with atherosclerosis burden. Here, we have studied the role of s-RNYs in human and mouse monocytes/macrophages and have shown that in lipid-laden monocytes/macrophages s-RNY expression is timely correlated to the activation of both NF-κB and caspase 3-dependent cell death pathways. Loss- or gain-of-function experiments demonstrated that s-RNYs activate caspase 3 and NF-κB signaling pathways ultimately promoting cell death and inflammatory responses. As, in atherosclerosis, Ro60-associated s-RNYs generated by apoptotic macrophages are released in the blood of patients, we have investigated the extracellular function of the s-RNY/Ro60 complex. Our data demonstrated that s-RNY/Ro60 complex induces caspase 3-dependent cell death and NF-κB-dependent inflammation, when added to the medium of cultured monocytes/macrophages. Finally, we have shown that s-RNY function is mediated by Toll-like receptor 7 (TLR7). Indeed using chloroquine, which disrupts signaling of endosome-localized TLRs 3, 7, 8 and 9 or the more specific TLR7/9 antagonist, the phosphorothioated oligonucleotide IRS954, we blocked the effect of either intracellular or extracellular s-RNYs. These results position s-RNYs as relevant novel functional molecules that impacts on macrophage physiopathology, indicating their potential role as mediators of inflammatory diseases, such as atherosclerosis.",
"title": ""
},
{
"docid": "5318baa10a6db98a0f31c6c30fdf6104",
"text": "In image analysis, the images are often represented by multiple visual features (also known as multiview features), that aim to better interpret them for achieving remarkable performance of the learning. Since the processes of feature extraction on each view are separated, the multiple visual features of images may include overlap, noise, and redundancy. Thus, learning with all the derived views of the data could decrease the effectiveness. To address this, this paper simultaneously conducts a hierarchical feature selection and a multiview multilabel (MVML) learning for multiview image classification, via embedding a proposed a new block-row regularizer into the MVML framework. The block-row regularizer concatenating a Frobenius norm (F-norm) regularizer and an l2,1-norm regularizer is designed to conduct a hierarchical feature selection, in which the F-norm regularizer is used to conduct a high-level feature selection for selecting the informative views (i.e., discarding the uninformative views) and the 12,1-norm regularizer is then used to conduct a low-level feature selection on the informative views. The rationale of the use of a block-row regularizer is to avoid the issue of the over-fitting (via the block-row regularizer), to remove redundant views and to preserve the natural group structures of data (via the F-norm regularizer), and to remove noisy features (the 12,1-norm regularizer), respectively. We further devise a computationally efficient algorithm to optimize the derived objective function and also theoretically prove the convergence of the proposed optimization method. Finally, the results on real image datasets show that the proposed method outperforms two baseline algorithms and three state-of-the-art algorithms in terms of classification performance.",
"title": ""
},
{
"docid": "2dde6c9387ee0a51220d92a4bc0bb8bf",
"text": "We propose a generic algorithm for computation of similarit y measures for sequential data. The algorithm uses generalized suffix trees f or efficient calculation of various kernel, distance and non-metric similarity func tions. Its worst-case run-time is linear in the length of sequences and independen t of the underlying embedding language, which can cover words, k-grams or all contained subsequences. Experiments with network intrusion detection, DN A analysis and text processing applications demonstrate the utility of distan ces and similarity coefficients for sequences as alternatives to classical kernel fu ctions.",
"title": ""
},
{
"docid": "0ff7f69f341f62711b383699746452fd",
"text": "Dynamic sensitivity control (DSC) is being discussed within the new IEEE 802.11ax task group as one of the potential techniques to improve the system performance for next generation Wi-Fi in high capacity and dense deployment environments, e.g. stadiums, conference venues, shopping malls, etc. However, there appears to be lack of consensus regarding the adoption of DSC within the group. This paper reports on investigations into the performance of the baseline DSC technique proposed in the IEEE 802.11ax task group under realistic scenarios defined by the task group. Simulations were carried out and the results suggest that compared with the default case (no DSC), the use of DSC may lead to mixed results in terms of throughput and fairness with the gain varying depending on factors like inter-AP distance, node distribution, node density and the DSC margin value. Further, we also highlight avenues for mitigating the shortcomings of DSC found in this study.",
"title": ""
},
{
"docid": "2e864dcde57ea1716847f47977af0140",
"text": "I focus on the role of case studies in developing causal explanations. I distinguish between the theoretical purposes of case studies and the case selection strategies or research designs used to advance those objectives. I construct a typology of case studies based on their purposes: idiographic (inductive and theory-guided), hypothesis-generating, hypothesis-testing, and plausibility probe case studies. I then examine different case study research designs, including comparable cases, most and least likely cases, deviant cases, and process tracing, with attention to their different purposes and logics of inference. I address the issue of selection bias and the “single logic” debate, and I emphasize the utility of multi-method research.",
"title": ""
},
{
"docid": "cac628a1f0727994969c554832f4b7e0",
"text": "We have shown that it is possible to achieve artistic style transfer within a purely image processing paradigm. This is in contrast to previous work that utilized deep neural networks to learn the difference between “style” and “content” in a painting. We leverage the work by Kwatra et. al. on texture synthesis to accomplish “style synthesis” from our given style images, building off the work of Elad and Milanfar. We have also introduced a novel “style fusion” concept that guides the algorithm to follow broader structures of style at a higher level while giving it the freedom to make its own artistic decisions at a smaller scale. Our results are comparable to the neural network approach, while improving speed and maintaining robustness to different styles and contents.",
"title": ""
},
{
"docid": "17106095b19d87ad8883af0606714a07",
"text": "Based on American Customer Satisfaction Index model ACSI and study at home and abroad, a Hotel online booking Consumer Satisfaction model (HECS) is established. After empirically testing the validity of the measurement model and structural model of Hotel online booking Consumer Satisfaction, consumer satisfaction index is calculated. Results show that Website easy usability impacts on customer satisfaction most significantly, followed by responsiveness and reliability of the website. Statistic results also show a medium consumer satisfaction index number. Suggestions are given to improve online booking consumer satisfaction, such as website designing of easier using, timely processing of orders, offering more offline personal support for online service, doing more communication with customers, providing more communication channel and so on.",
"title": ""
},
{
"docid": "fb67e237688deb31bd684c714a49dca5",
"text": "In order to mitigate investments, stock price forecasting has attracted more attention in recent years. Aiming at the discreteness, non-normality, high-noise in high-frequency data, a support vector machine regression (SVR) algorithm is introduced in this paper. However, the characteristics in different periods of the same stock, or the same periods of different stocks are significantly different. So, SVR with fixed parameters is difficult to satisfy with the constantly changing data flow. To tackle this problem, an adaptive SVR was proposed for stock data at three different time scales, including daily data, 30-min data, and 5-min data. Experiments show that the improved SVR with dynamic optimization of learning parameters by particle swarm optimization can get a better result than compared methods including SVR and back-propagation neural network.",
"title": ""
},
{
"docid": "f1f281bce1a71c3bce99077e76197560",
"text": "Probabilistic timed automata (PTA) combine discrete probabilistic choice, real time and nondeterminism. This paper presents a fully automatic tool for model checking PTA with respect to probabilistic and expected reachability properties. PTA are specified in Modest, a high-level compositional modelling language that includes features such as exception handling, dynamic parallelism and recursion, and thus enables model specification in a convenient fashion. For model checking, we use an integral semantics of time, representing clocks with bounded integer variables. This makes it possible to use the probabilistic model checker PRISM as analysis backend. We describe details of the approach and its implementation, and report results obtained for three different case studies.",
"title": ""
},
{
"docid": "9c447f9a2b00a2e27433601fce4ab4ce",
"text": "The Hypertext Transfer Protocol (HTTP) has been widely adopted and deployed as the key protocol for video streaming over the Internet. One of the consequences of leveraging traditional HTTP for video streaming is the significantly increased request overhead due to the segmentation of the video content into HTTP resources. The overhead becomes even more significant when non-multiplexed video and audio segments are deployed. In this paper, we investigate and address the request overhead problem by employing the server push technology in the new HTTP 2.0 protocol. In particular, we develop a set of push strategies that actively deliver video and audio content from the HTTP server without requiring a request for each individual segment. We evaluate our approach in a Dynamic Adaptive Streaming over HTTP (DASH) streaming system. We show that the request overhead can be significantly reduced by using our push strategies. Also, we validate that the server push based approach is compatible with the existing HTTP streaming features, such as adaptive bitrate switching.",
"title": ""
},
{
"docid": "2d7a13754631206203d6618ab2a27a76",
"text": "This Contrast enhancement is frequently referred to as one of the most important issues in image processing. Histogram equalization (HE) is one of the common methods used for improving contrast in digital images. Histogram equalization (HE) has proved to be a simple and effective image contrast enhancement technique. However, the conventional histogram equalization methods usually result in excessive contrast enhancement, which causes the unnatural look and visual artifacts of the processed image. This paper presents a review of new forms of histogram for image contrast enhancement. The major difference among the methods in this family is the criteria used to divide the input histogram. Brightness preserving BiHistogram Equalization (BBHE) and Quantized Bi-Histogram Equalization (QBHE) use the average intensity value as their separating point. Dual Sub-Image Histogram Equalization (DSIHE) uses the median intensity value as the separating point. Minimum Mean Brightness Error Bi-HE (MMBEBHE) uses the separating point that produces the smallest Absolute Mean Brightness Error (AMBE). Recursive Mean-Separate Histogram Equalization (RMSHE) is another improvement of BBHE. The Brightness preserving dynamic histogram equalization (BPDHE) method is actually an extension to both MPHEBP and DHE. Weighting mean-separated sub-histogram equalization (WMSHE) method is to perform the effective contrast enhancement of the digital image. Keywords-component image processing; contrast enhancement; histogram equalization; minimum mean brightness error; brightness preserving enhancement, histogram partition.",
"title": ""
},
{
"docid": "4018814855e4cd7232d7c75636a538b8",
"text": "Personalized recommendation of Points of Interest (POIs) plays a key role in satisfying users on Location-Based Social Networks (LBSNs). In this article, we propose a probabilistic model to find the mapping between user-annotated tags and locations’ taste keywords. Furthermore, we introduce a dataset on locations’ contextual appropriateness and demonstrate its usefulness in predicting the contextual relevance of locations. We investigate four approaches to use our proposed mapping for addressing the data sparsity problem: one model to reduce the dimensionality of location taste keywords and three models to predict user tags for a new location. Moreover, we present different scores calculated from multiple LBSNs and show how we incorporate new information from the mapping into a POI recommendation approach. Then, the computed scores are integrated using learning to rank techniques. The experiments on two TREC datasets show the effectiveness of our approach, beating state-of-the-art methods.",
"title": ""
},
{
"docid": "f182fdd2f5bae84b5fc38284f83f0c27",
"text": "We adopted an approach based on an LSTM neural network to monitor and detect faults in industrial multivariate time series data. To validate the approach we created a Modelica model of part of a real gasoil plant. By introducing hacks into the logic of the Modelica model, we were able to generate both the roots and causes of fault behavior in the plant. Having a self-consistent data set with labeled faults, we used an LSTM architecture with a forecasting error threshold to obtain precision and recall quality metrics. The dependency of the quality metric on the threshold level is considered. An appropriate mechanism such as “one handle” was introduced for filtering faults that are outside of the plant operator field of interest.",
"title": ""
},
{
"docid": "d5b20e250e28cae54a7f3c868f342fc5",
"text": "Context: Reusing software by means of copy and paste is a frequent activity in software development. The duplicated code is known as a software clone and the activity is known as code cloning. Software clones may lead to bug propagation and serious maintenance problems. Objective: This study reports an extensive systematic literature review of software clones in general and software clone detection in particular. Method: We used the standard systematic literature review method based on a comprehensive set of 213 articles from a total of 2039 articles published in 11 leading journals and 37 premier conferences and",
"title": ""
},
{
"docid": "d1c0b58fa78ecda169d3972eae870590",
"text": "Power system stability is defined as an ability of the power system to reestablish the initial steady state or come into the new steady state after any variation of the system's operation value or after system´s breakdown. The stability and reliability of the electric power system is highly actual topic nowadays, especially in the light of recent accidents like splitting of UCTE system and north-American blackouts. This paper deals with the potential of the evaluation in term of transient stability of the electric power system within the defense plan and the definition of the basic criterion for the transient stability – Critical Clearing Time (CCT).",
"title": ""
},
{
"docid": "58b2ee3d0a4f61d4db883bc0a896f8f4",
"text": "While applications for mobile devices have become extremely important in the last few years, little public information exists on mobile application usage behavior. We describe a large-scale deployment-based research study that logged detailed application usage information from over 4,100 users of Android-powered mobile devices. We present two types of results from analyzing this data: basic descriptive statistics and contextual descriptive statistics. In the case of the former, we find that the average session with an application lasts less than a minute, even though users spend almost an hour a day using their phones. Our contextual findings include those related to time of day and location. For instance, we show that news applications are most popular in the morning and games are at night, but communication applications dominate through most of the day. We also find that despite the variety of apps available, communication applications are almost always the first used upon a device's waking from sleep. In addition, we discuss the notion of a virtual application sensor, which we used to collect the data.",
"title": ""
},
{
"docid": "3e6df23444ae08f65ded768c5dc8dc9d",
"text": "In this paper, we propose a method for automatically detecting various types of snore sounds using image classification convolutional neural network (CNN) descriptors extracted from audio file spectrograms. The descriptors, denoted as deep spectrum features, are derived from forwarding spectrograms through very deep task-independent pre-trained CNNs. Specifically, activations of fully connected layers from two common image classification CNNs, AlexNet and VGG19, are used as feature vectors. Moreover, we investigate the impact of differing spectrogram colour maps and two CNN architectures on the performance of the system. Results presented indicate that deep spectrum features extracted from the activations of the second fully connected layer of AlexNet using a viridis colour map are well suited to the task. This feature space, when combined with a support vector classifier, outperforms the more conventional knowledge-based features of 6 373 acoustic functionals used in the INTERSPEECH ComParE 2017 Snoring sub-challenge baseline system. In comparison to the baseline, unweighted average recall is increased from 40.6% to 44.8% on the development partition, and from 58.5% to 67.0% on the test partition.",
"title": ""
},
{
"docid": "a482218d67b0df6343f63f6d1b796c8e",
"text": "Decoupling local geometric features from the spatial location of a mesh is crucial for feature-preserving mesh denoising. This paper focuses on first order features, i.e., facet normals, and presents a simple yet effective anisotropic mesh denoising framework via normal field denoising. Unlike previous denoising methods based on normal filtering, which process normals defined on the Gauss sphere, our method considers normals as a surface signal defined over the original mesh. This allows the design of a novel bilateral normal filter that depends on both spatial distance and signal distance. Our bilateral filter is a more natural extension of the elegant bilateral filter for image denoising than those used in previous bilateral mesh denoising methods. Besides applying this bilateral normal filter in a local, iterative scheme, as common in most of previous works, we present for the first time a global, noniterative scheme for an isotropic denoising. We show that the former scheme is faster and more effective for denoising extremely noisy meshes while the latter scheme is more robust to irregular surface sampling. We demonstrate that both our feature-preserving schemes generally produce visually and numerically better denoising results than previous methods, especially at challenging regions with sharp features or irregular sampling.",
"title": ""
}
] |
scidocsrr
|
0d3bc1d1725c9bc96856f0649aae7b7e
|
Deep Learning Face Representation from Predicting 10,000 Classes
|
[
{
"docid": "152e5d8979eb1187e98ecc0424bb1fde",
"text": "Face verification remains a challenging problem in very complex conditions with large variations such as pose, illumination, expression, and occlusions. This problem is exacerbated when we rely unrealistically on a single training data source, which is often insufficient to cover the intrinsically complex face variations. This paper proposes a principled multi-task learning approach based on Discriminative Gaussian Process Latent Variable Model (DGPLVM), named GaussianFace, for face verification. In contrast to relying unrealistically on a single training data source, our model exploits additional data from multiple source-domains to improve the generalization performance of face verification in an unknown target-domain. Importantly, our model can adapt automatically to complex data distributions, and therefore can well capture complex face variations inherent in multiple sources. To enhance discriminative power, we introduced a more efficient equivalent form of Kernel Fisher Discriminant Analysis to DGPLVM. To speed up the process of inference and prediction, we exploited the low rank approximation method. Extensive experiments demonstrated the effectiveness of the proposed model in learning from diverse data sources and generalizing to unseen domains. Specifically, the accuracy of our algorithm achieved an impressive accuracy rate of 98.52% on the well-known and challenging Labeled Faces in the Wild (LFW) benchmark. For the first time, the human-level performance in face verification (97.53%) on LFW is surpassed.",
"title": ""
}
] |
[
{
"docid": "126c3c034bfd1380e0cbd115d07989a2",
"text": "This paper presents a four-pole elliptic tunable combline bandpass filter with center frequency and bandwidth control. The filter is built on a Duroid substrate with εr=10.2 and h=25 mils, and the tuning is done using packaged Schottky diodes. A frequency range of 1.55-2.1 GHz with a 1-dB bandwidth tuning from 40-120 MHz (2.2-8% fractional bandwidth) is demonstrated. A pair of tunable transmission zeroes are synthesized at both passband edges and significantly improve the filter selectivity. The rejection level at both the lower and upper stopbands is >; 50 dB and no spurious response exists close to the passband. The measured third-order intermodulation intercept point (TOI) and 1-dB power compression point at midband (1.85 GHz) and a bandwidth of 110 MHz are >; 14& dBm and 6 dBm, respectively, and are limited by the Schottky diodes. It is believed that this is the first four-pole combline tunable bandpass filter with an elliptic function response and center frequency and bandwidth control. The application areas are in tunable filters for wireless systems and cognitive radios.",
"title": ""
},
{
"docid": "845cce1a45804da160e2a4bed0469638",
"text": "The adoption of game mechanics into serious contexts such as business applications (gamification) is a promising trend to improve the user’s participation and engagement with the software in question and on the job. However, this topic is mainly driven by practitioners. A theoretical model for gamification with appropriate empirical validation is missing. In this paper, we introduce a prototype for gamification using SAP ERP as example. Moreover, we have evaluated the concept within a comprehensive user study with 112 participants based on the technology acceptance model (TAM) using partial least squares (PLS) for analysis. Finally, we show that this gamification approach yields significant improvements in latent variables such as enjoyment, flow or perceived ease of use. Moreover, we outline further research requirements in the domain of gamification.",
"title": ""
},
{
"docid": "553980e1d2432d1d27f84f8edcfc81bc",
"text": "The home of the future should be a smart one, to support us in our daily life. Up to now only a few security incidents in that area are known. Depending on different security analyses, this fact is rather a result of the low spread of Smart Home products than the success of such systems security. Given that Smart Homes become more and more popular, we will consider current incidents and analyses to estimate potential security threats in the future. The definitions of a Smart Home drift widely apart. Thus we first need to define Smart Home for ourselves and additionally provide a way to categorize the big mass of products into smaller groups.",
"title": ""
},
{
"docid": "b59f429192a680c1dc07580d21f9e374",
"text": "Recently, several competing smart home programming frameworks that support third party app development have emerged. These frameworks provide tangible benefits to users, but can also expose users to significant security risks. This paper presents the first in-depth empirical security analysis of one such emerging smart home programming platform. We analyzed Samsung-owned SmartThings, which has the largest number of apps among currently available smart home platforms, and supports a broad range of devices including motion sensors, fire alarms, and door locks. SmartThings hosts the application runtime on a proprietary, closed-source cloud backend, making scrutiny challenging. We overcame the challenge with a static source code analysis of 499 SmartThings apps (called SmartApps) and 132 device handlers, and carefully crafted test cases that revealed many undocumented features of the platform. Our key findings are twofold. First, although SmartThings implements a privilege separation model, we discovered two intrinsic design flaws that lead to significant overprivilege in SmartApps. Our analysis reveals that over 55% of SmartApps in the store are overprivileged due to the capabilities being too coarse-grained. Moreover, once installed, a SmartApp is granted full access to a device even if it specifies needing only limited access to the device. Second, the SmartThings event subsystem, which devices use to communicate asynchronously with SmartApps via events, does not sufficiently protect events that carry sensitive information such as lock codes. We exploited framework design flaws to construct four proof-of-concept attacks that: (1) secretly planted door lock codes, (2) stole existing door lock codes, (3) disabled vacation mode of the home, and (4) induced a fake fire alarm. We conclude the paper with security lessons for the design of emerging smart home programming frameworks.",
"title": ""
},
{
"docid": "c2869d1324181e08cc80a9ba069dead8",
"text": "Human identifi cation leads to mutual trust that is essential for the proper functioning of society. We have been identifying fellow humans based on their voice, appearance, or gait for thousands of years. However, a systematic and scientifi c basis for human identifi cation started in the nineteenth century when Alphonse Bertillon (Rhodes and Henry 1956 ) introduced the use of a number of anthropomorphic measurements to identify habitual criminals. The Bertillon system was short-lived: soon after its introduction, the distinctiveness of human fi ngerprints was established. Since the early 1900s, fi ngerprints have been an accepted method in forensic investigations to identify suspects and repeat criminals. Now, virtually all law enforcement agencies worldwide use Automatic Fingerprint Identifi cation Systems (AFIS). With growing concerns about terrorist activities, security breaches, and fi nancial fraud, other physiological and behavioral human characteristics have been used for person identifi cation. These distinctive characteristics, or biometric traits, include features such as face, iris, palmprint, and voice. Biometrics (Jain et al. 2006, 2007 ) is now a mature technology that is widely used in a variety of applications ranging from border crossings (e.g., the US-VISIT program) to visiting Walt Disney Parks.",
"title": ""
},
{
"docid": "1bd1a43a0885f33b7ea9863a656758e4",
"text": "In this paper a semi-supervised deep framework is proposed for the problem of 3D shape inverse rendering from a single 2D input image. The main structure of proposed framework consists of unsupervised pre-trained components which significantly reduce the need to labeled data for training the whole framework. using labeled data has the advantage of achieving to accurate results without the need to predefined assumptions about image formation process. Three main components are used in the proposed network: an encoder which maps 2D input image to a representation space, a 3D decoder which decodes a representation to a 3D structure and a mapping component in order to map 2D to 3D representation. The only part that needs label for training is the mapping part with not too many parameters. The other components in the network can be pre-trained unsupervised using only 2D images or 3D data in each case. The way of reconstructing 3D shapes in the decoder component, inspired by the model based methods for 3D reconstruction, maps a low dimensional representation to 3D shape space with the advantage of extracting the basis vectors of shape space from training data itself and is not restricted to a small set of examples as used in predefined models. Therefore, the proposed framework deals directly with coordinate values of the point cloud representation which leads to achieve dense 3D shapes in the output. The experimental results on several benchmark datasets of objects and human faces and comparing with recent similar methods shows the power of proposed network in recovering more details from single 2D images.",
"title": ""
},
{
"docid": "4bc1a78a3c9749460da218fd9d314e56",
"text": "Fast and accurate side-chain conformation prediction is important for homology modeling, ab initio protein structure prediction, and protein design applications. Many methods have been presented, although only a few computer programs are publicly available. The SCWRL program is one such method and is widely used because of its speed, accuracy, and ease of use. A new algorithm for SCWRL is presented that uses results from graph theory to solve the combinatorial problem encountered in the side-chain prediction problem. In this method, side chains are represented as vertices in an undirected graph. Any two residues that have rotamers with nonzero interaction energies are considered to have an edge in the graph. The resulting graph can be partitioned into connected subgraphs with no edges between them. These subgraphs can in turn be broken into biconnected components, which are graphs that cannot be disconnected by removal of a single vertex. The combinatorial problem is reduced to finding the minimum energy of these small biconnected components and combining the results to identify the global minimum energy conformation. This algorithm is able to complete predictions on a set of 180 proteins with 34342 side chains in <7 min of computer time. The total chi(1) and chi(1 + 2) dihedral angle accuracies are 82.6% and 73.7% using a simple energy function based on the backbone-dependent rotamer library and a linear repulsive steric energy. The new algorithm will allow for use of SCWRL in more demanding applications such as sequence design and ab initio structure prediction, as well addition of a more complex energy function and conformational flexibility, leading to increased accuracy.",
"title": ""
},
{
"docid": "8b73f2f12edde981f4e995380a5b9e0c",
"text": "The detection of acoustic scenes is a challenging problem in which environmental sound events must be detected from a given audio signal. This includes classifying the events as well as estimating their onset and offset times. We approach this problem with a neural network architecture that uses the recently-proposed capsule routing mechanism. A capsule is a group of activation units representing a set of properties for an entity of interest, and the purpose of routing is to identify part-whole relationships between capsules. That is, a capsule in one layer is assumed to belong to a capsule in the layer above in terms of the entity being represented. Using capsule routing, we wish to train a network that can learn global coherence implicitly, thereby improving generalization performance. Our proposed method is evaluated on Task 4 of the DCASE 2017 challenge. Results show that classification performance is state-of-the-art, achieving an F-score of 58.6%. In addition, overfitting is reduced considerably compared to other architectures.",
"title": ""
},
{
"docid": "c56d09b3c08f2cb9cc94ace3733b1c54",
"text": "In this paper, we describe our microblog realtime filtering system developed and submitted for the Text Retrieval Conference (TREC 2015) microblog track. We submitted six runs for two tasks related to real-time filtering by using various Information Retrieval (IR), and Machine Learning (ML) techniques to analyze the Twitter sample live stream and match relevant tweets corresponding to specific user interest profiles. Evaluation results demonstrate the effectiveness of our approach as we achieved 3 of the top 7 best scores among automatic submissions across all participants and obtained the best (or close to best) scores in more than 25% of the evaluated topics for the real-time mobile push notification task.",
"title": ""
},
{
"docid": "e7ed6060dcae9deea01ec24a999c2563",
"text": "All organizations learn, whether they consciously choose to or not-it is a fundamental requirement for their sustained existence. Some firms deliberately advance organizational learning, developing capabilities that are consistent with their objectives; others make no focused effort and, therefore, acquire habits that are counterproductive. Nonetheless, all organizations learn. But what does it mean that an organization learns? We can think of organizational learning as a metaphor derived from our understanding of individual learning. In fact, organizations ultimately learn via their individual members. Hence, theories of individual learning are crucial for understanding organizational learning. Psychologists have studied individual learning for decades, but they are still far from fully understanding the workings of the human mind. Likewise, the theory of organizational learning is still in its embryonic stage. The purpose of this paper is to build a theory about the process through which individual learning advances organizational learning. To do this, we must address the role of individual learning and memory, differentiate between levels of learning, take into account different organizational types, and specify the transfer mechanism between individual and organizational learning. This transfer is at the heart of organizational learning: the process through which individual learning becomes embedded in an organization's memory and structure. Until now, it has received little attention and is not well understood, although a promising interaction between organization theory and psychology has begun. To contribute to our understanding of the nature of the learning organization, I present a framework that focuses on the crucial link between individual learning and organizational learning. Once we have a clear understanding of this transfer process, we can actively manage the learning process to make it consistent with an organization's goals, vision, and values.",
"title": ""
},
{
"docid": "af6cd7f5448acab7cf569b88eb5b3859",
"text": "Advances in wireless sensor network (WSN) technology has provided the availability of small and low-cost sensor nodes with capability of sensing various types of physical and environmental conditions, data processing, and wireless communication. Variety of sensing capabilities results in profusion of application areas. However, the characteristics of wireless sensor networks require more effective methods for data forwarding and processing. In WSN, the sensor nodes have a limited transmission range, and their processing and storage capabilities as well as their energy resources are also limited. Routing protocols for wireless sensor networks are responsible for maintaining the routes in the network and have to ensure reliable multi-hop communication under these conditions. In this paper, we give a survey of routing protocols for Wireless Sensor Network and compare their strengths and limitations.",
"title": ""
},
{
"docid": "1ade1bea5fece2d1882c6b6fac1ef63e",
"text": "Probe-based confocal laser endomicroscopy is a recent tissue imaging technology that requires placing a probe in contact with the tissue to be imaged and provides real time images with a microscopic resolution. Additionally, generating adequate probe movements to sweep the tissue surface can be used to reconstruct a wide mosaic of the scanned region while increasing the resolution which is appropriate for anatomico-pathological cancer diagnosis. However, properly controlling the motion along the scanning trajectory is a major problem. Indeed, the tissue exhibits deformations under friction forces exerted by the probe leading to deformed mosaics. In this paper we propose a visual servoing approach for controlling the probe movements relative to the tissue while rejecting the tissue deformation disturbance. The probe displacement with respect to the tissue is firstly estimated using the confocal images and an image registration real-time algorithm. Secondly, from this real-time image-based position measurement, the probe motion is controlled thanks to a simple proportional-integral compensator and a feedforward term. Ex vivo experiments using a Stäubli TX40 robot and a Mauna Kea Technologies Cellvizio imaging device demonstrate the effectiveness of the approach on liver and muscle tissue.",
"title": ""
},
{
"docid": "2e9f2a2e9b74c4634087a664a85fef9f",
"text": "Parkinson’s disease (PD) is the second most common neurodegenerative disease, which is characterized by loss of dopaminergic (DA) neurons in the substantia nigra pars compacta and the formation of Lewy bodies and Lewy neurites in surviving DA neurons in most cases. Although the cause of PD is still unclear, the remarkable advances have been made in understanding the possible causative mechanisms of PD pathogenesis. Numerous studies showed that dysfunction of mitochondria may play key roles in DA neuronal loss. Both genetic and environmental factors that are associated with PD contribute to mitochondrial dysfunction and PD pathogenesis. The induction of PD by neurotoxins that inhibit mitochondrial complex I provides direct evidence linking mitochondrial dysfunction to PD. Decrease of mitochondrial complex I activity is present in PD brain and in neurotoxin- or genetic factor-induced PD cellular and animal models. Moreover, PINK1 and parkin, two autosomal recessive PD gene products, have important roles in mitophagy, a cellular process to clear damaged mitochondria. PINK1 activates parkin to ubiquitinate outer mitochondrial membrane proteins to induce a selective degradation of damaged mitochondria by autophagy. In this review, we summarize the factors associated with PD and recent advances in understanding mitochondrial dysfunction in PD.",
"title": ""
},
{
"docid": "8207c9dd4c6cdf75e666a6d982981d07",
"text": "Novelty search is a recently proposed method for evolutionary computation designed to avoid the problem of deception, in which the fitness function guides the search process away from global optima. Novelty search replaces fitness-based selection with novelty-based selection, where novelty is measured by comparing an individual's behavior to that of the current population and an archive of past novel individuals. Though there is substantial evidence that novelty search can overcome the problem of deception, the critical factors in its performance remain poorly understood. This paper helps to bridge this gap by analyzing how the behavior function, which maps each genotype to a behavior, affects performance. We propose the notion of descendant fitness probability (DFP), which describes how likely a genotype's descendants are to have a certain fitness, and formulate two hypotheses about when changes to the behavior function will improve novelty search's performance, based on the effect of those changes on behavior and DFP. Experiments in both artificial and deceptive maze domains provide substantial empirical support for these hypotheses.",
"title": ""
},
{
"docid": "bd3620816c83fae9b4a5c871927f2b73",
"text": "Quantifying behavior is crucial for many applications in neuroscience. Videography provides easy methods for the observation and recording of animal behavior in diverse settings, yet extracting particular aspects of a behavior for further analysis can be highly time consuming. In motor control studies, humans or other animals are often marked with reflective markers to assist with computer-based tracking, but markers are intrusive, and the number and location of the markers must be determined a priori. Here we present an efficient method for markerless pose estimation based on transfer learning with deep neural networks that achieves excellent results with minimal training data. We demonstrate the versatility of this framework by tracking various body parts in multiple species across a broad collection of behaviors. Remarkably, even when only a small number of frames are labeled (~200), the algorithm achieves excellent tracking performance on test frames that is comparable to human accuracy. Using a deep learning approach to track user-defined body parts during various behaviors across multiple species, the authors show that their toolbox, called DeepLabCut, can achieve human accuracy with only a few hundred frames of training data.",
"title": ""
},
{
"docid": "fdd01ae46b9c57eada917a6e74796141",
"text": "This paper presents a high-level discussion of dexterity in robotic systems, focusing particularly on manipulation and hands. While it is generally accepted in the robotics community that dexterity is desirable and that end effectors with in-hand manipulation capabilities should be developed, there has been little, if any, formal description of why this is needed, particularly given the increased design and control complexity required. This discussion will overview various definitions of dexterity used in the literature and highlight issues related to specific metrics and quantitative analysis. It will also present arguments regarding why hand dexterity is desirable or necessary, particularly in contrast to the capabilities of a kinematically redundant arm with a simple grasper. Finally, we overview and illustrate the various classes of in-hand manipulation, and review a number of dexterous manipulators that have been previously developed. We believe this work will help to revitalize the dialogue on dexterity in the manipulation community and lead to further formalization of the concepts discussed here.",
"title": ""
},
{
"docid": "f58a66f2caf848341b29094e9d3b0e71",
"text": "Since student performance and pass rates in school reflect teaching level of the school and even all education system, it is critical to improve student pass rates and reduce dropout rates. Decision Tree (DT) algorithm and Support Vector Machine (SVM) algorithm in data mining, have been used by researchers to find important student features and predict the student pass rates, however they did not consider the coefficient of initialization, and whether there is a dependency between student features. Therefore, in this study, we propose a new concept: features dependencies, and use the grid search algorithm to optimize DT and SVM, in order to improve the accuracy of the algorithm. Furthermore, we added 10-fold cross-validation to DT and SVM algorithm. The results show the experiment can achieve better results in this work. The purpose of this study is providing assistance to students who have greater difficulties in their studies, and students who are at risk of graduating through data mining techniques.",
"title": ""
},
{
"docid": "113c07908c1f22c7671553c7f28c0b3f",
"text": "Nearly 80% of children in the United States have at least 1 sibling, indicating that the birth of a baby sibling is a normative ecological transition for most children. Many clinicians and theoreticians believe the transition is stressful, constituting a developmental crisis for most children. Yet, a comprehensive review of the empirical literature on children's adjustment over the transition to siblinghood (TTS) has not been done for several decades. The current review summarizes research examining change in first borns' adjustment to determine whether there is evidence that the TTS is disruptive for most children. Thirty studies addressing the TTS were found, and of those studies, the evidence did not support a crisis model of developmental transitions, nor was there overwhelming evidence of consistent changes in firstborn adjustment. Although there were decreases in children's affection and responsiveness toward mothers, the results were more equivocal for many other behaviors (e.g., sleep problems, anxiety, aggression, regression). An inspection of the scientific literature indicated there are large individual differences in children's adjustment and that the TTS can be a time of disruption, an occasion for developmental advances, or a period of quiescence with no noticeable changes. The TTS may be a developmental turning point for some children that portends future psychopathology or growth depending on the transactions between children and the changes in the ecological context over time. A developmental ecological systems framework guided the discussion of how child, parent, and contextual factors may contribute to the prediction of firstborn children's successful adaptation to the birth of a sibling.",
"title": ""
},
{
"docid": "6bdcd13e63a4f24561f575efcd232dad",
"text": "Men have called me mad,” wrote Edgar Allan Poe, “but the question is not yet settled, whether madness is or is not the loftiest intelligence— whether much that is glorious—whether all that is profound—does not spring from disease of thought—from moods of mind exalted at the expense of the general intellect.” Many people have long shared Poe’s suspicion that genius and insanity are entwined. Indeed, history holds countless examples of “that fine madness.” Scores of influential 18thand 19th-century poets, notably William Blake, Lord Byron and Alfred, Lord Tennyson, wrote about the extreme mood swings they endured. Modern American poets John Berryman, Randall Jarrell, Robert Lowell, Sylvia Plath, Theodore Roethke, Delmore Schwartz and Anne Sexton were all hospitalized for either mania or depression during their lives. And many painters and composers, among them Vincent van Gogh, Georgia O’Keeffe, Charles Mingus and Robert Schumann, have been similarly afflicted. Judging by current diagnostic criteria, it seems that most of these artists—and many others besides—suffered from one of the major mood disorders, namely, manic-depressive illness or major depression. Both are fairly common, very treatable and yet frequently lethal diseases. Major depression induces intense melancholic spells, whereas manic-depression, Manic-Depressive Illness and Creativity",
"title": ""
},
{
"docid": "4825e492dc1b7b645a5b92dde0c766cd",
"text": "This article shows how language processing is intimately tuned to input frequency. Examples are given of frequency effects in the processing of phonology, phonotactics, reading, spelling, lexis, morphosyntax, formulaic language, language comprehension, grammaticality, sentence production, and syntax. The implications of these effects for the representations and developmental sequence of SLA are discussed. Usage-based theories hold that the acquisition of language is exemplar based. It is the piecemeal learning of many thousands of constructions and the frequency-biased abstraction of regularities within them. Determinants of pattern productivity include the power law of practice, cue competition and constraint satisfaction, connectionist learning, and effects of type and token frequency. The regularities of language emerge from experience as categories and prototypical patterns. The typical route of emergence of constructions is from formula, through low-scope pattern, to construction. Frequency plays a large part in explaining sociolinguistic variation and language change. Learners’ sensitivity to frequency in all these domains has implications for theories of implicit and explicit learning and their interactions. The review concludes by considering the history of frequency as an explanatory concept in theoretical and applied linguistics, its 40 years of exile, and its necessary reinstatement as a bridging variable that binds the different schools of language acquisition research.",
"title": ""
}
] |
scidocsrr
|
2bcbe92be31315c9fbab39a0684eb566
|
Exploiting Temporal and Social Factors for B2B Marketing Campaign Recommendations
|
[
{
"docid": "13b887760a87bc1db53b16eb4fba2a01",
"text": "Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.",
"title": ""
},
{
"docid": "0b6846c4dd89be21af70b144c93f7a7b",
"text": "Most existing collaborative filtering models only consider the use of user feedback (e.g., ratings) and meta data (e.g., content, demographics). However, in most real world recommender systems, context information, such as time and social networks, are also very important factors that could be considered in order to produce more accurate recommendations. In this work, we address several challenges for the context aware movie recommendation tasks in CAMRa 2010: (1) how to combine multiple heterogeneous forms of user feedback? (2) how to cope with dynamic user and item characteristics? (3) how to capture and utilize social connections among users? For the first challenge, we propose a novel ranking based matrix factorization model to aggregate explicit and implicit user feedback. For the second challenge, we extend this model to a sequential matrix factorization model to enable time-aware parametrization. Finally, we introduce a network regularization function to constrain user parameters based on social connections. To the best of our knowledge, this is the first study that investigates the collective modeling of social and temporal dynamics. Experiments on the CAMRa 2010 dataset demonstrated clear improvements over many baselines.",
"title": ""
},
{
"docid": "51dce19889df3ae51b6c12e3f2a47672",
"text": "Existing recommender systems model user interests and the social influences independently. In reality, user interests may change over time, and as the interests change, new friends may be added while old friends grow apart and the new friendships formed may cause further interests change. This complex interaction requires the joint modeling of user interest and social relationships over time. In this paper, we propose a probabilistic generative model, called Receptiveness over Time Model (RTM), to capture this interaction. We design a Gibbs sampling algorithm to learn the receptiveness and interest distributions among users over time. The results of experiments on a real world dataset demonstrate that RTM-based recommendation outperforms the state-of-the-art recommendation methods. Case studies also show that RTM is able to discover the user interest shift and receptiveness change over time",
"title": ""
},
{
"docid": "8ca30cd6fd335024690837c137f0d1af",
"text": "Non-negative matrix factorization (NMF) is a recently deve loped technique for finding parts-based, linear representations of non-negative data. Although it h as successfully been applied in several applications, it does not always result in parts-based repr esentations. In this paper, we show how explicitly incorporating the notion of ‘sparseness’ impro ves the found decompositions. Additionally, we provide complete MATLAB code both for standard NMF a nd for our extension. Our hope is that this will further the application of these methods to olving novel data-analysis problems.",
"title": ""
}
] |
[
{
"docid": "91cb5e59cb11f7d5ba3300cf4f00ff5d",
"text": "Blockchain is a technology uniquely suited to support massive number of transactions and smart contracts within the Internet of Things (IoT) ecosystem, thanks to the decentralized accounting mechanism. In a blockchain network, the states of the accounts are stored and updated by the validator nodes, interconnected in a peer-to-peer fashion. IoT devices are characterized by relatively low computing capabilities and low power consumption, as well as sporadic and low-bandwidth wireless connectivity. An IoT device connects to one or more validator nodes to observe or modify the state of the accounts. In order to interact with the most recent state of accounts, a device needs to be synchronized with the blockchain copy stored by the validator nodes. In this work, we describe general architectures and synchronization protocols that enable synchronization of the IoT endpoints to the blockchain, with different communication costs and security levels. We model and analytically characterize the traffic generated by the synchronization protocols, and also investigate the power consumption and synchronization trade-off via numerical simulations. To the best of our knowledge, this is the first study that rigorously models the role of wireless connectivity in blockchain-powered IoT systems.",
"title": ""
},
{
"docid": "ecc7f7c7c81645e7f2feeb6ac8d8f737",
"text": "Worldwide, there are more than 10 million new cancer cases each year, and cancer is the cause of approximately 12% of all deaths. Given this, a large number of epidemiologic studies have been undertaken to identify potential risk factors for cancer, amongst which the association with trace elements has received considerable attention. Trace elements, such as selenium, zinc, arsenic, cadmium, and nickel, are found naturally in the environment, and human exposure derives from a variety of sources, including air, drinking water, and food. Trace elements are of particular interest given that the levels of exposure to them are potentially modifiable. In this review, we focus largely on the association between each of the trace elements noted above and risk of cancers of the lung, breast, colorectum, prostate, urinary bladder, and stomach. Overall, the evidence currently available appears to support an inverse association between selenium exposure and prostate cancer risk, and possibly also a reduction in risk with respect to lung cancer, although additional prospective studies are needed. There is also limited evidence for an inverse association between zinc and breast cancer, and again, prospective studies are needed to confirm this. Most studies have reported no association between selenium and risk of breast, colorectal, and stomach cancer, and between zinc and prostate cancer risk. There is compelling evidence in support of positive associations between arsenic and risk of both lung and bladder cancers, and between cadmium and lung cancer risk.",
"title": ""
},
{
"docid": "d76246dfee7e2f3813e025ac34ffc354",
"text": "Web usage mining is application of data mining techniques to discover usage patterns from web data, in order to better serve the needs of web based applications. The user access log files present very significant information about a web server. This paper is concerned with the in-depth analysis of Web Log Data of NASA website to find information about a web site, top errors, potential visitors of the site etc. which help system administrator and Web designer to improve their system by determining occurred systems errors, corrupted and broken links by using web using mining. The obtained results of the study will be used in the further development of the web site in order to increase its effectiveness.",
"title": ""
},
{
"docid": "fcceec0849ed7f00a77b45f4297f2218",
"text": "Image retargeting is a process to change the resolution of image while preserve interesting regions and avoid obvious visual distortion. In other words, it focuses on image content more than anything else that applies to filter the useful information for data analysis. Existing approaches may encounter difficulties on the various types of images since most of these approaches only consider 2D features, which are sensitive to the complexity of the contents in images. Researchers are now focusing on the RGB-D information, hoping depth information can help to promote the accuracy. However it is not easy to obtain the RGB-D image we need anywhere and how to utilize depth information is still at the exploration stage. In this paper, instead of using RGB-D data captured by 3D camera, we employ an iterative MRF learning model to predict depth information from a single still image. Then we propose our self-learning 3D saliency model based on the RGB-D data and apply it on the seam carving framework. In seam caving, the self-learning 3D saliency is combined with L1-norm of gradient for better seam searching. Experimental results demonstrate the advantages of our method using RGB-D data in the seam carving framework.",
"title": ""
},
{
"docid": "c158e9421ec0d1265bd625b629e64dc5",
"text": "This paper proposes a gateway framework for in-vehicle networks (IVNs) based on the controller area network (CAN), FlexRay, and Ethernet. The proposed gateway framework is designed to be easy to reuse and verify to reduce development costs and time. The gateway framework can be configured, and its verification environment is automatically generated by a program with a dedicated graphical user interface (GUI). The gateway framework provides state-of-the-art functionalities that include parallel reprogramming, diagnostic routing, network management (NM), dynamic routing update, multiple routing configuration, and security. The proposed gateway framework was developed, and its performance was analyzed and evaluated.",
"title": ""
},
{
"docid": "ccd883caf9a4bc10db6ec67d033b22eb",
"text": "In this paper, a quality model for object-oriented software and an automated metric tool, Reconfigurable Automated Metrics for Object-Oriented Software (RAMOOS) are proposed. The quality model is targeted at the maintainability and reusability aspects of software which can be effectively predicted from the source code. RAMOOS assists users in applying customized quality model during the development of software. In the beginning of adopting RAMOOS, a user may need to use his intuition to select or modify a system-recommended metric model to fit his specific software project needs. If the initial metrics do not meet the expectation, the user can retrive the saved intermediate results and perform further modification to the metric model. The verified model can then be applied to future similar projects.",
"title": ""
},
{
"docid": "2282af5c9f4de5e0de2aae14c0a47840",
"text": "The penetration of smart devices such as mobile phones, tabs has significantly changed the way people communicate. This has led to the growth of usage of social media tools such as twitter, facebook chats for communication. This has led to development of new challenges and perspectives in the language technologies research. Automatic processing of such texts requires us to develop new methodologies. Thus there is great need to develop various automatic systems such as information extraction, retrieval and summarization. Entity recognition is a very important sub task of Information extraction and finds its applications in information retrieval, machine translation and other higher Natural Language Processing (NLP) applications such as co-reference resolution. Some of the main issues in handling of such social media texts are i) Spelling errors ii) Abbreviated new language vocabulary such as “gr8” for great iii) use of symbols such as emoticons/emojis iv) use of meta tags and hash tags v) Code mixing. Entity recognition and extraction has gained increased attention in Indian research community. However there is no benchmark data available where all these systems could be compared on same data for respective languages in this new generation user generated text. Towards this we have organized the Code Mix Entity Extraction in social media text track for Indian languages (CMEE-IL) in the Forum for Information Retrieval Evaluation (FIRE). We present the overview of CMEE-IL 2016 track. This paper describes the corpus created for Hindi-English and Tamil-English. Here we also present overview of the approaches used by the participants. CCS Concepts • Computing methodologies ~ Artificial intelligence • Computing methodologies ~ Natural language processing • Information systems ~ Information extraction",
"title": ""
},
{
"docid": "d69571c1614c3a078d36467d91a09bc6",
"text": "In many species of oviparous reptiles, the first steps of gonadal sex differentiation depend on the incubation temperature of the eggs. Feminization of gonads by exogenous oestrogens at a male-producing temperature and masculinization of gonads by antioestrogens and aromatase inhibitors at a female-producing temperature have irrefutably demonstrated the involvement of oestrogens in ovarian differentiation. Nevertheless, several studies performed on the entire gonad/adrenal/mesonephros complex failed to find differences between male- and female-producing temperatures in oestrogen content, aromatase activity and aromatase gene expression during the thermosensitive period for sex determination. Thus, the key role of aromatase and oestrogens in the first steps of ovarian differentiation has been questioned, and extragonadal organs or tissues, such as adrenal, mesonephros, brain or yolk, were considered as possible targets of temperature and sources of the oestrogens acting on gonadal sex differentiation. In disagreement with this view, experiments and assays carried out on the gonads alone, i.e. separated from the adrenal/mesonephros, provide evidence that the gonads themselves respond to temperature shifts by modifying their sexual differentiation and are the site of aromatase activity and oestrogen synthesis during the thermosensitive period. Oestrogens act locally on both the cortical and the medullary part of the gonad to direct ovarian differentiation. We have concluded that there is no objective reason to search for the implication of other organs in the phenomenon of temperature-dependent sex determination in reptiles. From the comparison with data obtained in other vertebrates, we propose two main directions for future research: to examine how transcription of the aromatase gene is regulated and to identify molecular and cellular targets of oestrogens in gonads during sex differentiation, in species with strict genotypic sex determination and species with temperature-dependent sex determination.",
"title": ""
},
{
"docid": "92963d6a511d5e0a767aa34f8932fe86",
"text": "A 77-GHz transmit-array on dual-layer printed circuit board (PCB) is proposed for automotive radar applications. Coplanar patch unit-cells are etched on opposite sides of the PCB and connected by through-via. The unit-cells are arranged in concentric rings to form the transmit-array for 1-bit in-phase transmission. When combined with four-substrate-integrated waveguide (SIW) slot antennas as the primary feeds, the transmit-array is able to generate four beams with a specific coverage of ±15°. The simulated and measured results of the antenna prototype at 76.5 GHz agree well, with gain greater than 18.5 dBi. The coplanar structure significantly simplifies the transmit-array design and eases the fabrication, in particular, at millimeter-wave frequencies.",
"title": ""
},
{
"docid": "4d56f134c2e2a597948bcf9b1cf37385",
"text": "This paper focuses on semantic scene completion, a task for producing a complete 3D voxel representation of volumetric occupancy and semantic labels for a scene from a single-view depth map observation. Previous work has considered scene completion and semantic labeling of depth maps separately. However, we observe that these two problems are tightly intertwined. To leverage the coupled nature of these two tasks, we introduce the semantic scene completion network (SSCNet), an end-to-end 3D convolutional network that takes a single depth image as input and simultaneously outputs occupancy and semantic labels for all voxels in the camera view frustum. Our network uses a dilation-based 3D context module to efficiently expand the receptive field and enable 3D context learning. To train our network, we construct SUNCG - a manually created largescale dataset of synthetic 3D scenes with dense volumetric annotations. Our experiments demonstrate that the joint model outperforms methods addressing each task in isolation and outperforms alternative approaches on the semantic scene completion task. The dataset and code is available at http://sscnet.cs.princeton.edu.",
"title": ""
},
{
"docid": "137b9760d265304560f1cac14edb7f21",
"text": "Gallstones are solid particles formed from bile in the gall bladder. In this paper, we propose a technique to automatically detect Gallstones in ultrasound images, christened as, Automated Gallstone Segmentation (AGS) Technique. Speckle Noise in the ultrasound image is first suppressed using Anisotropic Diffusion Technique. The edges are then enhanced using Unsharp Filtering. NCUT Segmentation Technique is then put to use to segment the image. Afterwards, edges are detected using Sobel Edge Detection. Further, Edge Thickening Process is used to smoothen the edges and probability maps are generated using Floodfill Technique. Then, the image is scribbled using Automatic Scribbling Technique. Finally, we get the segmented gallstone within the gallbladder using the Closed Form Matting Technique.",
"title": ""
},
{
"docid": "64122833d6fa0347f71a9abff385d569",
"text": "We present a brief history and overview of statistical methods in frame-semantic parsing – the automatic analysis of text using the theory of frame semantics. We discuss how the FrameNet lexicon and frameannotated datasets have been used by statistical NLP researchers to build usable, state-of-the-art systems. We also focus on future directions in frame-semantic parsing research, and discuss NLP applications that could benefit from this line of work. 1 Frame-Semantic Parsing Frame-semantic parsing has been considered as the task of automatically finding semantically salient targets in text, disambiguating their semantic frame representing an event and scenario in discourse, and annotating arguments consisting of words or phrases in text with various frame elements (or roles). The FrameNet lexicon (Baker et al., 1998), an ontology inspired by the theory of frame semantics (Fillmore, 1982), serves as a repository of semantic frames and their roles. Figure 1 depicts a sentence with three evoked frames for the targets “million”, “created” and “pushed” with FrameNet frames and roles. Automatic analysis of text using framesemantic structures can be traced back to the pioneering work of Gildea and Jurafsky (2002). Although their experimental setup relied on a primitive version of FrameNet and only made use of “exemplars” or example usages of semantic frames (containing one target per sentence) as opposed to a “corpus” of sentences, it resulted in a flurry of work in the area of automatic semantic role labeling (Màrquez et al., 2008). However, the focus of semantic role labeling (SRL) research has mostly been on PropBank (Palmer et al., 2005) conventions, where verbal targets could evoke a “sense” frame, which is not shared across targets, making the frame disambiguation setup different from the representation in FrameNet. Furthermore, it is fair to say that early research on PropBank focused primarily on argument structure prediction, and the interaction between frame and argument structure analysis has mostly been unaddressed (Màrquez et al., 2008). There are exceptions, where the verb frame has been taken into account during SRL (Meza-Ruiz and Riedel, 2009; Watanabe et al., 2010). Moreoever, the CoNLL 2008 and 2009 shared tasks also include the verb and noun frame identification task in their evaluations, although the overall goal was to predict semantic dependencies based on PropBank, and not full argument spans (Surdeanu et al., 2008; Hajič",
"title": ""
},
{
"docid": "6d26012bd529735410477c9f389bbf73",
"text": "Most current planners assume complete domain models and focus on generating correct plans. Unfortunately, domain modeling is a laborious and error-prone task, thus real world agents have to plan with incomplete domain models. While domain experts cannot guarantee completeness, often they are able to circumscribe the incompleteness of the model by providing annotations as to which parts of the domain model may be incomplete. In this paper, we study planning problems with incomplete domain models where the annotations specify possible preconditions and effects of actions. We show that the problem of assessing the quality of a plan, or its plan robustness, is #P -complete, establishing its equivalence with the weighted model counting problems. We present two approaches to synthesizing robust plans. While the method based on the compilation to conformant probabilistic planning is much intuitive, its performance appears to be limited to only small problem instances. Our second approach based on stochastic heuristic search works well for much larger problems. It aims to use the robustness measure directly for estimating heuristic distance, which is then used to guide the search. Our planning system, PISA, outperforms a state-of-the-art planner handling incomplete domain models in most of the tested domains, both in terms of plan quality and planning time. Finally, we also present an extension of PISA called CPISA that is able to exploit the available of past successful plan traces to both improve the robustness of the synthesized plans and reduce the domain modeling burden.",
"title": ""
},
{
"docid": "223d5658dee7ba628b9746937aed9bb3",
"text": "A low-power receiver with a one-tap data and edge decision-feedback equalizer (DFE) and a clock recovery circuit is presented. The receiver employs analog adders for the tap-weight summation in both the data and the edge path to simultaneously optimize both the voltage and timing margins. A switched-capacitor input stage allows the receiver to be fully compatible with near-GND input levels without extra level conversion circuits. Furthermore, the critical path of the DFE is simplified to relax the timing margin. Fabricated in the 65-nm CMOS technology, a prototype DFE receiver shows that the data-path DFE extends the voltage and timing margins from 40 mVpp and 0.3 unit interval (UI), respectively, to 70 mVpp and 0.6 UI, respectively. Likewise, the edge-path equalizer reduces the uncertain sampling region (the edge region), which results in 17% reduction of the recovered clock jitter. The DFE core, including adders and samplers, consumes 1.1 mW from a 1.2-V supply while operating at 6.4 Gb/s.",
"title": ""
},
{
"docid": "42392af599ce65f38748420353afc534",
"text": "An innovative technology for the mass production ofstretchable printed circuit boards (SCBs) will bepresented in this paper. This technology makes itpossible for the first time to really integrate fine pitch,high performance electronic circuits easily into textilesand so may be the building block for a totally newgeneration of wearable electronic systems. Anoverview of the technology will be given andsubsequently a real system using SCB technology ispresented.",
"title": ""
},
{
"docid": "aaa2c8a7367086cd762f52b6a6c30df6",
"text": "Many mature term-based or pattern-based approaches have been used in the field of information filtering to generate users' information needs from a collection of documents. A fundamental assumption for these approaches is that the documents in the collection are all about one topic. However, in reality users' interests can be diverse and the documents in the collection often involve multiple topics. Topic modelling, such as Latent Dirichlet Allocation (LDA), was proposed to generate statistical models to represent multiple topics in a collection of documents, and this has been widely utilized in the fields of machine learning and information retrieval, etc. But its effectiveness in information filtering has not been so well explored. Patterns are always thought to be more discriminative than single terms for describing documents. However, the enormous amount of discovered patterns hinder them from being effectively and efficiently used in real applications, therefore, selection of the most discriminative and representative patterns from the huge amount of discovered patterns becomes crucial. To deal with the above mentioned limitations and problems, in this paper, a novel information filtering model, Maximum matched Pattern-based Topic Model (MPBTM), is proposed. The main distinctive features of the proposed model include: (1) user information needs are generated in terms of multiple topics; (2) each topic is represented by patterns; (3) patterns are generated from topic models and are organized in terms of their statistical and taxonomic features; and (4) the most discriminative and representative patterns, called Maximum Matched Patterns, are proposed to estimate the document relevance to the user's information needs in order to filter out irrelevant documents. Extensive experiments are conducted to evaluate the effectiveness of the proposed model by using the TREC data collection Reuters Corpus Volume 1. The results show that the proposed model significantly outperforms both state-of-the-art term-based models and pattern-based models.",
"title": ""
},
{
"docid": "fb7961117dae98e770e0fe84c33673b9",
"text": "Named-Entity Recognition (NER) aims at identifying the fragments of a given text that mention a given entity of interest. This manuscript presents our Minimal named-Entity Recognizer (MER), designed with flexibility, autonomy and efficiency in mind. To annotate a given text, MER only requires a lexicon (text file) with the list of terms representing the entities of interest; and a GNU Bash shell grep and awk tools. MER was deployed in a cloud infrastructure using multiple Virtual Machines to work as an annotation server and participate in the Technical Interoperability and Performance of annotation Servers (TIPS) task of BioCreative V.5. Preliminary results show that our solution processed each document (text retrieval and annotation) in less than 3 seconds on average without using any type of cache. MER is publicly available in a GitHub repository (https://github.com/lasigeBioTM/MER) and through a RESTful Web service (http://labs.fc.ul.pt/mer/).",
"title": ""
},
{
"docid": "a513c25bccbeda0c4314213aea49668a",
"text": "Identity recognition faces several challenges especially in extracting an individual's unique features from biometric modalities and pattern classifications. Electrocardiogram (ECG) waveforms, for instance, have unique identity properties for human recognition, and their signals are not periodic. At present, in order to generate a significant ECG feature set, nonfiducial methodologies based on an autocorrelation (AC) in conjunction with linear dimension reduction methods are used. This paper proposes a new non-fiducial framework for ECG biometric verification using kernel methods to reduce both high autocorrelation vectors' dimensionality and recognition system after denoising signals of 52 subjects with Discrete Wavelet Transform (DWT). The effects of different dimensionality reduction techniques for use in feature extraction were investigated to evaluate verification performance rates of a multi-class Support Vector Machine (SVM) with the One-Against-All (OAA) approach. The experimental results demonstrated higher test recognition rates of Gaussian OAA SVMs on random unknown ECG data sets with the use of the Kernel Principal Component Analysis (KPCA) as compared to the use of the Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA). Keyword: ECG biometric recognition; Non-fiducial feature extraction; Kernel methods; Dimensionality reduction; Gaussian OAA SVM",
"title": ""
},
{
"docid": "2e0e53ff34dccd5412faab5b51a3a2f2",
"text": "This study examines print and online daily newspaper journalists’ perceptions of the credibility of Internet news information, as well as the influence of several factors— most notably, professional role conceptions—on those perceptions. Credibility was measured as a multidimensional construct. The results of a survey of U.S. journalists (N = 655) show that Internet news information was viewed as moderately credible overall and that online newspaper journalists rated Internet news information as significantly more credible than did print newspaper journalists. Hierarchical regression analyses reveal that Internet reliance was a strong positive predictor of credibility. Two professional role conceptions also emerged as significant predictors. The populist mobilizer role conception was a significant positive predictor of online news credibility, while the adversarial role conception was a significant negative predictor. Demographic characteristics of print and online daily newspaper journalists did not influence their perceptions of online news credibility.",
"title": ""
},
{
"docid": "a752279721e2bf6142a0ca34a1a708f3",
"text": "Zika virus (ZIKV) is a mosquito-borne flavivirus first isolated in Uganda from a sentinel monkey in 1947. Mosquito and sentinel animal surveillance studies have demonstrated that ZIKV is endemic to Africa and Southeast Asia, yet reported human cases are rare, with <10 cases reported in the literature. In June 2007, an epidemic of fever and rash associated with ZIKV was detected in Yap State, Federated States of Micronesia. We report the genetic and serologic properties of the ZIKV associated with this epidemic.",
"title": ""
}
] |
scidocsrr
|
7bd3876d9badd720037ed7ffece74b62
|
ARmatika: 3D game for arithmetic learning with Augmented Reality technology
|
[
{
"docid": "ae4c9e5df340af3bd35ae5490083c72a",
"text": "The massive technological advancements around the world have created significant challenging competition among companies where each of the companies tries to attract the customers using different techniques. One of the recent techniques is Augmented Reality (AR). The AR is a new technology which is capable of presenting possibilities that are difficult for other technologies to offer and meet. Nowadays, numerous augmented reality applications have been used in the industry of different kinds and disseminated all over the world. AR will really alter the way individuals view the world. The AR is yet in its initial phases of research and development at different colleges and high-tech institutes. Throughout the last years, AR apps became transportable and generally available on various devices. Besides, AR begins to occupy its place in our audio-visual media and to be used in various fields in our life in tangible and exciting ways such as news, sports and is used in many domains in our life such as electronic commerce, promotion, design, and business. In addition, AR is used to facilitate the learning whereas it enables students to access location-specific information provided through various sources. Such growth and spread of AR applications pushes organizations to compete one another, and every one of them exerts its best to gain the customers. This paper provides a comprehensive study of AR including its history, architecture, applications, current challenges and future trends.",
"title": ""
},
{
"docid": "273153d0cf32162acb48ed989fa6d713",
"text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "f1c00253a57236ead67b013e7ce94a5e",
"text": "A meta-analysis of 128 studies examined the effects of extrinsic rewards on intrinsic motivation. As predicted, engagement-contingent, completion-contingent, and performance-contingent rewards significantly undermined free-choice intrinsic motivation (d = -0.40, -0.36, and -0.28, respectively), as did all rewards, all tangible rewards, and all expected rewards. Engagement-contingent and completion-contingent rewards also significantly undermined self-reported interest (d = -0.15, and -0.17), as did all tangible rewards and all expected rewards. Positive feedback enhanced both free-choice behavior (d = 0.33) and self-reported interest (d = 0.31). Tangible rewards tended to be more detrimental for children than college students, and verbal rewards tended to be less enhancing for children than college students. The authors review 4 previous meta-analyses of this literature and detail how this study's methods, analyses, and results differed from the previous ones.",
"title": ""
}
] |
[
{
"docid": "1eafc02a19766817536f3da89230b4cf",
"text": "Basically, Bayesian Belief Networks (BBNs) as probabilistic tools provide suitable facilities for modelling process under uncertainty. A BBN applies a Directed Acyclic Graph (DAG) for encoding relations between all variables in state of problem. Finding the beststructure (structure learning) ofthe DAG is a classic NP-Hard problem in BBNs. In recent years, several algorithms are proposed for this task such as Hill Climbing, Greedy Thick Thinning and K2 search. In this paper, we introduced Simulated Annealing algorithm with complete details as new method for BBNs structure learning. Finally, proposed algorithm compared with other structure learning algorithms based on classification accuracy and construction time on valuable databases. Experimental results of research show that the simulated annealing algorithmis the bestalgorithmfrom the point ofconstructiontime but needs to more attention for classification process.",
"title": ""
},
{
"docid": "82c8a692e3b39e58bd73997b2e922c2c",
"text": "The traditional approaches to building survivable systems assume a framework of absolute trust requiring a provably impenetrable and incorruptible Trusted Computing Base (TCB). Unfortunately, we don’t have TCB’s, and experience suggests that we never will. We must instead concentrate on software systems that can provide useful services even when computational resource are compromised. Such a system will 1) Estimate the degree to which a computational resources may be trusted using models of possible compromises. 2) Recognize that a resource is compromised by relying on a system for long term monitoring and analysis of the computational infrastructure. 3) Engage in self-monitoring, diagnosis and adaptation to best achieve its purposes within the available infrastructure. All this, in turn, depends on the ability of the application, monitoring, and control systems to engage in rational decision making about what resources they should use in order to achieve the best ratio of expected benefit to risk.",
"title": ""
},
{
"docid": "245204d71a7ba2f56897ccb67f26b595",
"text": "The objective of the study is to describe distinguishing characteristics of commercial sexual exploitation of children/child sex trafficking victims (CSEC) who present for health care in the pediatric setting. This is a retrospective study of patients aged 12-18 years who presented to any of three pediatric emergency departments or one child protection clinic, and who were identified as suspected victims of CSEC. The sample was compared with gender and age-matched patients with allegations of child sexual abuse/sexual assault (CSA) without evidence of CSEC on variables related to demographics, medical and reproductive history, high-risk behavior, injury history and exam findings. There were 84 study participants, 27 in the CSEC group and 57 in the CSA group. Average age was 15.7 years for CSEC patients and 15.2 years for CSA patients; 100% of the CSEC and 94.6% of the CSA patients were female. The two groups significantly differed in 11 evaluated areas with the CSEC patients more likely to have had experiences with violence, substance use, running away from home, and involvement with child protective services and/or law enforcement. CSEC patients also had a longer history of sexual activity. Adolescent CSEC victims differ from sexual abuse victims without evidence of CSEC in their reproductive history, high risk behavior, involvement with authorities, and history of violence.",
"title": ""
},
{
"docid": "38382c04e7dc46f5db7f2383dcae11fb",
"text": "Motor schemas serve as the basic unit of behavior specification for the navigation of a mobile robot. They are multiple concurrent processes that operate in conjunction with associated perceptual schemas and contribute independently to the overall concerted action of the vehicle. The motivation behind the use of schemas for this domain is drawn from neuroscientific, psychological, and robotic sources. A variant of the potential field method is used to produce the appropriate velocity and steering commands for the robot. Simulation results and actual mobile robot experiments demonstrate the feasibility of this approach.",
"title": ""
},
{
"docid": "fca196c6900f43cf6fd711f8748c6768",
"text": "The fatigue fracture of structural details subjected to cyclic loads mostly occurs at a critical cross section with stress concentration. The welded joint is particularly dangerous location because of sinergetic harmful effects of stress concentration, tensile residual stresses, deffects, microstructural heterogeneity. Because of these reasons many methods for improving the fatigue resistance of welded joints are developed. Significant increase in fatigue strength and fatigue life was proved and could be attributed to improving weld toe profile, the material microstructure, removing deffects at the weld toe and modifying the original residual stress field. One of the most useful methods to improve fatigue behaviour of welded joints is TIG dressing. The magnitude of the improvement in fatigue performance depends on base material strength, type of welded joint and type of loading. Improvements of the fatigue behaviour of the welded joints in low-carbon structural steel treated by TIG dressing is considered in this paper.",
"title": ""
},
{
"docid": "5b6f55af9994b2c2491344fca573502d",
"text": "From times immemorial, colorants, and flavorings have been used in foods. Color and flavor are the major attributes to the quality of a food product, affecting the appearance and acceptance of the product. As a consequence of the increased demand of natural flavoring and colorant from industries, there is a renewed interest in the research on the composition and recovery of natural food flavors and colors. Over the years, numerous procedures have been proposed for the isolation of aromatic compounds and colors from plant materials. Generally, the methods of extraction followed for aroma and pigment from plant materials are solvent extraction, hydro-distillation, steam distillation, and super critical carbon dioxide extraction. The application of enzymes in the extraction of oil from oil seeds like sunflower, corn, coconut, olives, avocado etc. are reported in literature. There is a great potential for this enzyme-based extraction technology with the selection of appropriate enzymes with optimized operating conditions. Various enzyme combinations are used to loosen the structural integrity of botanical material thereby enhancing the extraction of the desired flavor and color components. Recently enzymes have been used for the extraction of flavor and color from plant materials, as a pre-treatment of the raw material before subjecting the plant material to hydro distillation/solvent extraction. A deep knowledge of enzymes, their mode of action, conditions for optimum activity, and selection of the right type of enzymes are essential to use them effectively for extraction. Although the enzyme hydrolases such as lipases, proteases (chymotrypsin, subtilisin, thermolysin, and papain), esterases use water as a substrate for the reaction, they are also able to accept other nucleophiles such as alcohols, amines, thio-esters, and oximes. Advantages of enzyme-assisted extraction of flavor and color in some of the plant materials in comparison with conventional methods are dealt with in this reveiw.",
"title": ""
},
{
"docid": "46dc94fe4ba164ccf1cb37810112883f",
"text": "The purpose of the study was to test four predictions derived from evolutionary (sexual strategies) theory. The central hypothesis was that men and women possess different emotional mechanisms that motivate and evaluate sexual activities. Consequently, even when women express indifference to emotional involvement and commitment and voluntarily engage in casual sexual relations, their goals, their feelings about the experience, and the associations between their sexual behavior and prospects for long-term investment differ significantly from those of men. Women's sexual behavior is associated with their perception of investment potential: long-term, short-term, and partners' ability and willingness to invest. For men,these associations are weaker or inversed. Regression analyses of survey data from 333 male and 363 female college students revealed the following: Greater permissiveness of sexual attitudes was positively associated with number of sex partners; this association was not moderated by sex of subject (Prediction 1); even when women deliberately engaged in casual sexual relations, thoughts that expressed worry and vulnerability crossed their minds; for females, greater number of partners was associated with increased worry-vulnerability whereas for males the trend was the opposite (Prediction 2); with increasing numbers of sex partners, marital thoughts decreased; this finding was not moderated by sex of subject; this finding did not support Prediction 3; for both males and females, greater number of partners was related to larger numbers of one-night stands, partners foreseen in the next 5 years, and deliberately casual sexual relations. This trend was significantly stronger for males than for females (Prediction 4).",
"title": ""
},
{
"docid": "636f5002b3ced8a541df3e0568604f71",
"text": "We report density functional theory (M06L) calculations including Poisson-Boltzmann solvation to determine the reaction pathways and barriers for the hydrogen evolution reaction (HER) on MoS2, using both a periodic two-dimensional slab and a Mo10S21 cluster model. We find that the HER mechanism involves protonation of the electron rich molybdenum hydride site (Volmer-Heyrovsky mechanism), leading to a calculated free energy barrier of 17.9 kcal/mol, in good agreement with the barrier of 19.9 kcal/mol estimated from the experimental turnover frequency. Hydronium protonation of the hydride on the Mo site is 21.3 kcal/mol more favorable than protonation of the hydrogen on the S site because the electrons localized on the Mo-H bond are readily transferred to form dihydrogen with hydronium. We predict the Volmer-Tafel mechanism in which hydrogen atoms bound to molybdenum and sulfur sites recombine to form H2 has a barrier of 22.6 kcal/mol. Starting with hydrogen atoms on adjacent sulfur atoms, the Volmer-Tafel mechanism goes instead through the M-H + S-H pathway. In discussions of metal chalcogenide HER catalysis, the S-H bond energy has been proposed as the critical parameter. However, we find that the sulfur-hydrogen species is not an important intermediate since the free energy of this species does not play a direct role in determining the effective activation barrier. Rather we suggest that the kinetic barrier should be used as a descriptor for reactivity, rather than the equilibrium thermodynamics. This is supported by the agreement between the calculated barrier and the experimental turnover frequency. These results suggest that to design a more reactive catalyst from edge exposed MoS2, one should focus on lowering the reaction barrier between the metal hydride and a proton from the hydronium in solution.",
"title": ""
},
{
"docid": "eb3886f7e212f2921b3333a8e1b7b0ed",
"text": "With the resurgence of head-mounted displays for virtual reality, users need new input devices that can accurately track their hands and fingers in motion. We introduce Finexus, a multipoint tracking system using magnetic field sensing. By instrumenting the fingertips with electromagnets, the system can track fine fingertip movements in real time using only four magnetic sensors. To keep the system robust to noise, we operate each electromagnet at a different frequency and leverage bandpass filters to distinguish signals attributed to individual sensing points. We develop a novel algorithm to efficiently calculate the 3D positions of multiple electromagnets from corresponding field strengths. In our evaluation, we report an average accuracy of 1.33 mm, as compared to results from an optical tracker. Our real-time implementation shows Finexus is applicable to a wide variety of human input tasks, such as writing in the air.",
"title": ""
},
{
"docid": "19e070089a8495a437e81da50f3eb21c",
"text": "Mobile payment refers to the use of mobile devices to conduct payment transactions. Users can use mobile devices for remote and proximity payments; moreover, they can purchase digital contents and physical goods and services. It offers an alternative payment method for consumers. However, there are relative low adoption rates in this payment method. This research aims to identify and explore key factors that affect the decision of whether to use mobile payments. Two well-established theories, the Technology Acceptance Model (TAM) and the Innovation Diffusion Theory (IDT), are applied to investigate user acceptance of mobile payments. Survey data from mobile payments users will be used to test the proposed hypothesis and the model.",
"title": ""
},
{
"docid": "c71f3284872169d1f506927000df557b",
"text": "Natural rewards and drugs of abuse can alter dopamine signaling, and ventral tegmental area (VTA) dopaminergic neurons are known to fire action potentials tonically or phasically under different behavioral conditions. However, without technology to control specific neurons with appropriate temporal precision in freely behaving mammals, the causal role of these action potential patterns in driving behavioral changes has been unclear. We used optogenetic tools to selectively stimulate VTA dopaminergic neuron action potential firing in freely behaving mammals. We found that phasic activation of these neurons was sufficient to drive behavioral conditioning and elicited dopamine transients with magnitudes not achieved by longer, lower-frequency spiking. These results demonstrate that phasic dopaminergic activity is sufficient to mediate mammalian behavioral conditioning.",
"title": ""
},
{
"docid": "b1827b03bc37fde80f99b73b6547c454",
"text": "When constructing the model of a word by collecting interval-valued data from a group of individuals, both interpersonal and intrapersonal uncertainties coexist. Similar to the interval type-2 fuzzy set (IT2 FS) used in the enhanced interval approach (EIA), the Cloud model characterized by only three parameters can manage both uncertainties. Thus, based on the Cloud model, this paper proposes a new representation model for a word from interval-valued data. In our proposed method, firstly, the collected data intervals are preprocessed to remove the bad ones. Secondly, the fuzzy statistical method is used to compute the histogram of the surviving intervals. Then, the generated histogram is fitted by a Gaussian curve function. Finally, the fitted results are mapped into the parameters of a Cloud model to obtain the parametric model for a word. Compared with eight or nine parameters needed by an IT2 FS, only three parameters are needed to represent a Cloud model. Therefore, we develop a much more parsimonious parametric model for a word based on the Cloud model. Generally a simpler representation model with less parameters usually means less computations and memory requirements in applications. Moreover, the comparison experiments with the recent EIA show that, our proposed method can not only obtain much thinner footprints of uncertainty (FOUs) but also capture sufficient uncertainties of words. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "4a837ccd9e392f8c7682446d9a3a3743",
"text": "This paper investigates the applicability of Genetic Programming type systems to dynamic game environments. Grammatical Evolution was used to evolve Behaviour Trees, in order to create controllers for the Mario AI Benchmark. The results obtained reinforce the applicability of evolutionary programming systems to the development of artificial intelligence in games, and in dynamic systems in general, illustrating their viability as an alternative to more standard AI techniques.",
"title": ""
},
{
"docid": "d365eceff514375d7ae19f70aec71c08",
"text": "Importance\nSeveral studies now provide evidence of ketamine hydrochloride's ability to produce rapid and robust antidepressant effects in patients with mood and anxiety disorders that were previously resistant to treatment. Despite the relatively small sample sizes, lack of longer-term data on efficacy, and limited data on safety provided by these studies, they have led to increased use of ketamine as an off-label treatment for mood and other psychiatric disorders.\n\n\nObservations\nThis review and consensus statement provides a general overview of the data on the use of ketamine for the treatment of mood disorders and highlights the limitations of the existing knowledge. While ketamine may be beneficial to some patients with mood disorders, it is important to consider the limitations of the available data and the potential risk associated with the drug when considering the treatment option.\n\n\nConclusions and Relevance\nThe suggestions provided are intended to facilitate clinical decision making and encourage an evidence-based approach to using ketamine in the treatment of psychiatric disorders considering the limited information that is currently available. This article provides information on potentially important issues related to the off-label treatment approach that should be considered to help ensure patient safety.",
"title": ""
},
{
"docid": "4620525bfbfd492f469e948b290d73a2",
"text": "This thesis contains the complete end-to-end simulation, development, implementation, and calibration of the wide bandwidth, low-Q, Kiwi-SAS synthetic aperture sonar (SAS). Through the use of a very stable towfish, a new novel wide bandwidth transducer design, and autofocus procedures, high-resolution diffraction limited imagery is produced. As a complete system calibration was performed, this diffraction limited imagery is not only geometrically calibrated, it is also calibrated for target cross-section or target strength estimation. Is is important to note that the diffraction limited images are formed without access to any form of inertial measurement information. Previous investigations applying the synthetic aperture technique to sonar have developed processors based on exact, but inefficient, spatial-temporal domain time-delay and sum beamforming algorithms, or they have performed equivalent operations in the frequency domain using fast-correlation techniques (via the fast Fourier transform (FFT)). In this thesis, the algorithms used in the generation of synthetic aperture radar (SAR) images are derived in their wide bandwidth forms and it is shown that these more efficient algorithms can be used to form diffraction limited SAS images. Several new algorithms are developed; accelerated chirp scaling algorithm represents an efficient method for processing synthetic aperture data, while modified phase gradient autofocus and a low-Q autofocus routine based on prominent point processing are used to focus both simulated and real target data that has been corrupted by known and unknown motion or medium propagation errors.",
"title": ""
},
{
"docid": "260fa16461d510094d810f04c333a220",
"text": "We propose a novel VAE-based deep autoencoder model that can learn disentangled latent representations in a fully unsupervised manner, endowed with the ability to identify all meaningful sources of variation and their cardinality. Our model, dubbed Relevance-Factor-VAE, leverages the total correlation (TC) in the latent space to achieve the disentanglement goal, but also addresses the key issue of existing approaches which cannot distinguish between meaningful and nuisance factors of latent variation, often the source of considerable degradation in disentanglement performance. We tackle this issue by introducing the so-called relevance indicator variables that can be automatically learned from data, together with the VAE parameters. Our model effectively focuses the TC loss onto the relevant factors only by tolerating large prior KL divergences, a desideratum justified by our semi-parametric theoretical analysis. Using a suite of disentanglement metrics, including a newly proposed one, as well as qualitative evidence, we demonstrate that our model outperforms existing methods across several challenging benchmark datasets.",
"title": ""
},
{
"docid": "4791e1e3ccde1260887d3a80ea4577b6",
"text": "The fabulous results of Deep Convolution Neural Networks in computer vision and image analysis have recently attracted considerable attention from researchers of other application domains as well. In this paper we present NgramCNN, a neural network architecture we designed for sentiment analysis of long text documents. It uses pretrained word embeddings for dense feature representation and a very simple single-layer classifier. The complexity is encapsulated in feature extraction and selection parts that benefit from the effectiveness of convolution and pooling layers. For evaluation we utilized different kinds of emotional text datasets and achieved an accuracy of 91.2 % accuracy on the popular IMDB movie reviews. NgramCNN is more accurate than similar shallow convolution networks or deeper recurrent networks that were used as baselines. In the future, we intent to generalize the architecture for state of the art results in sentiment analysis of variable-length texts.",
"title": ""
},
{
"docid": "d82897a2778b3ef6ddfe062f2c778451",
"text": "Inspired by the recent advances in deep learning, we propose a novel iterative belief propagation-convolutional neural network (BP-CNN) architecture to exploit noise correlation for channel decoding under correlated noise. The standard BP decoder is used to estimate the coded bits, followed by a CNN to remove the estimation errors of the BP decoder and obtain a more accurate estimation of the channel noise. Iterating between BP and CNN will gradually improve the decoding SNR and hence result in better decoding performance. To train a well-behaved CNN model, we define a new loss function which involves not only the accuracy of the noise estimation but also the normality test for the estimation errors, i.e., to measure how likely the estimation errors follow a Gaussian distribution. The introduction of the normality test to the CNN training shapes the residual noise distribution and further reduces the BER of the iterative decoding, compared to using the standard quadratic loss function. We carry out extensive experiments to analyze and verify the proposed framework.",
"title": ""
},
{
"docid": "73aa720bebc5f2fa1930930fb4185490",
"text": "A CMOS OTA-C notch filter for 50Hz interference was presented in this paper. The OTAs were working in weak inversion region in order to achieve ultra low transconductance and power consumptions. The circuits were designed using SMIC mixed-signal 0.18nm 1P6M process. The post-annotated simulation indicated that an attenuation of 47.2dB for power line interference and a 120pW consumption. The design achieved a dynamic range of 75.8dB and a THD of 0.1%, whilst the input signal was a 1 Hz 20mVpp sine wave.",
"title": ""
},
{
"docid": "d2d39b17b4047dd43e19ac4272b31c7e",
"text": "Lignocellulose is a term for plant materials that are composed of matrices of cellulose, hemicellulose, and lignin. Lignocellulose is a renewable feedstock for many industries. Lignocellulosic materials are used for the production of paper, fuels, and chemicals. Typically, industry focuses on transforming the polysaccharides present in lignocellulose into products resulting in the incomplete use of this resource. The materials that are not completely used make up the underutilized streams of materials that contain cellulose, hemicellulose, and lignin. These underutilized streams have potential for conversion into valuable products. Treatment of these lignocellulosic streams with bacteria, which specifically degrade lignocellulose through the action of enzymes, offers a low-energy and low-cost method for biodegradation and bioconversion. This review describes lignocellulosic streams and summarizes different aspects of biological treatments including the bacteria isolated from lignocellulose-containing environments and enzymes which may be used for bioconversion. The chemicals produced during bioconversion can be used for a variety of products including adhesives, plastics, resins, food additives, and petrochemical replacements.",
"title": ""
}
] |
scidocsrr
|
3d1eb27f60fcf8f1d45261a55471eb48
|
Network Intrusion Detection Using Hybrid Simplified Swarm Optimization and Random Forest Algorithm on Nsl-Kdd Dataset
|
[
{
"docid": "320c7c49dd4341cca532fa02965ef953",
"text": "During the last decade, anomaly detection has attracted the attention of many researchers to overcome the weakness of signature-based IDSs in detecting novel attacks, and KDDCUP'99 is the mostly widely used data set for the evaluation of these systems. Having conducted a statistical analysis on this data set, we found two important issues which highly affects the performance of evaluated systems, and results in a very poor evaluation of anomaly detection approaches. To solve these issues, we have proposed a new data set, NSL-KDD, which consists of selected records of the complete KDD data set and does not suffer from any of mentioned shortcomings.",
"title": ""
},
{
"docid": "11a2882124e64bd6b2def197d9dc811a",
"text": "1 Abstract— Clustering is the most acceptable technique to analyze the raw data. Clustering can help detect intrusions when our training data is unlabeled, as well as for detecting new and unknown types of intrusions. In this paper we are trying to analyze the NSL-KDD dataset using Simple K-Means clustering algorithm. We tried to cluster the dataset into normal and four of the major attack categories i.e. DoS, Probe, R2L, U2R. Experiments are performed in WEKA environment. Results are verified and validated using test dataset. Our main objective is to provide the complete analysis of NSL-KDD intrusion detection dataset.",
"title": ""
},
{
"docid": "7b05751aa3257263e7f1a8a6f1e2ff7e",
"text": "Intrusion Detection System (IDS) that turns to be a vital component to secure the network. The lack of regular updation, less capability to detect unknown attacks, high non adaptable false alarm rate, more consumption of network resources etc., makes IDS to compromise. This paper aims to classify the NSL-KDD dataset with respect to their metric data by using the best six data mining classification algorithms like J48, ID3, CART, Bayes Net, Naïve Bayes and SVM to find which algorithm will be able to offer more testing accuracy. NSL-KDD dataset has solved some of the inherent limitations of the available KDD’99 dataset. KeywordsIDS, KDD, Classification Algorithms, PCA etc.",
"title": ""
},
{
"docid": "305efd1823009fe79c9f8ff52ddb5724",
"text": "We explore the problem of classifying images by the object categories they contain in the case of a large number of object categories. To this end we combine three ingredients: (i) shape and appearance representations that support spatial pyramid matching over a region of interest. This generalizes the representation of Lazebnik et al., (2006) from an image to a region of interest (ROI), and from appearance (visual words) alone to appearance and local shape (edge distributions); (ii) automatic selection of the regions of interest in training. This provides a method of inhibiting background clutter and adding invariance to the object instance 's position; and (iii) the use of random forests (and random ferns) as a multi-way classifier. The advantage of such classifiers (over multi-way SVM for example) is the ease of training and testing. Results are reported for classification of the Caltech-101 and Caltech-256 data sets. We compare the performance of the random forest/ferns classifier with a benchmark multi-way SVM classifier. It is shown that selecting the ROI adds about 5% to the performance and, together with the other improvements, the result is about a 10% improvement over the state of the art for Caltech-256.",
"title": ""
},
{
"docid": "035b2296835a9c4a7805ba446760071e",
"text": "Intrusion detection is the process of monitoring the events occurring in a computer system or network and analyzing them for signs of intrusions, defined as attempts to compromise the confidentiality, integrity, availability, or to bypass the security mechanisms of a computer or network. This paper proposes the development of an Intrusion Detection Program (IDP) which could detect known attack patterns. An IDP does not eliminate the use of any preventive mechanism but it works as the last defensive mechanism in securing the system. Three variants of genetic programming techniques namely Linear Genetic Programming (LGP), Multi-Expression Programming (MEP) and Gene Expression Programming (GEP) were evaluated to design IDP. Several indices are used for comparisons and a detailed analysis of MEP technique is provided. Empirical results reveal that genetic programming technique could play a major role in developing IDP, which are light weight and accurate when compared to some of the conventional intrusion detection systems based on machine learning paradigms.",
"title": ""
}
] |
[
{
"docid": "2518564949f7488a7f01dff74e3b6e2d",
"text": "Although it is commonly believed that women are kinder and more cooperative than men, there is conflicting evidence for this assertion. Current theories of sex differences in social behavior suggest that it may be useful to examine in what situations men and women are likely to differ in cooperation. Here, we derive predictions from both sociocultural and evolutionary perspectives on context-specific sex differences in cooperation, and we conduct a unique meta-analytic study of 272 effect sizes-sampled across 50 years of research-on social dilemmas to examine several potential moderators. The overall average effect size is not statistically different from zero (d = -0.05), suggesting that men and women do not differ in their overall amounts of cooperation. However, the association between sex and cooperation is moderated by several key features of the social context: Male-male interactions are more cooperative than female-female interactions (d = 0.16), yet women cooperate more than men in mixed-sex interactions (d = -0.22). In repeated interactions, men are more cooperative than women. Women were more cooperative than men in larger groups and in more recent studies, but these differences disappeared after statistically controlling for several study characteristics. We discuss these results in the context of both sociocultural and evolutionary theories of sex differences, stress the need for an integrated biosocial approach, and outline directions for future research.",
"title": ""
},
{
"docid": "6d411b994567b18ea8ab9c2b9622e7f5",
"text": "Nearly half a century ago, psychiatrist John Bowlby proposed that the instinctual behavioral system that underpins an infant’s attachment to his or her mother is accompanied by ‘‘internal working models’’ of the social world—models based on the infant’s own experience with his or her caregiver (Bowlby, 1958, 1969/1982). These mental models were thought to mediate, in part, the ability of an infant to use the caregiver as a buffer against the stresses of life, as well as the later development of important self-regulatory and social skills. Hundreds of studies now testify to the impact of caregivers’ behavior on infants’ behavior and development: Infants who most easily seek and accept support from their parents are considered secure in their attachments and are more likely to have received sensitive and responsive caregiving than insecure infants; over time, they display a variety of socioemotional advantages over insecure infants (Cassidy & Shaver, 1999). Research has also shown that, at least in older children and adults, individual differences in the security of attachment are indeed related to the individual’s representations of social relations (Bretherton & Munholland, 1999). Yet no study has ever directly assessed internal working models of attachment in infancy. In the present study, we sought to do so.",
"title": ""
},
{
"docid": "4fa1054bd78a624f68a0f62840542457",
"text": "The ReWalkTM powered exoskeleton assists thoracic level motor complete spinal cord injury patients who are paralyzed to walk again with an independent, functional, upright, reciprocating gait. We completed an evaluation of twelve such individuals with promising results. All subjects met basic criteria to be able to use the ReWalkTM - including items such as sufficient bone mineral density, leg passive range of motion, strength, body size and weight limits. All subjects received approximately the same number of training sessions. However there was a wide distribution in walking ability. Walking velocities ranged from under 0.1m/s to approximately 0.5m/s. This variability was not completely explained by injury level The remaining sources of that variability are not clear at present. This paper reports our preliminary analysis into how the walking kinematics differed across the subjects - as a first step to understand the possible contribution to the velocity range and determine if the subjects who did not walk as well could be taught to improve by mimicking the better walkers.",
"title": ""
},
{
"docid": "cfea41d4bc6580c91ee27201360f8e17",
"text": "It is common sense that cloud-native applications (CNA) are intentionally designed for the cloud. Although this understanding can be broadly used it does not guide and explain what a cloud-native application exactly is. The term ”cloud-native” was used quite frequently in birthday times of cloud computing (2006) which seems somehow obvious nowadays. But the term disappeared almost completely. Suddenly and in the last years the term is used again more and more frequently and shows increasing momentum. This paper summarizes the outcomes of a systematic mapping study analyzing research papers covering ”cloud-native” topics, research questions and engineering methodologies. We summarize research focuses and trends dealing with cloud-native application engineering approaches. Furthermore, we provide a definition for the term ”cloud-native application” which takes all findings, insights of analyzed publications and already existing and well-defined terminology into account.",
"title": ""
},
{
"docid": "73b150681d7de50ada8e046a3027085f",
"text": "We introduce a new model, the Recurrent Entity Network (EntNet). It is equipped with a dynamic long-term memory which allows it to maintain and update a representation of the state of the world as it receives new data. For language understanding tasks, it can reason on-the-fly as it reads text, not just when it is required to answer a question or respond as is the case for a Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed size memory and can learn to perform location and content-based read and write operations. However, unlike those models it has a simple parallel architecture in which several memory locations can be updated simultaneously. The EntNet sets a new state-of-the-art on the bAbI tasks, and is the first method to solve all the tasks in the 10k training examples setting. We also demonstrate that it can solve a reasoning task which requires a large number of supporting facts, which other methods are not able to solve, and can generalize past its training horizon. It can also be practically used on large scale datasets such as Children’s Book Test, where it obtains competitive performance, reading the story in a single pass.",
"title": ""
},
{
"docid": "290796519b7757ce7ec0bf4d37290eed",
"text": "A freely available English thesaurus of related words is presented that has been automatically compiled by analyzing the distributional similarities of words in the British National Corpus. The quality of the results has been evaluated by comparison with human judgments as obtained from non-native and native speakers of English who were asked to provide rankings of word similarities. According to this measure, the results generated by our system are better than the judgments of the non-native speakers and come close to the native speakers’ performance. An advantage of our approach is that it does not require syntactic parsing and therefore can be more easily adapted to other languages. As an example, a similar thesaurus for German has already been completed.",
"title": ""
},
{
"docid": "10a33d5a75419519ce1177f6711b749c",
"text": "Perianal fistulizing Crohn's disease has a major negative effect on patient quality of life and is a predictor of poor long-term outcomes. Factors involved in the pathogenesis of perianal fistulizing Crohn's disease include an increased production of transforming growth factor β, TNF and IL-13 in the inflammatory infiltrate that induce epithelial-to-mesenchymal transition and upregulation of matrix metalloproteinases, leading to tissue remodelling and fistula formation. Care of patients with perianal Crohn's disease requires a multidisciplinary approach. A complete assessment of fistula characteristics is the basis for optimal management and must include the clinical evaluation of fistula openings, endoscopic assessment of the presence of proctitis, and MRI to determine the anatomy of fistula tracts and presence of abscesses. Local injection of mesenchymal stem cells can induce remission in patients not responding to medical therapies, or to avoid the exposure to systemic immunosuppression in patients naive to biologics in the absence of active luminal disease. Surgery is still required in a high proportion of patients and should not be delayed when criteria for drug failure is met. In this Review, we provide an up-to-date overview on the pathogenesis and diagnosis of fistulizing Crohn's disease, as well as therapeutic strategies.",
"title": ""
},
{
"docid": "872f556cb441d9c8976e2bf03ebd62ee",
"text": "Monitoring is an issue of primary concern in current and next generation networked systems. For ex, the objective of sensor networks is to monitor their surroundings for a variety of different applications like atmospheric conditions, wildlife behavior, and troop movements among others. Similarly, monitoring in data networks is critical not only for accounting and management, but also for detecting anomalies and attacks. Such monitoring applications are inherently continuous and distributed, and must be designed to minimize the communication overhead that they introduce. In this context we introduce and study a fundamental class of problems called \"thresholded counts\" where we must return the aggregate frequency count of an event that is continuously monitored by distributed nodes with a user-specified accuracy whenever the actual count exceeds a given threshold value.In this paper we propose to address the problem of thresholded counts by setting local thresholds at each monitoring node and initiating communication only when the locally observed data exceeds these local thresholds. We explore algorithms in two categories: static and adaptive thresholds. In the static case, we consider thresholds based on a linear combination of two alternate strategies, and show that there exists an optimal blend of the two strategies that results in minimum communication overhead. We further show that this optimal blend can be found using a steepest descent search. In the adaptive case, we propose algorithms that adjust the local thresholds based on the observed distributions of updated information. We use extensive simulations not only to verify the accuracy of our algorithms and validate our theoretical results, but also to evaluate the performance of our algorithms. We find that both approaches yield significant savings over the naive approach of centralized processing.",
"title": ""
},
{
"docid": "da4699d1e358bebc822b059b568916a8",
"text": "An InterCloud is an interconnected global “cloud of clouds” that enables each cloud to tap into resources of other clouds. This is the earliest work to devise an agent-based InterCloud economic model for analyzing consumer-to-cloud and cloud-to-cloud interactions. While economic encounters between consumers and cloud providers are modeled as a many-to-many negotiation, economic encounters among clouds are modeled as a coalition game. To bolster many-to-many consumer-to-cloud negotiations, this work devises a novel interaction protocol and a novel negotiation strategy that is characterized by both 1) adaptive concession rate (ACR) and 2) minimally sufficient concession (MSC). Mathematical proofs show that agents adopting the ACR-MSC strategy negotiate optimally because they make minimum amounts of concession. By automatically controlling concession rates, empirical results show that the ACR-MSC strategy is efficient because it achieves significantly higher utilities than the fixed-concession-rate time-dependent strategy. To facilitate the formation of InterCloud coalitions, this work devises a novel four-stage cloud-to-cloud interaction protocol and a set of novel strategies for InterCloud agents. Mathematical proofs show that these InterCloud coalition formation strategies 1) converge to a subgame perfect equilibrium and 2) result in every cloud agent in an InterCloud coalition receiving a payoff that is equal to its Shapley value.",
"title": ""
},
{
"docid": "838bd8a38f9d67d768a34183c72da07d",
"text": "Jacobsen syndrome (JS), a rare disorder with multiple dysmorphic features, is caused by the terminal deletion of chromosome 11q. Typical features include mild to moderate psychomotor retardation, trigonocephaly, facial dysmorphism, cardiac defects, and thrombocytopenia, though none of these features are invariably present. The estimated occurrence of JS is about 1/100,000 births. The female/male ratio is 2:1. The patient admitted to our clinic at 3.5 years of age with a cardiac murmur and facial anomalies. Facial anomalies included trigonocephaly with bulging forehead, hypertelorism, telecanthus, downward slanting palpebral fissures, and a carp-shaped mouth. The patient also had strabismus. An echocardiogram demonstrated perimembranous aneurysmatic ventricular septal defect and a secundum atrial defect. The patient was <3rd percentile for height and weight and showed some developmental delay. Magnetic resonance imaging (MRI) showed hyperintensive gliotic signal changes in periventricular cerebral white matter, and leukodystrophy was suspected. Chromosomal analysis of the patient showed terminal deletion of chromosome 11. The karyotype was designated 46, XX, del(11) (q24.1). A review of published reports shows that the severity of the observed clinical abnormalities in patients with JS is not clearly correlated with the extent of the deletion. Most of the patients with JS had short stature, and some of them had documented growth hormone deficiency, or central or primary hypothyroidism. In patients with the classical phenotype, the diagnosis is suspected on the basis of clinical findings: intellectual disability, facial dysmorphic features and thrombocytopenia. The diagnosis must be confirmed by cytogenetic analysis. For patients who survive the neonatal period and infancy, the life expectancy remains unknown. In this report, we describe a patient with the clinical features of JS without thrombocytopenia. To our knowledge, this is the first case reported from Turkey.",
"title": ""
},
{
"docid": "d7635b011cef61fe6487a823c0d09301",
"text": "The present letter describes the design of an energy harvesting circuit on a one-sided directional flexible planar antenna. The circuit is composed of a flexible antenna with an impedance matching circuit, a resonant circuit, and a booster circuit for converting and boosting radio frequency power into a dc voltage. The proposed one-sided directional flexible antenna has a bottom floating metal layer that enables one-sided radiation and easy connection of the booster circuit to the metal layer. The simulated output dc voltage is 2.89 V for an input of 100 mV and a 50 Ω power source at 900 MHz, and power efficiency is 58.7% for 1.0 × 107 Ω load resistance.",
"title": ""
},
{
"docid": "57e71550633cdb4a37d3fa270f0ad3a7",
"text": "Classifiers based on sparse representations have recently been shown to provide excellent results in many visual recognition and classification tasks. However, the high cost of computing sparse representations at test time is a major obstacle that limits the applicability of these methods in large-scale problems, or in scenarios where computational power is restricted. We consider in this paper a simple yet efficient alternative to sparse coding for feature extraction. We study a classification scheme that applies the soft-thresholding nonlinear mapping in a dictionary, followed by a linear classifier. A novel supervised dictionary learning algorithm tailored for this low complexity classification architecture is proposed. The dictionary learning problem, which jointly learns the dictionary and linear classifier, is cast as a difference of convex (DC) program and solved efficiently with an iterative DC solver. We conduct experiments on several datasets, and show that our learning algorithm that leverages the structure of the classification problem outperforms generic learning procedures. Our simple classifier based on soft-thresholding also competes with the recent sparse coding classifiers, when the dictionary is learned appropriately. The adopted classification scheme further requires less computational time at the testing stage, compared to other classifiers. The proposed scheme shows the potential of the adequately trained soft-thresholding mapping for classification and paves the way towards the development of very efficient classification methods for vision problems.",
"title": ""
},
{
"docid": "88b0d223ccff042d20148abf79599102",
"text": "Lack of performance when it comes to continual learning over non-stationary distributions of data remains a major challenge in scaling neural network learning to more human realistic settings. In this work we propose a new conceptualization of the continual learning problem in terms of a trade-off between transfer and interference. We then propose a new algorithm, Meta-Experience Replay (MER), that directly exploits this view by combining experience replay with optimization based meta-learning. This method learns parameters that make interference based on future gradients less likely and transfer based on future gradients more likely. We conduct experiments across continual lifelong supervised learning benchmarks and non-stationary reinforcement learning environments demonstrating that our approach consistently outperforms recently proposed baselines for continual learning. Our experiments show that the gap between the performance of MER and baseline algorithms grows both as the environment gets more non-stationary and as the fraction of the total experiences stored gets smaller. 1 SOLVING THE CONTINUAL LEARNING PROBLEM A long-held goal of AI is to build agents capable of operating autonomously for long periods. Such agents must incrementally learn and adapt to a changing environment while maintaining memories of what they have learned before, a setting known as lifelong learning (Thrun, 1994; 1996). In this paper we explore a variant called continual learning (Ring, 1994; Lopez-Paz & Ranzato, 2017). Continual learning assumes that the learner is exposed to a sequence of tasks, where each task is a sequence of experiences from the same distribution. We would like to develop a solution in this setting by discovering notions of tasks without supervision while learning incrementally after every experience. This is challenging because in standard offline single task and multi-task learning (Caruana, 1997) it is implicitly assumed that the data is drawn from an i.i.d. stationary distribution. Neural networks tend to struggle whenever this is not the case (Goodrich, 2015). Over the years, solutions to the continual learning problem have been largely driven by prominent conceptualizations of the issues faced by neural networks. One popular view is catastrophic forgetting (interference) (McCloskey & Cohen, 1989), in which the primary concern is the lack of stability in neural networks, and the main solution is to limit the extent of weight sharing across experiences by focusing on preserving past knowledge (Kirkpatrick et al., 2017; Zenke et al., 2017; Lee et al., 2017). Another popular and more complex conceptualization is the stability-plasticity dilemma (Carpenter & Grossberg, 1987). In this view, the primary concern is the balance between network stability (to preserve past knowledge) and plasticity (to rapidly learn the current experience). Recently proposed techniques focus on balancing limited weight sharing with some mechanism to ensure fast learning (Li & Hoiem, 2016; Riemer et al., 2016; Lopez-Paz & Ranzato, 2017; Rosenbaum et al., 2018; Lee et al., 2018; Serrà et al., 2018). In this paper, we extend this view by 1 ar X iv :1 81 0. 11 91 0v 1 [ cs .L G ] 2 9 O ct 2 01 8 Published as a conference paper at ICLR 2019 Stability – Plasticity Dilemma Stability – Plasticity Dilemma A. Transfer – Interference Trade-off Transfer Old Learning Current Learning Future Learning Sharing Sharing B. Transfer C. Interference ∂Li ∂θ ∂Lj ∂θ",
"title": ""
},
{
"docid": "0307912d034d4cbfef7cafb79ea9f9b3",
"text": "This survey focuses on recognition performed by matching models of the three-dimensional shape of the face, either alone or in combination with matching corresponding two-dimensional intensity images. Research trends to date are summarized, and challenges confronting the development of more accurate three-dimensional face recognition are identified. These challenges include the need for better sensors, improved recognition algorithms, and more rigorous experimental methodology. 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "b3f2c1736174eda75f7eedb3cee2a729",
"text": "Stochastic local search (SLS) algorithms are well known for their ability to efficiently find models of random instances of the Boolean satisfiability (SAT) problem. One of the most famous SLS algorithms for SAT is WalkSAT, which is an initial algorithm that has wide influence and performs very well on random 3-SAT instances. However, the performance of WalkSAT on random k-SAT instances with k > 3 lags far behind. Indeed, there are limited works on improving SLS algorithms for such instances. This work takes a good step toward this direction. We propose a novel concept namely multilevel make. Based on this concept, we design a scoring function called linear make, which is utilized to break ties in WalkSAT, leading to a new algorithm called WalkSATlm. Our experimental results show that WalkSATlm improves WalkSAT by orders of magnitude on random k-SAT instances with k > 3 near the phase transition. Additionally, we propose an efficient implementation for WalkSATlm, which leads to a speedup of 100%. We also give some insights on different forms of linear make functions, and show the limitation of the linear make function on random 3-SAT through theoretical analysis.",
"title": ""
},
{
"docid": "b4a784bb8eb714afc86f1eee4f0a20ed",
"text": "Warthin tumor (papillary cystadenoma lymphomatosum) is a benign salivary gland tumor involving almost exclusively the parotid gland. The lip is a very unusual location for this type of tumor, which develops only rarely in minor salivary glands. The case of 42-year-old woman with Warthin tumor arising in minor salivary glands of the upper lip is reported.",
"title": ""
},
{
"docid": "86f25f09b801d28ce32f1257a39ddd44",
"text": "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data-center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks that proves robust to the unbalanced and non-IID data distributions that naturally arise. This method allows high-quality models to be trained in relatively few rounds of communication, the principal constraint for federated learning. The key insight is that despite the non-convex loss functions we optimize, parameter averaging over updates from multiple clients produces surprisingly good results, for example decreasing the communication needed to train an LSTM language model by two orders of magnitude.",
"title": ""
},
{
"docid": "7e647cac9417bf70acd8c0b4ee0faa9b",
"text": "Global Navigation Satellite Systems (GNSS) are applicable to deliver train locations in real time. This train localization function should comply with railway functional safety standards; thus, the GNSS performance needs to be evaluated in consistent with railway EN 50126 standard [Reliability, Availability, Maintainability, and Safety (RAMS)]. This paper demonstrates the performance of the GNSS receiver for train localization. First, the GNSS performance and railway RAMS properties are compared by definitions. Second, the GNSS receiver measurements are categorized into three states (i.e., up, degraded, and faulty states). The relations between the states are illustrated in a stochastic Petri net model. Finally, the performance properties are evaluated using real data collected on the railway track in High Tatra Mountains in Slovakia. The property evaluation is based on the definitions represented by the modeled states.",
"title": ""
},
{
"docid": "1347e22f1b3afe4ce6cd40f25770a465",
"text": "Contextual bandit algorithms provide principled online learning solutions to find optimal trade-offs between exploration and exploitation with companion side-information. They have been extensively used in many important practical scenarios, such as display advertising and content recommendation. A common practice estimates the unknown bandit parameters pertaining to each user independently. This unfortunately ignores dependency among users and thus leads to suboptimal solutions, especially for the applications that have strong social components.\n In this paper, we develop a collaborative contextual bandit algorithm, in which the adjacency graph among users is leveraged to share context and payoffs among neighboring users while online updating. We rigorously prove an improved upper regret bound of the proposed collaborative bandit algorithm comparing to conventional independent bandit algorithms. Extensive experiments on both synthetic and three large-scale real-world datasets verified the improvement of our proposed algorithm against several state-of-the-art contextual bandit algorithms.",
"title": ""
},
{
"docid": "854bd77e534e0bb53953edb708c867b1",
"text": "About 60-GHz millimeter wave (mmWave) unlicensed frequency band is considered as a key enabler for future multi-Gbps WLANs. IEEE 802.11ad (WiGig) standard has been ratified for 60-GHz wireless local area networks (WLANs) by only considering the use case of peer to peer (P2P) communication coordinated by a single WiGig access point (AP). However, due to 60-GHz fragile channel, multiple number of WiGig APs should be installed to fully cover a typical target environment. Nevertheless, the exhaustive search beamforming training and the maximum received power-based autonomous users association prevent WiGig APs from establishing optimal WiGig concurrent links using random access. In this paper, we formulate the problem of WiGig concurrent transmissions in random access scenarios as an optimization problem, and then we propose a greedy scheme based on (2.4/5 GHz) Wi-Fi/(60 GHz) WiGig coordination to find out a suboptimal solution for it. In the proposed WLAN, the wide coverage Wi-Fi band is used to provide the control signalling required for launching the high date rate WiGig concurrent links. Besides, statistical learning using Wi-Fi fingerprinting is utilized to estimate the suboptimal candidate AP along with its suboptimal beam direction for establishing the WiGig concurrent link without causing interference to the existing WiGig data links while maximizing the total system throughput. Numerical analysis confirms the high impact of the proposed Wi-Fi/WiGig coordinated WLAN.",
"title": ""
}
] |
scidocsrr
|
0c56ff755afba097645800990f749c55
|
Design of a Wideband Planar Printed Quasi-Yagi Antenna Using Stepped Connection Structure
|
[
{
"docid": "6661cc34d65bae4b09d7c236d0f5400a",
"text": "In this letter, we present a novel coplanar waveguide fed quasi-Yagi antenna with broad bandwidth. The uniqueness of this design is due to its simple feed selection and despite this, its achievable bandwidth. The 10 dB return loss bandwidth of the antenna is 44% covering X-band. The antenna is realized on a high dielectric constant substrate and is compatible with microstrip circuitry and active devices. The gain of the antenna is 7.4 dBi, the front-to-back ratio is 15 dB and the nominal efficiency of the radiator is 95%.",
"title": ""
},
{
"docid": "5f40ac6afd39e3d2fcbc5341bc3af7b4",
"text": "We present a modified quasi-Yagi antenna for use in WLAN access points. The antenna uses a new microstrip-to-coplanar strip (CPS) transition, consisting of a tapered microstrip input, T-junction, conventional 50-ohm microstrip line, and three artificial transmission line (ATL) sections. The design concept, mode conversion scheme, and simulated and experimental S-parameters of the transition are discussed first. It features a compact size, and a 3dB-insertion loss bandwidth of 78.6%. Based on the transition, a modified quasi-Yagi antenna is demonstrated. In addition to the new transition, the antenna consists of a CPS feed line, a meandered dipole, and a parasitic element. The meandered dipole can substantially increase to the front-to-back ratio of the antenna without sacrificing the operating bandwidth. The parasitic element is placed in close proximity to the driven element to improve impedance bandwidth and radiation characteristics. The antenna exhibits excellent end-fire radiation with a front-to-back ratio of greater than 15 dB. It features a moderate gain around 4 dBi, and a fractional bandwidth of 38.3%. We carefully investigate the concept, methodology, and experimental results of the proposed antenna.",
"title": ""
}
] |
[
{
"docid": "d84c8302578391c909b2ac261c93c1fb",
"text": "This short communication describes a case of diprosopiasis in Trachemys scripta scripta imported from Florida (USA) and farmed for about 4 months by a private owner in Palermo, Sicily, Italy. The water turtle showed the morphological and radiological features characterizing such deformity. This communication aims to advance the knowledge of the reptile's congenital anomalies and suggests the need for more detailed investigations to better understand its pathogenesis.",
"title": ""
},
{
"docid": "b04ba2e942121b7a32451f0b0f690553",
"text": "Due to the growing number of vehicles on the roads worldwide, road traffic accidents are currently recognized as a major public safety problem. In this context, connected vehicles are considered as the key enabling technology to improve road safety and to foster the emergence of next generation cooperative intelligent transport systems (ITS). Through the use of wireless communication technologies, the deployment of ITS will enable vehicles to autonomously communicate with other nearby vehicles and roadside infrastructures and will open the door for a wide range of novel road safety and driver assistive applications. However, connecting wireless-enabled vehicles to external entities can make ITS applications vulnerable to various security threats, thus impacting the safety of drivers. This article reviews the current research challenges and opportunities related to the development of secure and safe ITS applications. It first explores the architecture and main characteristics of ITS systems and surveys the key enabling standards and projects. Then, various ITS security threats are analyzed and classified, along with their corresponding cryptographic countermeasures. Finally, a detailed ITS safety application case study is analyzed and evaluated in light of the European ETSI TC ITS standard. An experimental test-bed is presented, and several elliptic curve digital signature algorithms (ECDSA) are benchmarked for signing and verifying ITS safety messages. To conclude, lessons learned, open research challenges and opportunities are discussed. Electronics 2015, 4 381",
"title": ""
},
{
"docid": "19bb054fb4c6398df99a84a382354d59",
"text": "Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms. We take the principled view of distributionally robust optimization, which guarantees performance under adversarial input perturbations. By considering a Lagrangian penalty formulation of perturbation of the underlying data distribution in a Wasserstein ball, we provide a training procedure that augments model parameter updates with worst-case perturbations of training data. For smooth losses, our procedure provably achieves moderate levels of robustness with little computational or statistical cost relative to empirical risk minimization. Furthermore, our statistical guarantees allow us to efficiently certify robustness for the population loss. We match or outperform heuristic approaches on supervised and reinforcement learning tasks.",
"title": ""
},
{
"docid": "48c28572e5eafda1598a422fa1256569",
"text": "Future power networks will be characterized by safe and reliable functionality against physical and cyber attacks. This paper proposes a unified framework and advanced monitoring procedures to detect and identify network components malfunction or measurements corruption caused by an omniscient adversary. We model a power system under cyber-physical attack as a linear time-invariant descriptor system with unknown inputs. Our attack model generalizes the prototypical stealth, (dynamic) false-data injection and replay attacks. We characterize the fundamental limitations of both static and dynamic procedures for attack detection and identification. Additionally, we design provably-correct (dynamic) detection and identification procedures based on tools from geometric control theory. Finally, we illustrate the effectiveness of our method through a comparison with existing (static) detection algorithms, and through a numerical study.",
"title": ""
},
{
"docid": "403d54a5672037cb8adb503405845bbd",
"text": "This paper introduces adaptor grammars, a class of probabil istic models of language that generalize probabilistic context-free grammar s (PCFGs). Adaptor grammars augment the probabilistic rules of PCFGs with “ada ptors” that can induce dependencies among successive uses. With a particular choice of adaptor, based on the Pitman-Yor process, nonparametric Bayesian mo dels f language using Dirichlet processes and hierarchical Dirichlet proc esses can be written as simple grammars. We present a general-purpose inference al gorithm for adaptor grammars, making it easy to define and use such models, and ill ustrate how several existing nonparametric Bayesian models can be expressed wi thin this framework.",
"title": ""
},
{
"docid": "f5d8c506c9f25bff429cea1ed4c84089",
"text": "Therabot is a robotic therapy support system designed to supplement a therapist and to provide support to patients diagnosed with conditions associated with trauma and adverse events. The system takes on the form factor of a floppy-eared dog which fits in a person»s lap and is designed for patients to provide support and encouragement for home therapy exercises and in counseling.",
"title": ""
},
{
"docid": "4249c95fcd869434312524f05c013c55",
"text": "The demands on visual recognition systems do not end with the complexity offered by current large-scale image datasets, such as ImageNet. In consequence, we need curious and continuously learning algorithms that actively acquire knowledge about semantic concepts which are present in available unlabeled data. As a step towards this goal, we show how to perform continuous active learning and exploration, where an algorithm actively selects relevant batches of unlabeled examples for annotation. These examples could either belong to already known or to yet undiscovered classes. Our algorithm is based on a new generalization of the Expected Model Output Change principle for deep architectures and is especially tailored to deep neural networks. Furthermore, we show easy-to-implement approximations that yield efficient techniques for active selection. Empirical experiments show that our method outperforms currently used heuristics.",
"title": ""
},
{
"docid": "e95fa624bb3fd7ea45650213088a43b0",
"text": "In recent years, much research has been conducted on image super-resolution (SR). To the best of our knowledge, however, few SR methods were concerned with compressed images. The SR of compressed images is a challenging task due to the complicated compression artifacts, while many images suffer from them in practice. The intuitive solution for this difficult task is to decouple it into two sequential but independent subproblems, i.e., compression artifacts reduction (CAR) and SR. Nevertheless, some useful details may be removed in CAR stage, which is contrary to the goal of SR and makes the SR stage more challenging. In this paper, an end-to-end trainable deep convolutional neural network is designed to perform SR on compressed images (CISRDCNN), which reduces compression artifacts and improves image resolution jointly. Experiments on compressed images produced by JPEG (we take the JPEG as an example in this paper) demonstrate that the proposed CISRDCNN yields state-of-the-art SR performance on commonly used test images and imagesets. The results of CISRDCNN on real low quality web images are also very impressive, with obvious quality enhancement. Further, we explore the application of the proposed SR method in low bit-rate image coding, leading to better rate-distortion performance than JPEG.",
"title": ""
},
{
"docid": "33817271f39357c4aef254ac96aab480",
"text": "Evolutionary computation methods have been successfully applied to neural networks since two decades ago, while those methods cannot scale well to the modern deep neural networks due to the complicated architectures and large quantities of connection weights. In this paper, we propose a new method using genetic algorithms for evolving the architectures and connection weight initialization values of a deep convolutional neural network to address image classification problems. In the proposed algorithm, an efficient variable-length gene encoding strategy is designed to represent the different building blocks and the unpredictable optimal depth in convolutional neural networks. In addition, a new representation scheme is developed for effectively initializing connection weights of deep convolutional neural networks, which is expected to avoid networks getting stuck into local minima which is typically a major issue in the backward gradient-based optimization. Furthermore, a novel fitness evaluation method is proposed to speed up the heuristic search with substantially less computational resource. The proposed algorithm is examined and compared with 22 existing algorithms on nine widely used image classification tasks, including the stateof-the-art methods. The experimental results demonstrate the remarkable superiority of the proposed algorithm over the stateof-the-art algorithms in terms of classification error rate and the number of parameters (weights).",
"title": ""
},
{
"docid": "7db989219c3c15aa90a86df84b134473",
"text": "INTRODUCTION\nResearch indicated that: (i) vaginal orgasm (induced by penile-vaginal intercourse [PVI] without concurrent clitoral masturbation) consistency (vaginal orgasm consistency [VOC]; percentage of PVI occasions resulting in vaginal orgasm) is associated with mental attention to vaginal sensations during PVI, preference for a longer penis, and indices of psychological and physiological functioning, and (ii) clitoral, distal vaginal, and deep vaginal/cervical stimulation project via different peripheral nerves to different brain regions.\n\n\nAIMS\nThe aim of this study is to examine the association of VOC with: (i) sexual arousability perceived from deep vaginal stimulation (compared with middle and shallow vaginal stimulation and clitoral stimulation), and (ii) whether vaginal stimulation was present during the woman's first masturbation.\n\n\nMETHODS\nA sample of 75 Czech women (aged 18-36), provided details of recent VOC, site of genital stimulation during first masturbation, and their recent sexual arousability from the four genital sites.\n\n\nMAIN OUTCOME MEASURES\nThe association of VOC with: (i) sexual arousability perceived from the four genital sites and (ii) involvement of vaginal stimulation in first-ever masturbation.\n\n\nRESULTS\nVOC was associated with greater sexual arousability from deep vaginal stimulation but not with sexual arousability from other genital sites. VOC was also associated with women's first masturbation incorporating (or being exclusively) vaginal stimulation.\n\n\nCONCLUSIONS\nThe findings suggest (i) stimulating the vagina during early life masturbation might indicate individual readiness for developing greater vaginal responsiveness, leading to adult greater VOC, and (ii) current sensitivity of deep vaginal and cervical regions is associated with VOC, which might be due to some combination of different neurophysiological projections of the deep regions and their greater responsiveness to penile stimulation.",
"title": ""
},
{
"docid": "28a4fd94ba02c70d6781ae38bf35ca5a",
"text": "Zero-shot learning (ZSL) highly depends on a good semantic embedding to connect the seen and unseen classes. Recently, distributed word embeddings (DWE) pre-trained from large text corpus have become a popular choice to draw such a connection. Compared with human defined attributes, DWEs are more scalable and easier to obtain. However, they are designed to reflect semantic similarity rather than visual similarity and thus using them in ZSL often leads to inferior performance. To overcome this visual-semantic discrepancy, this work proposes an objective function to re-align the distributed word embeddings with visual information by learning a neural network to map it into a new representation called visually aligned word embedding (VAWE). Thus the neighbourhood structure of VAWEs becomes similar to that in the visual domain. Note that in this work we do not design a ZSL method that projects the visual features and semantic embeddings onto a shared space but just impose a requirement on the structure of the mapped word embeddings. This strategy allows the learned VAWE to generalize to various ZSL methods and visual features. As evaluated via four state-of-the-art ZSL methods on four benchmark datasets, the VAWE exhibit consistent performance improvement.",
"title": ""
},
{
"docid": "17c12cc27cd66d0289fe3baa9ab4124d",
"text": "In this paper we review classification algorithms used to design brain-computer interface (BCI) systems based on electroencephalography (EEG). We briefly present the commonly employed algorithms and describe their critical properties. Based on the literature, we compare them in terms of performance and provide guidelines to choose the suitable classification algorithm(s) for a specific BCI.",
"title": ""
},
{
"docid": "59209ea750988390be9b0d0207ec06bd",
"text": "In diesem Kapitel wird Kognitive Modellierung als ein interdisziplinäres Forschungsgebiet vorgestellt, das sich mit der Entwicklung von computerimplementierbaren Modellen beschäftigt, in denen wesentliche Eigenschaften des Wissens und der Informationsverarbeitung beim Menschen abgebildet sind. Nach einem allgemeinen Überblick über Zielsetzungen, Methoden und Vorgehensweisen, die sich auf den Gebieten der kognitiven Psychologie und der Künstlichen Intelligenz entwickelt haben, sowie der Darstellung eines Theorierahmens werden vier Modelle detaillierter besprochen: In einem I>crnmodcll, das in einem Intelligenten Tutoriellen System Anwendung findet und in einem Performanz-Modell der MenschComputer-Interaktion wird menschliches Handlungswissen beschrieben. Die beiden anderen Modelle zum Textverstehen und zur flexiblen Gedächtnisorganisation beziehen sich demgegenüber vor allem auf den Aufbau und Abruf deklarativen Wissens. Abschließend werden die vorgestellten Modelle in die historische Entwicklung eingeordnet. Möglichkeiten und Grenzen der Kognitiven Modellierung werden hinsichtlich interessant erscheinender Weiterentwicklungen diskutiert. 1. Einleitung und Überblick Das Gebiet der Künstlichen Intelligenz wird meist unter Bezugnahme auf ursprünglich nur beim Menschen beobachtetes Verhalten definiert. So wird die Künstliche Intelligenz oder KI als die Erforschung von jenen Verhaltensabläufen verstanden, deren Planung und Durchführung Intelligenz erfordert. Der Begriff Intelligenz wird dabei unter Bezugnahme auf den Menschen vage abgegrenzt |Siekmann_83,Winston_84]. Da auch Teilbereiche der Psychologie, vor allem die Kognitive Psychologie, Intelligenz und Denken untersuchen, könnte man vermuten, daß die KI-Forschung als die jüngere Wissenschaft direkt auf älteren psychologischen Erkenntnissen aufbauen würde. Obwohl K I und kognitive Psychologie einen ähnlichen Gegenstandsbereich erforschen, gibt es jedoch auch vielschichtige Unterschiede zwischen beiden Disziplinen. Daraus läßt sich möglicherweise erklären, daß die beiden Fächer bislang nicht in dem Maß interagiert haben, wie dies wünschenswert wäre. 1.1 Unterschiede zwischen KI und Kognitiver Psychologie Auch wenn keine klare Grenze zwischen den beiden Gebieten gezogen werden kann, so müssen wir doch feststellen, daß K I nicht gleich Kognitiver Psychologie ist. Wichtige Unterschiede bestehen in den primären Forschungszielen und Methoden, sowie in der Interpretation von Computermodellen (computational models). Zielsetzungen und Methoden Während die K I eine Modellierung von Kompetenzen anstrebt, erforscht die Psychologie die Performanz des Menschen. • Die K I sucht nach Verfahren, die zu einem intelligenten Verhalten eines Computers fuhren. Beispielsweise sollte ein Computer natürliche Sprache verstehen, neue Begriffe lernen können oder Expertenverhalten zeigen oder unterstützen. Die K I versucht also, intelligente Systeme zu entwickeln und deckt dabei mögliche Prinzipien von Intelligenz auf, indem sie Datenstrukturen und Algorithmen spezifiziert, die intelligentes Verhalten erwarten lassen. Entscheidend ist dabei, daß eine intelligente Leistung im Sinne eines Turing-Tests erbracht wird: Eine Implementierung des Algorithmus soll für eine Menge spezifizierter Eingaben (z. B . gesprochene Sprache) innerhalb angemessener Zeit die vergleichbare Verarbeitungsleistung erbringen wie der Mensch. Der beobachtete Systemoutput von Mensch und Computer wäre also oberflächlich betrachtet nicht voneinander unterscheidbar [Turing_63]. 
Ob die dabei im Computer verwendeten Strukturen, Prozesse und Heuristiken denen beim Menschen ähneln, spielt in der K I keine primäre Rolle. • Die Kognitive Psychologie hingegen untersucht eher die internen kognitiven Verarbeitungsprozesse des Menschen. Bei einer psychologischen Theorie sollte also auch das im Modell verwendete Verfahren den Heuristiken entsprechen, die der Mensch verwendet. Beispielsweise wird ein Schachprogramm nicht dadurch zu einem psychologisch adäquaten Modell, daß es die Spielstärke menschlicher Meisterspieler erreicht. Vielmehr sollten bei einem psychologischen Modell auch die Verarbeitungsprozesse von Mensch und Programm übereinstimmen (vgl. dazu [deGroot_66]).Für psychologische Forschungen sind daher empirische und gezielte experimentelle Untersuchungen der menschlichen Kognition von großer Bedeutung. In der K I steht die Entwicklung und Implementierung von Modellen im Vordergrund. Die kognitive Psychologie dagegen betont die Wichtigkeit der empirischen Evaluation von Modellen zur Absicherung von präzisen, allgemeingültigen Aussagen. Wegen dieser verschiedenen Schwerpunkt Setzung und den daraus resultierenden unterschiedlichen Forschungsmethoden ist es für die Forscher der einen Disziplin oft schwierig, den wissenschaftlichen Fortschritt der jeweils anderen Disziplin zu nutzen [Miller_78]. Interpretation von Computermodellen Die K I ist aus der Informatik hervorgegangen. Wie bei der Informatik bestehen auch bei der K I wissenschaftliche Erkenntnisse darin, daß mit ingenieurwissenschaftlichen Verfahren neue Systeme wie Computerhardund -Software konzipiert und erzeugt werden. Die genaue Beschreibung eines so geschaffenen Systems ist für den Informatiker im Prinzip unproblematisch, da er das System selbst entwickelt hat und daher über dessen Bestandteile und Funktionsweisen bestens informiert ist. Darin liegt ein Unterschied zu den empirischen Wissenschaften wie der Physik oder Psychologie. Der Erfahrungswissenschaftler muß Objektbereiche untersuchen, deren Gesetzmäßigkeiten er nie mit letzter Sicherheit feststellen kann. Er m u ß sich daher Theorien oder Modelle über den Untersuchungsgegenstand bilden, die dann empirisch überprüft werden können. Jedoch läßt sich durch eine noch so große Anzahl von Experimenten niemals die Korrektheit eines Modells beweisen [Popper_66]. E in einfaches Beispiel kann diesen Unterschied verdeutlichen. • E in Hardwarespezialist, der einen Personal Computer gebaut hat, weiß, daß die Aussage \"Der Computer ist mit 640 K B Hauptspeicher bestückt\" richtig ist, weil er ihn eben genau so bestückt hat. Dies ist also eine feststehende Tatsache, die keiner weiteren Überprüfung bedarf. • Die Behauptung eines Psychologen, daß der menschliche Kurzzeitoder Arbeitsspeicher eine Kapazität von etwa 7 Einheiten oder Chunks habe, hat jedoch einen ganz anderen Stellenwert. Damit wird keinesfalls eine faktische Behauptung über die Größe von Arealen im menschlichen Gehirn aufgestellt. \"Arbeitsspeicher\" wird hier als theoretischer Term eines Modells verwendet. Mit der Aussage über die Kapazität des Arbeitsspeichers ist gemeint, daß erfahrungsgemäß Modelle, die eine solche Kapazitätsbescfiränkung annehmen, menschliches Verhalten gut beschreiben können. Dadurch wird jedoch nicht ausgeschlossen, daß ein weiteres Experiment Unzulänglichkeiten oder die Inkorrektheit des Modells nachweist. 
In den Erfahrungswissenscharten werden theoretische Begriffe wie etwa Arbeitsspeicher innerhalb von Computermodellen zur abstrahierten und integrativen Beschreibung von empirischen Erkenntnissen verwendet. Dadurch können beim Menschen zu beobachtende Verhaltensweisen vorhergesagt werden. Aus der Sichtweise der Informatik bezeichnen genau die gleichen Tcrme jedoch tatsächliche Komponenten eines Geräts oder Programms. Diese unterschiedlichen Sichtweisen der gleichen Modelle verbieten einen unkritischen und oberflächlichen Informationstransfer zwischen K I und Kognitiver Psychologie. Aus der Integration der Zielsetzungen und Sichtweisen ergeben sich jedoch auch gerade vielversprechende Erkenntnismöglichkeiten über Intelligenz. Da theoretische wie auch empirische Untersuchungen zum Verständnis der Intelligenz beitragen, können sich die Methoden und Erkenntnisse von beiden Disziplinen (ähnlich wie Mathematik und Physik im Bereich der theoretischen Physik) ergänzen und befruchten. 1.2 Synthese von KI und Kognitiver Psychologie Im Rahmen der Kognitionswissenschaften(cognitive science) tragen viele Disziplinen (z.B. K I , Psychologie, Linguistik, Anthropologie ...) Erkenntnisse über informationsverarbeitende Systeme bei. Die Kognitive Modellierung als ein Teilgebiet von sowohl K I als auch Kognitiver Psychologie befaßt sich mit der Entwicklung von computerimplementierbaren Modellen, in denen wesentliche Eigenschaften des Wissens und der Informationsverarbeitung beim Menschen abgebildet sind. Durch Kognitive Modellierung wird also eine Synthese von K I und psychologischer Forschung angestrebt. E in Computermodell wird zu einem kognitiven Modell, indem Entitätcn des Modells psychologischen Beobachtungen und Erkenntnissen zugeordnet werden. Da ein solches Modell auch den Anspruch erhebt, menschliches Verhalten vorherzusagen, können Kognitive Modelle aufgrund empirischer Untersuchungen weiterentwickelt werden. Die Frage, ob ein KI-Modell als ein kognitives Modell anzusehen ist, kann nicht einfach bejaht oder verneint werden, sondern wird vielmehr durch die Angabe einer Zuordnung von Aspekten der menschlichen Informationsverarbeitung zu Eigenschaften des Computermodells beantwortet.",
"title": ""
},
{
"docid": "2a36a2ab5b0e01da90859179a60cef9a",
"text": "We report 3 cases of renal toxicity associated with use of the antiviral agent tenofovir. Renal failure, proximal tubular dysfunction, and nephrogenic diabetes insipidus were observed, and, in 2 cases, renal biopsy revealed severe tubular necrosis with characteristic nuclear changes. Patients receiving tenofovir must be monitored closely for early signs of tubulopathy (glycosuria, acidosis, mild increase in the plasma creatinine level, and proteinuria).",
"title": ""
},
{
"docid": "598ffff550aa4e3a9ad1d2f5251fc03a",
"text": "The now taken-for-granted notion that data lead to information, which leads to knowledge, which in turn leads to wisdom was first specified in detail by R. L. Ackoff in 1988. The Data-Information-KnowledgeWisdom hierarchy is based on filtration, reduction, and transformation. Besides being causal and hierarchical, the scheme is pyramidal, in that data are plentiful while wisdom is almost nonexistent. Ackoff’s formula linking these terms together this way permits us to ask what the opposite of knowledge is and whether analogous principles of hierarchy, process, and pyramiding apply to it. The inversion of the DataInformation-Knowledge-Wisdom hierarchy produces a series of opposing terms (including misinformation, error, ignorance, and stupidity) but not exactly a chain or a pyramid. Examining the connections between these phenomena contributes to our understanding of the contours and limits of knowledge. This presentation will revisit the Data-Information-Knowledge-Wisdom hierarchy linking these concepts together as stages of a single developmental process, with the aim of building a taxonomy for a postulated opposite of knowledge, which I will call ‘nonknowledge’. Concepts of data, information, knowledge, and wisdom are the building blocks of library and information science. Discussions and definitions of these terms pervade the literature from introductory textbooks to theoretical research articles (see Zins, 2007). Expressions linking some of these concepts predate the development of information science as a field of study (Sharma 2008). But the first to put all the terms into a single formula was Russell Lincoln Ackoff, in 1989. Ackoff posited a hierarchy at the top of which lay wisdom, and below that understanding, knowledge, information, and data, in that order. Furthermore, he wrote that “each of these includes the categories that fall below it,” and estimated that “on average about forty percent of the human mind consists of data, thirty percent information, twenty percent knowledge, ten percent understanding, and virtually no wisdom” (Ackoff, 1989, 3). This phraseology allows us to view his model as a pyramid, and indeed it has been likened to one ever since (Rowley, 2007; see figure 1). (‘Understanding’ is omitted, since subsequent formulations have not picked up on it.) Ackoff was a management consultant and former professor of management science at the Wharton School specializing in operations research and organizational theory. His article formulating what is now commonly called the Data-InformationKnowledge-Wisdom hierarchy (or DIKW for short) was first given in 1988 as a presidential address to the International Society for General Systems Research. This background may help explain his approach. Data in his terms are the product of observations, and are of no value until they are processed into a usable form to become information. Information is contained in answers to questions. Knowledge, the next layer, further refines information by making “possible the transformation of information into instructions. It makes control of a system possible” (Ackoff, 1989, 4), and that enables one to make it work efficiently. A managerial rather than scholarly perspective runs through Ackoff’s entire hierarchy, so that “understanding” for him",
"title": ""
},
{
"docid": "76c7b343d2f03b64146a0d6ed2d60668",
"text": "Three important stages within automated 3D object reconstruction via multi-image convergent photogrammetry are image pre-processing, interest point detection for feature-based matching and triangular mesh generation. This paper investigates approaches to each of these. The Wallis filter is initially examined as a candidate image pre-processor to enhance the performance of the FAST interest point operator. The FAST algorithm is then evaluated as a potential means to enhance the speed, robustness and accuracy of interest point detection for subsequent feature-based matching. Finally, the Poisson Surface Reconstruction algorithm for wireframe mesh generation of objects with potentially complex 3D surface geometry is evaluated. The outcomes of the investigation indicate that the Wallis filter, FAST interest operator and Poisson Surface Reconstruction algorithms present distinct benefits in the context of automated image-based object reconstruction. The reported investigation has advanced the development of an automatic procedure for high-accuracy point cloud generation in multi-image networks, where robust orientation and 3D point determination has enabled surface measurement and visualization to be implemented within a single software system.",
"title": ""
},
{
"docid": "b8d63090ea7d3302c71879ea4d11fde5",
"text": "We study the problem of how to distribute the training of large-scale deep learning models in the parallel computing environment. We propose a new distributed stochastic optimization method called Elastic Averaging SGD (EASGD). We analyze the convergence rate of the EASGD method in the synchronous scenario and compare its stability condition with the existing ADMM method in the round-robin scheme. An asynchronous and momentum variant of the EASGD method is applied to train deep convolutional neural networks for image classification on the CIFAR and ImageNet datasets. Our approach accelerates the training and furthermore achieves better test accuracy. It also requires a much smaller amount of communication than other common baseline approaches such as the DOWNPOUR method. We then investigate the limit in speedup of the initial and the asymptotic phase of the mini-batch SGD, the momentum SGD, and the EASGD methods. We find that the spread of the input data distribution has a big impact on their initial convergence rate and stability region. We also find a surprising connection between the momentum SGD and the EASGD method with a negative moving average rate. A non-convex case is also studied to understand when EASGD can get trapped by a saddle point. Finally, we scale up the EASGD method by using a tree structured network topology. We show empirically its advantage and challenge. We also establish a connection between the EASGD and the DOWNPOUR method with the classical Jacobi and the Gauss-Seidel method, thus unifying a class of distributed stochastic optimization methods.",
"title": ""
},
{
"docid": "7d33ba30fd30dce2cd4a3f5558a8c0ba",
"text": "It has long been conjectured that hypothesis spaces suitable for data that is compositional in nature, such as text or images, may be more efficiently represented with deep hierarchical architectures than with shallow ones. Despite the vast empirical evidence, formal arguments to date are limited and do not capture the kind of networks used in practice. Using tensor factorization, we derive a universal hypothesis space implemented by an arithmetic circuit over functions applied to local data structures (e.g. image patches). The resulting networks first pass the input through a representation layer, and then proceed with a sequence of layers comprising sum followed by product-pooling, where sum corresponds to the widely used convolution operator. The hierarchical structure of networks is born from factorizations of tensors based on the linear weights of the arithmetic circuits. We show that a shallow network corresponds to a rank-1 decomposition, whereas a deep network corresponds to a Hierarchical Tucker (HT) decomposition. Log-space computation for numerical stability transforms the networks into SimNets.",
"title": ""
},
{
"docid": "d89a5b253d188c28aa64facd3fef8b95",
"text": "This paper presents a method for decomposing long, complex consumer health questions. Our approach largely decomposes questions using their syntactic structure, recognizing independent questions embedded in clauses, as well as coordinations and exemplifying phrases. Additionally, we identify elements specific to disease-related consumer health questions, such as the focus disease and background information. To achieve this, our approach combines rank-and-filter machine learning methods with rule-based methods. Our results demonstrate significant improvements over the heuristic methods typically employed for question decomposition that rely only on the syntactic parse tree.",
"title": ""
},
{
"docid": "6d0aba91efbe627d8d98c7f49c34fe3d",
"text": "The R language, from the point of view of language design and implementation, is a unique combination of various programming language concepts. It has functional characteristics like lazy evaluation of arguments, but also allows expressions to have arbitrary side effects. Many runtime data structures, for example variable scopes and functions, are accessible and can be modified while a program executes. Several different object models allow for structured programming, but the object models can interact in surprising ways with each other and with the base operations of R. \n R works well in practice, but it is complex, and it is a challenge for language developers trying to improve on the current state-of-the-art, which is the reference implementation -- GNU R. The goal of this work is to demonstrate that, given the right approach and the right set of tools, it is possible to create an implementation of the R language that provides significantly better performance while keeping compatibility with the original implementation. \n In this paper we describe novel optimizations backed up by aggressive speculation techniques and implemented within FastR, an alternative R language implementation, utilizing Truffle -- a JVM-based language development framework developed at Oracle Labs. We also provide experimental evidence demonstrating effectiveness of these optimizations in comparison with GNU R, as well as Renjin and TERR implementations of the R language.",
"title": ""
}
] |
scidocsrr
|
f279df399f50407436670d9821df0891
|
Training with Exploration Improves a Greedy Stack LSTM Parser
|
[
{
"docid": "b5f7511566b902bc206228dc3214c211",
"text": "In the imitation learning paradigm algorithms learn from expert demonstrations in order to become able to accomplish a particular task. Daumé III et al. (2009) framed structured prediction in this paradigm and developed the search-based structured prediction algorithm (Searn) which has been applied successfully to various natural language processing tasks with state-of-the-art performance. Recently, Ross et al. (2011) proposed the dataset aggregation algorithm (DAgger) and compared it with Searn in sequential prediction tasks. In this paper, we compare these two algorithms in the context of a more complex structured prediction task, namely biomedical event extraction. We demonstrate that DAgger has more stable performance and faster learning than Searn, and that these advantages are more pronounced in the parameter-free versions of the algorithms.",
"title": ""
}
] |
[
{
"docid": "73270e8140d763510d97f7bd2fdd969e",
"text": "Inspired by the progress of deep neural network (DNN) in single-media retrieval, the researchers have applied the DNN to cross-media retrieval. These methods are mainly two-stage learning: the first stage is to generate the separate representation for each media type, and the existing methods only model the intra-media information but ignore the inter-media correlation with the rich complementary context to the intra-media information. The second stage is to get the shared representation by learning the cross-media correlation, and the existing methods learn the shared representation through a shallow network structure, which cannot fully capture the complex cross-media correlation. For addressing the above problems, we propose the cross-media multiple deep network (CMDN) to exploit the complex cross-media correlation by hierarchical learning. In the first stage, CMDN jointly models the intra-media and intermedia information for getting the complementary separate representation of each media type. In the second stage, CMDN hierarchically combines the inter-media and intra-media representations to further learn the rich cross-media correlation by a deeper two-level network strategy, and finally get the shared representation by a stacked network style. Experiment results show that CMDN achieves better performance comparing with several state-of-the-art methods on 3 extensively used cross-media datasets.",
"title": ""
},
{
"docid": "a0db56f55e2d291cb7cf871c064cf693",
"text": "It's being very important to listen to social media streams whether it's Twitter, Facebook, Messenger, LinkedIn, email or even company own application. As many customers may be using this streams to reach out to company because they need help. The company have setup social marketing team to monitor this stream. But due to huge volumes of users it's very difficult to analyses each and every social message and take a relevant action to solve users grievances, which lead to many unsatisfied customers or may even lose a customer. This papers proposes a system architecture which will try to overcome the above shortcoming by analyzing messages of each ejabberd users to check whether it's actionable or not. If it's actionable then an automated Chatbot will initiates conversation with that user and help the user to resolve the issue by providing a human way interactions using LUIS and cognitive services. To provide a highly robust, scalable and extensible architecture, this system is implemented on AWS public cloud.",
"title": ""
},
{
"docid": "fe0120f7d74ad63dbee9c3cd5ff81e6f",
"text": "Background: Software fault prediction is the process of developing models that can be used by the software practitioners in the early phases of software development life cycle for detecting faulty constructs such as modules or classes. There are various machine learning techniques used in the past for predicting faults. Method: In this study we perform a systematic review studies from January 1991 to October 2013 in the literature that use the machine learning techniques for software fault prediction. We assess the performance capability of the machine learning techniques in existing research for software fault prediction. We also compare the performance of the machine learning techniques with the",
"title": ""
},
{
"docid": "4e8040c9336cf7d847d938b905f8f81d",
"text": "Many cluster management systems (CMSs) have been proposed to share a single cluster with multiple distributed computing systems. However, none of the existing approaches can handle distributed machine learning (ML) workloads given the following criteria: high resource utilization, fair resource allocation and low sharing overhead. To solve this problem, we propose a new CMS named Dorm, incorporating a dynamically-partitioned cluster management mechanism and an utilization-fairness optimizer. Specifically, Dorm uses the container-based virtualization technique to partition a cluster, runs one application per partition, and can dynamically resize each partition at application runtime for resource efficiency and fairness. Each application directly launches its tasks on the assigned partition without petitioning for resources frequently, so Dorm imposes flat sharing overhead. Extensive performance evaluations showed that Dorm could simultaneously increase the resource utilization by a factor of up to 2.32, reduce the fairness loss by a factor of up to 1.52, and speed up popular distributed ML applications by a factor of up to 2.72, compared to existing approaches. Dorm's sharing overhead is less than 5% in most cases.",
"title": ""
},
{
"docid": "f5a934dc200b27747d3452f5a14c24e5",
"text": "Psoriasis vulgaris is a common and often chronic inflammatory skin disease. The incidence of psoriasis in Western industrialized countries ranges from 1.5% to 2%. Patients afflicted with severe psoriasis vulgaris may experience a significant reduction in quality of life. Despite the large variety of treatment options available, surveys have shown that patients still do not received optimal treatments. To optimize the treatment of psoriasis in Germany, the Deutsche Dermatologi sche Gesellschaft (DDG) and the Berufsverband Deutscher Dermatologen (BVDD) have initiated a project to develop evidence-based guidelines for the management of psoriasis. They were first published in 2006 and updated in 2011. The Guidelines focus on induction therapy in cases of mild, moderate and severe plaque-type psoriasis in adults including systemic therapy, UV therapy and topical therapies. The therapeutic recommendations were developed based on the results of a systematic literature search and were finalized during a consensus meeting using structured consensus methods (nominal group process).",
"title": ""
},
{
"docid": "da986950f6bbad36de5e9cc55d04e798",
"text": "Digital information is accumulating at an astounding rate, straining our ability to store and archive it. DNA is among the most dense and stable information media known. The development of new technologies in both DNA synthesis and sequencing make DNA an increasingly feasible digital storage medium. We developed a strategy to encode arbitrary digital information in DNA, wrote a 5.27-megabit book using DNA microchips, and read the book by using next-generation DNA sequencing.",
"title": ""
},
{
"docid": "d1f02e2f57cffbc17387de37506fddc9",
"text": "The task of matching patterns in graph-structured data has applications in such diverse areas as computer vision, biology, electronics, computer aided design, social networks, and intelligence analysis. Consequently, work on graph-based pattern matching spans a wide range of research communities. Due to variations in graph characteristics and application requirements, graph matching is not a single problem, but a set of related problems. This paper presents a survey of existing work on graph matching, describing variations among problems, general and specific solution approaches, evaluation techniques, and directions for further research. An emphasis is given to techniques that apply to general graphs with semantic characteristics.",
"title": ""
},
{
"docid": "b0b2c4c321b5607cd6ebda817258921d",
"text": "In recent years, classification of colon biopsy images has become an active research area. Traditionally, colon cancer is diagnosed using microscopic analysis. However, the process is subjective and leads to considerable inter/intra observer variation. Therefore, reliable computer-aided colon cancer detection techniques are in high demand. In this paper, we propose a colon biopsy image classification system, called CBIC, which benefits from discriminatory capabilities of information rich hybrid feature spaces, and performance enhancement based on ensemble classification methodology. Normal and malignant colon biopsy images differ with each other in terms of the color distribution of different biological constituents. The colors of different constituents are sharp in normal images, whereas the colors diffuse with each other in malignant images. In order to exploit this variation, two feature types, namely color components based statistical moments (CCSM) and Haralick features have been proposed, which are color components based variants of their traditional counterparts. Moreover, in normal colon biopsy images, epithelial cells possess sharp and well-defined edges. Histogram of oriented gradients (HOG) based features have been employed to exploit this information. Different combinations of hybrid features have been constructed from HOG, CCSM, and Haralick features. The minimum Redundancy Maximum Relevance (mRMR) feature selection method has been employed to select meaningful features from individual and hybrid feature sets. Finally, an ensemble classifier based on majority voting has been proposed, which classifies colon biopsy images using the selected features. Linear, RBF, and sigmoid SVM have been employed as base classifiers. The proposed system has been tested on 174 colon biopsy images, and improved performance (=98.85%) has been observed compared to previously reported studies. Additionally, the use of mRMR method has been justified by comparing the performance of CBIC on original and reduced feature sets.",
"title": ""
},
{
"docid": "0f9ef379901c686df08dd0d1bb187e22",
"text": "This paper studies the minimum achievable source coding rate as a function of blocklength <i>n</i> and probability ϵ that the distortion exceeds a given level <i>d</i> . Tight general achievability and converse bounds are derived that hold at arbitrary fixed blocklength. For stationary memoryless sources with separable distortion, the minimum rate achievable is shown to be closely approximated by <i>R</i>(<i>d</i>) + √<i>V</i>(<i>d</i>)/(<i>n</i>) <i>Q</i><sup>-1</sup>(ϵ), where <i>R</i>(<i>d</i>) is the rate-distortion function, <i>V</i>(<i>d</i>) is the rate dispersion, a characteristic of the source which measures its stochastic variability, and <i>Q</i><sup>-1</sup>(·) is the inverse of the standard Gaussian complementary cumulative distribution function.",
"title": ""
},
{
"docid": "1348ee3316643f4269311b602b71d499",
"text": "This paper describes our proposed solution for SemEval 2017 Task 1: Semantic Textual Similarity (Daniel Cer and Specia, 2017). The task aims at measuring the degree of equivalence between sentences given in English. Performance is evaluated by computing Pearson Correlation scores between the predicted scores and human judgements. Our proposed system consists of two subsystems and one regression model for predicting STS scores. The two subsystems are designed to learn Paraphrase and Event Embeddings that can take the consideration of paraphrasing characteristics and sentence structures into our system. The regression model associates these embeddings to make the final predictions. The experimental result shows that our system acquires 0.8 of Pearson Correlation Scores in this task.",
"title": ""
},
{
"docid": "49717f07b8b4a3da892c1bb899f7a464",
"text": "Single cells were recorded in the visual cortex of monkeys trained to attend to stimuli at one location in the visual field and ignore stimuli at another. When both locations were within the receptive field of a cell in prestriate area V4 or the inferior temporal cortex, the response to the unattended stimulus was dramatically reduced. Cells in the striate cortex were unaffected by attention. The filtering of irrelevant information from the receptive fields of extrastriate neurons may underlie the ability to identify and remember the properties of a particular object out of the many that may be represented on the retina.",
"title": ""
},
{
"docid": "6421979368a138e4b21ab7d9602325ff",
"text": "In recent years, despite several risk management models proposed by different researchers, software projects still have a high degree of failures. Improper risk assessment during software development was the major reason behind these unsuccessful projects as risk analysis was done on overall projects. This work attempts in identifying key risk factors and risk types for each of the development phases of SDLC, which would help in identifying the risks at a much early stage of development.",
"title": ""
},
{
"docid": "d76b7b25bce29cdac24015f8fa8ee5bb",
"text": "A circularly polarized magnetoelectric dipole antenna with high efficiency based on printed ridge gap waveguide is presented. The antenna gain is improved by using a wideband lens in front of the antennas. The lens consists of three layers dual-polarized mu-near zero (MNZ) inclusions. Each layer consists of a <inline-formula> <tex-math notation=\"LaTeX\">$3\\times4$ </tex-math></inline-formula> MNZ unit cell. The measured results indicate that the magnitude of <inline-formula> <tex-math notation=\"LaTeX\">$S_{11}$ </tex-math></inline-formula> is below −10 dB in the frequency range of 29.5–37 GHz. The resulting 3-dB axial ratio is over a frequency range of 32.5–35 GHz. The measured realized gain of the antenna is more than 10 dBi over a frequency band of 31–35 GHz achieving a radiation efficiency of 94% at 34 GHz.",
"title": ""
},
{
"docid": "3fa30df910c964bb2bf27a885aa59495",
"text": "In an Intelligent Environment, he user and the environment work together in a unique manner; the user expresses what he wishes to do, and the environment recognizes his intentions and helps out however it can. If well-implemented, such an environment allows the user to interact with it in the manner that is most natural for him personally. He should need virtually no time to learn to use it and should be more productive once he has. But to implement a useful and natural Intelligent Environment, he designers are faced with a daunting task: they must design a software system that senses what its users do, understands their intentions, and then responds appropriately. In this paper we argue that, in order to function reasonably in any of these ways, an Intelligent Environment must make use of declarative representations of what the user might do. We present our evidence in the context of the Intelligent Classroom, a facility that aids a speaker in this way and uses its understanding to produce a video of his presentation.",
"title": ""
},
{
"docid": "5b07bc318cb0f5dd7424cdcc59290d31",
"text": "The current practice used in the design of physical interactive products (such as handheld devices), often suffers from a divide between exploration of form and exploration of interactivity. This can be attributed, in part, to the fact that working prototypes are typically expensive, take a long time to manufacture, and require specialized skills and tools not commonly available in design studios.We have designed a prototyping tool that, we believe, can significantly reduce this divide. The tool allows designers to rapidly create functioning, interactive, physical prototypes early in the design process using a collection of wireless input components (buttons, sliders, etc.) and a sketch of form. The input components communicate with Macromedia Director to enable interactivity.We believe that this tool can improve the design practice by: a) Improving the designer's ability to explore both the form and interactivity of the product early in the design process, b) Improving the designer's ability to detect problems that emerge from the combination of the form and the interactivity, c) Improving users' ability to communicate their ideas, needs, frustrations and desires, and d) Improving the client's understanding of the proposed design, resulting in greater involvement and support for the design.",
"title": ""
},
{
"docid": "ae3d959972d673d24e6d0b7a0567323e",
"text": "Traditional data on influenza vaccination has several limitations: high cost, limited coverage of underrepresented groups, and low sensitivity to emerging public health issues. Social media, such as Twitter, provide an alternative way to understand a population’s vaccination-related opinions and behaviors. In this study, we build and employ several natural language classifiers to examine and analyze behavioral patterns regarding influenza vaccination in Twitter across three dimensions: temporality (by week and month), geography (by US region), and demography (by gender). Our best results are highly correlated official government data, with a correlation over 0.90, providing validation of our approach. We then suggest a number of directions for future work.",
"title": ""
},
{
"docid": "ff4c069ab63ced5979cf6718eec30654",
"text": "Dowser is a ‘guided’ fuzzer that combines taint tracking, program analysis and symbolic execution to find buffer overflow and underflow vulnerabilities buried deep in a program’s logic. The key idea is that analysis of a program lets us pinpoint the right areas in the program code to probe and the appropriate inputs to do so. Intuitively, for typical buffer overflows, we need consider only the code that accesses an array in a loop, rather than all possible instructions in the program. After finding all such candidate sets of instructions, we rank them according to an estimation of how likely they are to contain interesting vulnerabilities. We then subject the most promising sets to further testing. Specifically, we first use taint analysis to determine which input bytes influence the array index and then execute the program symbolically, making only this set of inputs symbolic. By constantly steering the symbolic execution along branch outcomes most likely to lead to overflows, we were able to detect deep bugs in real programs (like the nginx webserver, the inspircd IRC server, and the ffmpeg videoplayer). Two of the bugs we found were previously undocumented buffer overflows in ffmpeg and the poppler PDF rendering library.",
"title": ""
},
{
"docid": "21925b0a193ebb3df25c676d8683d895",
"text": "The use of dialogue systems in vehicles raises the problem of making sure that the dialogue does not distract the driver from the primary task of driving. Earlier studies have indicated that humans are very apt at adapting the dialogue to the traffic situation and the cognitive load of the driver. The goal of this paper is to investigate strategies for interrupting and resuming in, as well as changing topic domain of, spoken human-human in-vehicle dialogue. The results show a large variety of strategies being used, and indicate that the choice of resumption and domain-switching strategy depends partly on the topic domain being resumed, and partly on the role of the speaker (driver or passenger). These results will be used as a basis for the development of dialogue strategies for interruption, resumption and domain-switching in the DICO in-vehicle dialogue system.",
"title": ""
},
{
"docid": "58f1ba92eb199f4d105bf262b30dbbc5",
"text": "Before the big data era, scene recognition was often approached with two-step inference using localized intermediate representations (objects, topics, and so on). One of such approaches is the semantic manifold (SM), in which patches and images are modeled as points in a semantic probability simplex. Patch models are learned resorting to weak supervision via image labels, which leads to the problem of scene categories co-occurring in this semantic space. Fortunately, each category has its own co-occurrence patterns that are consistent across the images in that category. Thus, discovering and modeling these patterns are critical to improve the recognition performance in this representation. Since the emergence of large data sets, such as ImageNet and Places, these approaches have been relegated in favor of the much more powerful convolutional neural networks (CNNs), which can automatically learn multi-layered representations from the data. In this paper, we address many limitations of the original SM approach and related works. We propose discriminative patch representations using neural networks and further propose a hybrid architecture in which the semantic manifold is built on top of multiscale CNNs. Both representations can be computed significantly faster than the Gaussian mixture models of the original SM. To combine multiple scales, spatial relations, and multiple features, we formulate rich context models using Markov random fields. To solve the optimization problem, we analyze global and local approaches, where a top–down hierarchical algorithm has the best performance. Experimental results show that exploiting different types of contextual relations jointly consistently improves the recognition accuracy.",
"title": ""
},
{
"docid": "bbf987eef74d76cf2916ae3080a2b174",
"text": "The facial system plays an important role in human-robot interaction. EveR-4 H33 is a head system for an android face controlled by thirty-three motors. It consists of three layers: a mechanical layer, an inner cover layer and an outer cover layer. Motors are attached under the skin and some motors are correlated with each other. Some expressions cannot be shown by moving just one motor. In addition, moving just one motor can cause damage to other motors or the skin. To solve these problems, a facial muscle control method that controls motors in a correlated manner is required. We designed a facial muscle control method and applied it to EveR-4 H33. We develop the actress robot EveR-4A by applying the EveR-4 H33 to the 24 degrees of freedom upper body and mannequin legs. EveR-4A shows various facial expressions with lip synchronization using our facial muscle control method.",
"title": ""
}
] |
scidocsrr
|
9964a76f995125776e2fc1a30d248fec
|
The dawn of the liquid biopsy in the fight against cancer
|
[
{
"docid": "aa234355d0b0493e1d8c7a04e7020781",
"text": "Cancer is associated with mutated genes, and analysis of tumour-linked genetic alterations is increasingly used for diagnostic, prognostic and treatment purposes. The genetic profile of solid tumours is currently obtained from surgical or biopsy specimens; however, the latter procedure cannot always be performed routinely owing to its invasive nature. Information acquired from a single biopsy provides a spatially and temporally limited snap-shot of a tumour and might fail to reflect its heterogeneity. Tumour cells release circulating free DNA (cfDNA) into the blood, but the majority of circulating DNA is often not of cancerous origin, and detection of cancer-associated alleles in the blood has long been impossible to achieve. Technological advances have overcome these restrictions, making it possible to identify both genetic and epigenetic aberrations. A liquid biopsy, or blood sample, can provide the genetic landscape of all cancerous lesions (primary and metastases) as well as offering the opportunity to systematically track genomic evolution. This Review will explore how tumour-associated mutations detectable in the blood can be used in the clinic after diagnosis, including the assessment of prognosis, early detection of disease recurrence, and as surrogates for traditional biopsies with the purpose of predicting response to treatments and the development of acquired resistance.",
"title": ""
}
] |
[
{
"docid": "fc9eae18a5a44ee7df22d6c7bdb5a164",
"text": "In this paper, methods are shown how to adapt invertible two-dimensional chaotic maps on a torus or on a square to create new symmetric block encryption schemes. A chaotic map is first generalized by introducing parameters and then discretized to a finite square lattice of points which represent pixels or some other data items. Although the discretized map is a permutation and thus cannot be chaotic, it shares certain properties with its continuous counterpart as long as the number of iterations remains small. The discretized map is further extended to three dimensions and composed with a simple diffusion mechanism. As a result, a symmetric block product encryption scheme is obtained. To encrypt an N × N image, the ciphering map is iteratively applied to the image. The construction of the cipher and its security is explained with the two-dimensional Baker map. It is shown that the permutations induced by the Baker map behave as typical random permutations. Computer simulations indicate that the cipher has good diffusion properties with respect to the plain-text and the key. A nontraditional pseudo-random number generator based on the encryption scheme is described and studied. Examples of some other two-dimensional chaotic maps are given and their suitability for secure encryption is discussed. The paper closes with a brief discussion of a possible relationship between discretized chaos and cryptosystems.",
"title": ""
},
{
"docid": "1bfc1972a32222a1b5816bb040040374",
"text": "BACKGROUND\nSkeletal muscle is key to motor development and represents a major metabolic end organ that aids glycaemic regulation.\n\n\nOBJECTIVES\nTo create gender-specific reference curves for fat-free mass (FFM) and appendicular (limb) skeletal muscle mass (SMMa) in children and adolescents. To examine the muscle-to-fat ratio in relation to body mass index (BMI) for age and gender.\n\n\nMETHODS\nBody composition was measured by segmental bioelectrical impedance (BIA, Tanita BC418) in 1985 Caucasian children aged 5-18.8 years. Skeletal muscle mass data from the four limbs were used to derive smoothed centile curves and the muscle-to-fat ratio.\n\n\nRESULTS\nThe centile curves illustrate the developmental patterns of %FFM and SMMa. While the %FFM curves differ markedly between boys and girls, the SMMa (kg), %SMMa and %SMMa/FFM show some similarities in shape and variance, together with some gender-specific characteristics. Existing BMI curves do not reveal these gender differences. Muscle-to-fat ratio showed a very wide range with means differing between boys and girls and across fifths of BMI z-score.\n\n\nCONCLUSIONS\nBIA assessment of %FFM and SMMa represents a significant advance in nutritional assessment since these body composition components are associated with metabolic health. Muscle-to-fat ratio has the potential to provide a better index of future metabolic health.",
"title": ""
},
{
"docid": "32817233f5aa05036ca292e7b57143fb",
"text": "Asphalt pavement distresses have significant importance in roads and highways. This paper addresses the detection and localization of one of the key pavement distresses, the potholes using computer vision. Different kinds of pothole and non-pothole images from asphalt pavement are considered for experimentation. Considering the appearance-shape based nature of the potholes, Histograms of oriented gradients (HOG) features are computed for the input images. Features are trained and classified using Naïve Bayes classifier resulting in labeling of the input as pothole or non-pothole image. To locate the pothole in the detected pothole images, normalized graph cut segmentation scheme is employed. Proposed scheme is tested on a dataset having broad range of pavement images. Experimentation results showed 90 % accuracy for the detection of pothole images and high recall for the localization of pothole in the detected images.",
"title": ""
},
{
"docid": "6851e4355ab4825b0eb27ac76be2329f",
"text": "Segmentation of novel or dynamic objects in a scene, often referred to as “background subtraction” or “foreground segmentation”, is a critical early in step in most computer vision applications in domains such as surveillance and human-computer interaction. All previously described, real-time methods fail to handle properly one or more common phenomena, such as global illumination changes, shadows, inter-reflections, similarity of foreground color to background, and non-static backgrounds (e.g. active video displays or trees waving in the wind). The recent advent of hardware and software for real-time computation of depth imagery makes better approaches possible. We propose a method for modeling the background that uses per-pixel, time-adaptive, Gaussian mixtures in the combined input space of depth and luminance-invariant color. This combination in itself is novel, but we further improve it by introducing the ideas of 1) modulating the background model learning rate based on scene activity, and 2) making colorbased segmentation criteria dependent on depth observations. Our experiments show that the method possesses much greater robustness to problematic phenomena than the prior state-of-the-art, without sacrificing real-time performance, making it well-suited for a wide range of practical applications in video event detection and recognition.",
"title": ""
},
{
"docid": "b72bc9ee1c32ec3d268abd1d3e51db25",
"text": "As a newly developing academic domain, researches on Mobile learning are still in their initial stage. Meanwhile, M-blackboard comes from Mobile learning. This study attempts to discover the factors impacting the intention to adopt mobile blackboard. Eleven selected model on the Mobile learning adoption were comprehensively reviewed. From the reviewed articles, the most factors are identified. Also, from the frequency analysis, the most frequent factors in the Mobile blackboard or Mobile learning adoption studies are performance expectancy, effort expectancy, perceived playfulness, facilitating conditions, self-management, cost and past experiences. The descriptive statistic was performed to gather the respondents’ demographic information. It also shows that the respondents agreed on nearly every statement item. Pearson correlation and regression analysis were also conducted.",
"title": ""
},
{
"docid": "0dd4f05f9bd3d582b9fb9c64f00ed697",
"text": "Today, among other challenges, teaching students how to write computer programs for the first time can be an important criterion for whether students in computing will remain in their program of study, i.e. Computer Science or Information Technology. Not learning to program a computer as a computer scientist or information technologist can be compared to a mathematician not learning algebra. For a mathematician this would be an extremely limiting situation. For a computer scientist, not learning to program imposes a similar severe limitation on the budding computer scientist. Therefore it is not a question as to whether programming should be taught rather it is a question of how to maximize aspects of teaching programming so that students are less likely to be discouraged when learning to program. Different criteria have been used to select first programming languages. Computer scientists have attempted to establish criteria for selecting the first programming language to teach a student. This paper examines the criteria used to select first programming languages and the issues that novices face when learning to program in an effort to create a more comprehensive model for selecting first programming languages.",
"title": ""
},
{
"docid": "c26eabb377db5f1033ec6d354d890a6f",
"text": "Recurrent neural networks have recently shown significant potential in different language applications, ranging from natural language processing to language modelling. This paper introduces a research effort to use such networks to develop and evaluate natural language acquisition on a humanoid robot. Here, the problem is twofold. First, the focus will be put on using the gesture-word combination stage observed in infants to transition from single to multi-word utterances. Secondly, research will be carried out in the domain of connecting action learning with language learning. In the former, the long-short term memory architecture will be implemented, whilst in the latter multiple time-scale recurrent neural networks will be used. This will allow for comparison between the two architectures, whilst highlighting the strengths and shortcomings of both with respect to the language learning problem. Here, the main research efforts, challenges and expected outcomes are described.",
"title": ""
},
{
"docid": "665fb08aba7cc1a2d6680bccb259396f",
"text": "Sample entropy (SampEn) has been proposed as a method to overcome limitations associated with approximate entropy (ApEn). The initial paper describing the SampEn metric included a characterization study comparing both ApEn and SampEn against theoretical results and concluded that SampEn is both more consistent and agrees more closely with theory for known random processes than ApEn. SampEn has been used in several studies to analyze the regularity of clinical and experimental time series. However, questions regarding how to interpret SampEn in certain clinical situations and its relationship to classical signal parameters remain unanswered. In this paper we report the results of a characterization study intended to provide additional insights regarding the interpretability of SampEn in the context of biomedical signal analysis.",
"title": ""
},
{
"docid": "323d633995296611c903874aefa5cdb7",
"text": "This paper investigates the possibility of communicating through vibrations. By modulating the vibration motors available in all mobile phones, and decoding them through accelerometers, we aim to communicate small packets of information. Of course, this will not match the bit rates available through RF modalities, such as NFC or Bluetooth, which utilize a much larger bandwidth. However, where security is vital, vibratory communication may offer advantages. We develop Ripple, a system that achieves up to 200 bits/s of secure transmission using off-the-shelf vibration motor chips, and 80 bits/s on Android smartphones. This is an outcome of designing and integrating a range of techniques, including multicarrier modulation, orthogonal vibration division, vibration braking, side-channel jamming, etc. Not all these techniques are novel; some are borrowed and suitably modified for our purposes, while others are unique to this relatively new platform of vibratory communication.",
"title": ""
},
{
"docid": "ccd356a943f19024478c42b5db191293",
"text": "This paper discusses the relationship between concepts of narrative, patterns of interaction within computer games constituting gameplay gestalts, and the relationship between narrative and the gameplay gestalt. The repetitive patterning involved in gameplay gestalt formation is found to undermine deep narrative immersion. The creation of stronger forms of interactive narrative in games requires the resolution of this confl ict. The paper goes on to describe the Purgatory Engine, a game engine based upon more fundamentally dramatic forms of gameplay and interaction, supporting a new game genre referred to as the fi rst-person actor. The fi rst-person actor does not involve a repetitive gestalt mode of gameplay, but defi nes gameplay in terms of character development and dramatic interaction.",
"title": ""
},
{
"docid": "34b7073f947888694053cb421544cb37",
"text": "Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods.",
"title": ""
},
{
"docid": "d7a85bedea94e2e70f9ad52c6247f8d3",
"text": "Little is known about the perception of artificial spatial hearing by hearing-impaired subjects. The purpose of this study was to investigate how listeners with hearing disorders perceived the effect of a spatialization feature designed for wireless microphone systems. Forty listeners took part in the experiments. They were arranged in four groups: normal-hearing, moderate, severe, and profound hearing loss. Their performance in terms of speech understanding and speaker localization was assessed with diotic and binaural stimuli. The results of the speech intelligibility experiment revealed that the subjects presenting a moderate or severe hearing impairment better understood speech with the spatialization feature. Thus, it was demonstrated that the conventional diotic binaural summation operated by current wireless systems can be transformed to reproduce the spatial cues required to localize the speaker, without any loss of intelligibility. The speaker localization experiment showed that a majority of the hearing-impaired listeners had similar performance with natural and artificial spatial hearing, contrary to the normal-hearing listeners. This suggests that certain subjects with hearing impairment preserve their localization abilities with approximated generic head-related transfer functions in the frontal horizontal plane.",
"title": ""
},
{
"docid": "8d071dbd68902f3bac18e61caa0828dd",
"text": "This paper demonstrates that it is possible to construct the Stochastic flash ADC using standard digital cells. In order to minimize the analog circuit requirements which cost high, it is appropriate to begin the architecture with highly digital. The proposed Stochastic flash ADC uses a random comparator offset to set the trip points. Since the comparator are no longer sized for small offset, they can be shrunk down into digital cells. Using comparators that are implemented as digital cells produces a large variation of comparator offset. Typically, this is considered a disadvantage, but in our case, this large standard deviation of offset is used to set the input signal range. By designing an ADC that is made up entirely of digital cells, it is natural candidate for a synthesizable ADC. The analog comparator which is used in this ADC is constructed from standard digital NAND gates connected with SR latch to minimize the memory effects. A Wallace tree adder is used to sum the total number of comparator output, since the order of comparator output is random. Thus, all the components including the comparator and Wallace tree adder can be implemented using standard digital cells. [1] INTRODUCTION As CMOS designs are scaled to smaller technology nodes, many benefits arise, as well as challenges. There are benefits in speed and power due to decreased capacitance and lower supply voltage, yet reduction in intrinsic device gain and lower supply voltage make it difficult to migrate previous analog designs to smaller scaled processes. Moreover, as scaling trends continue, the analog portion of a mixed-signal system tends to consume proportionally more power and area and have a higher design cost than the digital counterpart. This tends to increase the overall design cost of the mixed-signal design. Automatically synthesized digital circuits get all the benefits of scaling, but analog circuits get these benefits at a large cost. The most essential component of ADC is the comparator, which translates from the analog world to digital world. Since comparator defines the boundary between analog and digital realms, the flash ADC architecture will be considered, as it places the comparator as close to the analog input signal. Flash ADCs use a reference ladder to generate the comparator trip points that correspond to each digital code. Typically the references are either generated by a resistor ladder or some form of analog interpolation, but the effect is the same: a …",
"title": ""
},
{
"docid": "4100a10b2a03f3a1ba712901cee406d2",
"text": "Traditionally, many clinicians tend to forego esthetic considerations when full-coverage restorations are indicated for pediatric patients with primary dentitions. However, the availability of new zirconia pediatric crowns and reliable techniques for cementation makes esthetic outcomes practical and consistent when restoring primary dentition. Two cases are described: a 3-year-old boy who presented with severe early childhood caries affecting both anterior and posterior teeth, and a 6-year-old boy who presented with extensive caries of his primary posterior dentition, including a molar requiring full coverage. The parents of both boys were concerned about esthetics, and the extent of decay indicated the need for full-coverage restorations. This led to the boys receiving treatment using a restorative procedure in which the carious teeth were prepared for and restored with esthetic tooth-colored zirconia crowns. In both cases, comfortable function and pleasing esthetics were achieved.",
"title": ""
},
{
"docid": "b6b9e1eaf17f6cdbc9c060e467021811",
"text": "Tumour-associated viruses produce antigens that, on the face of it, are ideal targets for immunotherapy. Unfortunately, these viruses are experts at avoiding or subverting the host immune response. Cervical-cancer-associated human papillomavirus (HPV) has a battery of immune-evasion mechanisms at its disposal that could confound attempts at HPV-directed immunotherapy. Other virally associated human cancers might prove similarly refractive to immuno-intervention unless we learn how to circumvent their strategies for immune evasion.",
"title": ""
},
{
"docid": "95d624c86fcd86377e46738689bb18a8",
"text": "EEG desynchronization is a reliable correlate of excited neural structures of activated cortical areas. EEG synchronization within the alpha band may be an electrophysiological correlate of deactivated cortical areas. Such areas are not processing sensory information or motor output and can be considered to be in an idling state. One example of such an idling cortical area is the enhancement of mu rhythms in the primary hand area during visual processing or during foot movement. In both circumstances, the neurons in the hand area are not needed for visual processing or preparation for foot movement. As a result of this, an enhanced hand area mu rhythm can be observed.",
"title": ""
},
{
"docid": "827e9045f932b146a8af66224e114be6",
"text": "Using a common set of attributes to determine which methodology to use in a particular data warehousing project.",
"title": ""
},
{
"docid": "569fed958b7a471e06ce718102687a1e",
"text": "The introduction of convolutional layers greatly advanced the performance of neural networks on image tasks due to innately capturing a way of encoding and learning translation-invariant operations, matching one of the underlying symmetries of the image domain. In comparison, there are a number of problems in which there are a number of different inputs which are all ’of the same type’ — multiple particles, multiple agents, multiple stock prices, etc. The corresponding symmetry to this is permutation symmetry, in that the algorithm should not depend on the specific ordering of the input data. We discuss a permutation-invariant neural network layer in analogy to convolutional layers, and show the ability of this architecture to learn to predict the motion of a variable number of interacting hard discs in 2D. In the same way that convolutional layers can generalize to different image sizes, the permutation layer we describe generalizes to different numbers of objects.",
"title": ""
},
{
"docid": "81b5379abf3849e1ae4e233fd4955062",
"text": "Three-phase dc/dc converters have the superior characteristics including lower current rating of switches, the reduced output filter requirement, and effective utilization of transformers. To further reduce the voltage stress on switches, three-phase three-level (TPTL) dc/dc converters have been investigated recently; however, numerous active power switches result in a complicated configuration in the available topologies. Therefore, a novel TPTL dc/dc converter adopting a symmetrical duty cycle control is proposed in this paper. Compared with the available TPTL converters, the proposed converter has fewer switches and simpler configuration. The voltage stress on all switches can be reduced to the half of the input voltage. Meanwhile, the ripple frequency of output current can be increased significantly, resulting in a reduced filter requirement. Experimental results from a 540-660-V input and 48-V/20-A output are presented to verify the theoretical analysis and the performance of the proposed converter.",
"title": ""
},
{
"docid": "9c16f3ccaab4e668578e3eda7d452ebd",
"text": "Speech is a common and effective way of communication between humans, and modern consumer devices such as smartphones and home hubs are equipped with deep learning based accurate automatic speech recognition to enable natural interaction between humans and machines. Recently, researchers have demonstrated powerful attacks against machine learning models that can fool them to produce incorrect results. However, nearly all previous research in adversarial attacks has focused on image recognition and object detection models. In this short paper, we present a first of its kind demonstration of adversarial attacks against speech classification model. Our algorithm performs targeted attacks with 87% success by adding small background noise without having to know the underlying model parameter and architecture. Our attack only changes the least significant bits of a subset of audio clip samples, and the noise does not change 89% the human listener’s perception of the audio clip as evaluated in our human study.",
"title": ""
}
] |
scidocsrr
|
8b0fb060f28dee6142e3ee5ff28c5578
|
Community Detection in Multi-Dimensional Networks
|
[
{
"docid": "bb2504b2275a20010c0d5f9050173d40",
"text": "Clustering nodes in a graph is a useful general technique in data mining of large network data sets. In this context, Newman and Girvan [9] recently proposed an objective function for graph clustering called the Q function which allows automatic selection of the number of clusters. Empirically, higher values of the Q function have been shown to correlate well with good graph clusterings. In this paper we show how optimizing the Q function can be reformulated as a spectral relaxation problem and propose two new spectral clustering algorithms that seek to maximize Q. Experimental results indicate that the new algorithms are efficient and effective at finding both good clusterings and the appropriate number of clusters across a variety of real-world graph data sets. In addition, the spectral algorithms are much faster for large sparse graphs, scaling roughly linearly with the number of nodes n in the graph, compared to O(n) for previous clustering algorithms using the Q function.",
"title": ""
},
{
"docid": "31873424960073962d3d8eba151f6a4b",
"text": "Multiple view data, which have multiple representations from different feature spaces or graph spaces, arise in various data mining applications such as information retrieval, bioinformatics and social network analysis. Since different representations could have very different statistical properties, how to learn a consensus pattern from multiple representations is a challenging problem. In this paper, we propose a general model for multiple view unsupervised learning. The proposed model introduces the concept of mapping function to make the different patterns from different pattern spaces comparable and hence an optimal pattern can be learned from the multiple patterns of multiple representations. Under this model, we formulate two specific models for two important cases of unsupervised learning, clustering and spectral dimensionality reduction; we derive an iterating algorithm for multiple view clustering, and a simple algorithm providing a global optimum to multiple spectral dimensionality reduction. We also extend the proposed model and algorithms to evolutionary clustering and unsupervised learning with side information. Empirical evaluations on both synthetic and real data sets demonstrate the effectiveness of the proposed model and algorithms.",
"title": ""
}
] |
[
{
"docid": "5441d081eabb4ad3d96775183e603b65",
"text": "We give an introduction to computation and logic tailored for algebraists, and use this as a springboard to discuss geometric models of computation and the role of cut-elimination in these models, following Girard's geometry of interaction program. We discuss how to represent programs in the λ-calculus and proofs in linear logic as linear maps between infinite-dimensional vector spaces. The interesting part of this vector space semantics is based on the cofree cocommutative coalgebra of Sweedler [71] and the recent explicit computations of liftings in [62].",
"title": ""
},
{
"docid": "2c28d01814e0732e59d493f0ea2eafcb",
"text": "Victor Frankenstein sought to create an intelligent being imbued with the r ules of civilized human conduct, who could further learn how to behave and possibly even evolve through successive g nerations into a more perfect form. Modern human composers similarly strive to create intell igent algorithmic music composition systems that can follow prespecified rules, learn appropriate patte rns from a collection of melodies, or evolve to produce output more perfectly matched to some aesthetic criteria . H re we review recent efforts aimed at each of these three types of algorithmic composition. We focus pa rticularly on evolutionary methods, and indicate how monstrous many of the results have been. We present a ne w method that uses coevolution to create linked artificial music critics and music composers , and describe how this method can attach the separate parts of rules, learning, and evolution together in to one coherent body. “Invention, it must be humbly admitted, does not consist in creating out of void, but ou t of chaos; the materials must, in the first place, be afforded...” --Mary Shelley, Frankenstein (1831/1993, p. 299)",
"title": ""
},
{
"docid": "b21ae248eea30b91e41012ab70cb6d81",
"text": "Communication technology plays an increasingly important role in the growing automated metering infrastructure (AMI) market. This paper presents a thorough analysis and comparison of four application layer protocols in the smart metering context. The inspected protocols are DLMS/COSEM, the Smart Message Language (SML), and the MMS and SOAP mappings of IEC 61850. The focus of this paper is on their use over TCP/IP. The protocols are first compared with respect to qualitative criteria such as the ability to transmit clock synchronization information. Afterwards the message size of meter reading requests and responses and the different binary encodings of the protocols are compared.",
"title": ""
},
{
"docid": "ce5c5d0d0cb988c96f0363cfeb9610d4",
"text": "Due to deep automation, the configuration of many Cloud infrastructures is static and homogeneous, which, while easing administration, significantly decreases a potential attacker's uncertainty on a deployed Cloud-based service and hence increases the chance of the service being compromised. Moving-target defense (MTD) is a promising solution to the configuration staticity and homogeneity problem. This paper presents our findings on whether and to what extent MTD is effective in protecting a Cloud-based service with heterogeneous and dynamic attack surfaces - these attributes, which match the reality of current Cloud infrastructures, have not been investigated together in previous works on MTD in general network settings. We 1) formulate a Cloud-based service security model that incorporates Cloud-specific features such as VM migration/snapshotting and the diversity/compatibility of migration, 2) consider the accumulative effect of the attacker's intelligence on the target service's attack surface, 3) model the heterogeneity and dynamics of the service's attack surfaces, as defined by the (dynamic) probability of the service being compromised, as an S-shaped generalized logistic function, and 4) propose a probabilistic MTD service deployment strategy that exploits the dynamics and heterogeneity of attack surfaces for protecting the service against attackers. Through simulation, we identify the conditions and extent of the proposed MTD strategy's effectiveness in protecting Cloud-based services. Namely, 1) MTD is more effective when the service deployment is dense in the replacement pool and/or when the attack is strong, and 2) attack-surface heterogeneity-and-dynamics awareness helps in improving MTD's effectiveness.",
"title": ""
},
{
"docid": "348702d85126ed64ca24bdc62c1146d9",
"text": "Autonomous Vehicles are currently being tested in a variety of scenarios. As we move towards Autonomous Vehicles, how should intersections look? To answer that question, we break down an intersection management into the different conundrums and scenarios involved in the trajectory planning and current approaches to solve them. Then, a brief analysis of current works in autonomous intersection is conducted. With a critical eye, we try to delve into the discrepancies of existing solutions while presenting some critical and important factors that have been addressed. Furthermore, open issues that have to be addressed are also emphasized. We also try to answer the question of how to benchmark intersection management algorithms by providing some factors that impact autonomous navigation at intersection.",
"title": ""
},
{
"docid": "4bddc7bb7088c01dbc48504656b0f8d4",
"text": "The basic knowledge required to do sentiment analysis of Twitter is discussed in this review paper. Sentiment Analysis can be viewed as field of text mining, natural language processing. Thus we can study sentiment analysis in various aspects. This paper presents levels of sentiment analysis, approaches to do sentiment analysis, methodologies for doing it, and features to be extracted from text and the applications. Twitter is a microblogging service to which if sentiment analysis done one has to follow explicit path. Thus this paper puts overview about tweets extraction, their preprocessing and their sentiment analysis.",
"title": ""
},
{
"docid": "d848a684aeddd5447f17282fdd2efaf0",
"text": "..........................................................................................................iii ACKNOWLEDGMENTS.........................................................................................iv TABLE OF CONTENTS .........................................................................................vi LIST OF TABLES................................................................................................viii LIST OF FIGURES ................................................................................................ix",
"title": ""
},
{
"docid": "b4d7a8b6b24c85af9f62105194087535",
"text": "New technologies provide expanded opportunities for interaction design. The growing number of possible ways to interact, in turn, creates a new responsibility for designers: Besides the product's visual aesthetics, one has to make choices about the aesthetics of interaction. This issue recently gained interest in Human-Computer Interaction (HCI) research. Based on a review of 19 approaches, we provide an overview of today's state of the art. We focused on approaches that feature \"qualities\", \"dimensions\" or \"parameters\" to describe interaction. Those fell into two broad categories. One group of approaches dealt with detailed spatio-temporal attributes of interaction sequences (i.e., action-reaction) on a sensomotoric level (i.e., form). The other group addressed the feelings and meanings an interaction is enveloped in rather than the interaction itself (i.e., experience). Surprisingly, only two approaches addressed both levels simultaneously, making the explicit link between form and experience. We discuss these findings and its implications for future theory building.",
"title": ""
},
{
"docid": "33ad325fc91be339c580581107314146",
"text": "Designing technological systems for personalized education is an iterative and interdisciplinary process that demands a deep understanding of the application domain, the limitations of current methods and technologies, and the computational methods and complexities behind user modeling and adaptation. We present our design process and the Socially Assistive Robot (SAR) tutoring system to support the efforts of educators in teaching number concepts to preschool children. We focus on the computational considerations of designing a SAR system for young children that may later be personalized along multiple dimensions. We conducted an initial data collection to validate that the system is at the proper challenge level for our target population, and discovered promising patterns in participants' learning styles, nonverbal behavior, and performance. We discuss our plans to leverage the data collected to learn and validate a computational, multidimensional model of number concepts learning.",
"title": ""
},
{
"docid": "f25b9147e67bd8051852142ebd82cf20",
"text": "Fossil fuels currently supply most of the world's energy needs, and however unacceptable their long-term consequences, the supplies are likely to remain adequate for the next few generations. Scientists and policy makers must make use of this period of grace to assess alternative sources of energy and determine what is scientifically possible, environmentally acceptable and technologically promising.",
"title": ""
},
{
"docid": "a08697b03ca0b8b8ea6e037fdccb8645",
"text": "Most P2P systems that provide a DHT abstraction distribute objects among “peer nodes” by choosing random identifiers for the objects. This could result in an O(log N) imbalance. Besides, P2P systems can be highly heterogeneous, i.e. they may consist of peers that range from old desktops behind modem lines to powerful servers connected to the Internet through high-bandwidth lines. In this paper, we address the problem of load balancing in such P2P systems. We explore the space of designing load-balancing algorithms that uses the notion of “virtual servers”. We present three schemes that differ primarily in the amount of information used to decide how to re-arrange load. Our simulation results show that even the simplest scheme is able to balance the load within 80% of the optimal value, while the most complex scheme is able to balance the load within 95% of the optimal value.",
"title": ""
},
{
"docid": "db83931d7fef8174acdb3a1f4ef0d043",
"text": "Physical fatigue has been identified as a risk factor associated with the onset of occupational injury. Muscular fatigue developed from repetitive hand-gripping tasks is of particular concern. This study examined the use of a maximal, repetitive, static power grip test of strength-endurance in detecting differences in exertions between workers with uninjured and injured hands, and workers who were asked to provide insincere exertions. The main dependent variable of interest was power grip muscular force measured with a force strain gauge. Group data showed that the power grip protocol, used in this study, provided a valid and reliable estimate of wrist-hand strength-endurance. Force fatigue curves showed both linear and curvilinear effects among the study groups. An endurance index based on force decrement during repetitive power grip was shown to differentiate between uninjured, injured, and insincere groups.",
"title": ""
},
{
"docid": "0f969ca56c984eb573a541318884fdaa",
"text": "One of the mechanisms by which the innate immune system senses the invasion of pathogenic microorganisms is through the Toll-like receptors (TLRs), which recognize specific molecular patterns that are present in microbial components. Stimulation of different TLRs induces distinct patterns of gene expression, which not only leads to the activation of innate immunity but also instructs the development of antigen-specific acquired immunity. Here, we review the rapid progress that has recently improved our understanding of the molecular mechanisms that mediate TLR signalling.",
"title": ""
},
{
"docid": "b9261a0d56a6305602ff27da5ec160e8",
"text": "In psychology the Rubber Hand Illusion (RHI) is an experiment where participants get the feeling that a fake hand is becoming their own. Recently, new testing methods using an action based paradigm have induced stronger RHI. However, these experiments are facing limitations because they are difficult to implement and lack of rigorous experimental conditions. This paper proposes a low-cost open source robotic hand which is easy to manufacture and removes these limitations. This device reproduces fingers movement of the participants in real time. A glove containing sensors is worn by the participant and records fingers flexion. Then a microcontroller drives hobby servo-motors on the robotic hand to reproduce the corresponding fingers position. A connection between the robotic device and a computer can be established, enabling the experimenters to tune precisely the desired parameters using Matlab. Since this is the first time a robotic hand is developed for the RHI, a validation study has been conducted. This study confirms previous results found in the literature. This study also illustrates the fact that the robotic hand can be used to conduct innovative experiments in the RHI field. Understanding such RHI is important because it can provide guidelines for prosthetic design.",
"title": ""
},
{
"docid": "60a6c8588c46fa2aa63a3348723f2bb1",
"text": "An early warning system can help to identify at-risk students, or predict student learning performance by analyzing learning portfolios recorded in a learning management system (LMS). Although previous studies have shown the applicability of determining learner behaviors from an LMS, most investigated datasets are not assembled from online learning courses or from whole learning activities undertaken on courses that can be analyzed to evaluate students’ academic achievement. Previous studies generally focus on the construction of predictors for learner performance evaluation after a course has ended, and neglect the practical value of an ‘‘early warning’’ system to predict at-risk students while a course is in progress. We collected the complete learning activities of an online undergraduate course and applied data-mining techniques to develop an early warning system. Our results showed that, timedependent variables extracted from LMS are critical factors for online learning. After students have used an LMS for a period of time, our early warning system effectively characterizes their current learning performance. Data-mining techniques are useful in the construction of early warning systems; based on our experimental results, classification and regression tree (CART), supplemented by AdaBoost is the best classifier for the evaluation of learning performance investigated by this study. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "394c8f7a708d69ca26ab0617ab1530ab",
"text": "Developing wireless sensor networks can enable information gathering, information processing and reliable monitoring of a variety of environments for both civil and military applications. It is however necessary to agree upon a basic architecture for building sensor network applications. This paper presents a general classification of sensor network applications based on their network configurations and discusses some of their architectural requirements. We propose a generic architecture for a specific subclass of sensor applications which we define as self-configurable systems where a large number of sensors coordinate amongst themselves to achieve a large sensing task. Throughout this paper we assume a certain subset of the sensors to be immobile. This paper lists the general architectural and infra-structural components necessary for building this class of sensor applications. Given the various architectural components, we present an algorithm that self-organizes the sensors into a network in a transparent manner. Some of the basic goals of our algorithm include minimizing power utilization, localizing operations and tolerating node and link failures.",
"title": ""
},
{
"docid": "38e95632ff481471ddf38c12044257df",
"text": "Retrieving object instances among cluttered scenes efficiently requires compact yet comprehensive regional image representations. Intuitively, object semantics can help build the index that focuses on the most relevant regions. However, due to the lack of bounding-box datasets for objects of interest among retrieval benchmarks, most recent work on regional representations has focused on either uniform or class-agnostic region selection. In this paper, we first fill the void by providing a new dataset of landmark bounding boxes, based on the Google Landmarks dataset, that includes 94k images with manually curated boxes from 15k unique landmarks. Then, we demonstrate how a trained landmark detector, using our new dataset, can be leveraged to index image regions and improve retrieval accuracy while being much more efficient than existing regional methods. In addition, we further introduce a novel regional aggregated selective match kernel (R-ASMK) to effectively combine information from detected regions into an improved holistic image representation. R-ASMK boosts image retrieval accuracy substantially at no additional memory cost, while even outperforming systems that index image regions independently. Our complete image retrieval system improves upon the previous state-of-the-art by significant margins on the Revisited Oxford and Paris datasets. Code and data will be released.",
"title": ""
},
{
"docid": "eb0e38817ff491fbe274caf5e7126d2d",
"text": "At the forefront of debates on language are new data demonstrating infants' early acquisition of information about their native language. The data show that infants perceptually \"map\" critical aspects of ambient language in the first year of life before they can speak. Statistical properties of speech are picked up through exposure to ambient language. Moreover, linguistic experience alters infants' perception of speech, warping perception in the service of language. Infants' strategies are unexpected and unpredicted by historical views. A new theoretical position has emerged, and six postulates of this position are described.",
"title": ""
},
{
"docid": "1938d1b72bbeec9cb9c2eed3f2c0a19a",
"text": "Domain Name System (DNS) traffic has become a rich source of information from a security perspective. However, the volume of DNS traffic has been skyrocketing, such that security analyzers experience difficulties in collecting, retrieving, and analyzing the DNS traffic in response to modern Internet threats. More precisely, much of the research relating to DNS has been negatively affected by the dramatic increase in the number of queries and domains. This phenomenon has necessitated a scalable approach, which is not dependent on the volume of DNS traffic. In this paper, we introduce a fast and scalable approach, called PsyBoG, for detecting malicious behavior within large volumes of DNS traffic. PsyBoG leverages a signal processing technique, power spectral density (PSD) analysis, to discover the major frequencies resulting from the periodic DNS queries of botnets. The PSD analysis allows us to detect sophisticated botnets regardless of their evasive techniques, sporadic behavior, and even normal users’ traffic. Furthermore, our method allows us to deal with large-scale DNS data by only utilizing the timing information of query generation regardless of the number of queries and domains. Finally, PsyBoG discovers groups of hosts which show similar patterns of malicious behavior. PsyBoG was evaluated by conducting experiments with two different data sets, namely DNS traces generated by real malware in controlled environments and a large number of real-world DNS traces collected from a recursive DNS server, an authoritative DNS server, and Top-Level Domain (TLD) servers. We utilized the malware traces as the ground truth, and, as a result, PsyBoG performed with a detection accuracy of 95%. By using a large number of DNS traces, we were able to demonstrate the scalability and effectiveness of PsyBoG in terms of practical usage. Finally, PsyBoG detected 23 unknown and 26 known botnet groups with 0.1% false positives. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "2a422c6047bca5a997d5c3d0ee080437",
"text": "Connecting mathematical logic and computation, it ensures that some aspects of programming are absolute.",
"title": ""
}
] |
scidocsrr
|
8b23a893d4cb1ebc5060bafc3c45d1bd
|
How to Make a Digital Currency on a Blockchain Stable
|
[
{
"docid": "11e19b59fa2df88f3468b4e71aab8cf4",
"text": "Blockchain is a distributed timestamp server technology introduced for realization of Bitcoin, a digital cash system. It has been attracting much attention especially in the areas of financial and legal applications. But such applications would fail if they are designed without knowledge of the fundamental differences in blockchain from existing technology. We show that blockchain is a probabilistic state machine in which participants can never commit on decisions, we also show that this probabilistic nature is necessarily deduced from the condition where the number of participants remains unknown. This work provides useful abstractions to think about blockchain, and raises discussion for promoting the better use of the technology.",
"title": ""
}
] |
[
{
"docid": "9a4a519023175802578dad5864b3dd01",
"text": "The problem of efficiently finding the best match for a query in a given set with respect to the Euclidean distance or the cosine similarity has been extensively studied. However, the closely related problem of efficiently finding the best match with respect to the inner-product has never been explored in the general setting to the best of our knowledge. In this paper we consider this problem and contrast it with the previous problems considered. First, we propose a general branch-and-bound algorithm based on a (single) tree data structure. Subsequently, we present a dual-tree algorithm for the case where there are multiple queries. Our proposed branch-and-bound algorithms are based on novel inner-product bounds. Finally we present a new data structure, the cone tree, for increasing the efficiency of the dual-tree algorithm. We evaluate our proposed algorithms on a variety of data sets from various applications, and exhibit up to five orders of magnitude improvement in query time over the naive search technique in some cases.",
"title": ""
},
{
"docid": "6cf97825d649a4f7518be9b72ea8f19f",
"text": "This paper proposes a distributed discrete-time algorithm to solve an additive cost optimization problem over undirected deterministic or time-varying graphs. Different from most previous methods that require to exchange exact states between nodes, each node in our algorithm needs only the sign of the relative state between its neighbors, which is clearly one bit of information. Our analysis is based on optimization theory rather than Lyapunov theory or algebraic graph theory. The latter is commonly used in existing literature, especially in the continuous-time algorithm design, and is difficult to apply in our case. Besides, an optimization-theory-based analysis may make our results more extendible. In particular, our convergence proofs are based on the convergences of the subgradient method and the stochastic subgradient method. Moreover, the convergence rate of our algorithm can vary from $O(1/\\ln(k))$ to $O(1/\\sqrt{k})$, depending on the choice of the stepsize. A quantile regression problem is included to illustrate the performance of our algorithm using simulations.",
"title": ""
},
{
"docid": "4b494016220eb5442642e34c3ed2d720",
"text": "BACKGROUND\nTreatments for alopecia are in high demand, but not all are safe and reliable. Dalteparin and protamine microparticles (D/P MPs) can effectively carry growth factors (GFs) in platelet-rich plasma (PRP).\n\n\nOBJECTIVE\nTo identify the effects of PRP-containing D/P MPs (PRP&D/P MPs) on hair growth.\n\n\nMETHODS & MATERIALS\nParticipants were 26 volunteers with thin hair who received five local treatments of 3 mL of PRP&D/P MPs (13 participants) or PRP and saline (control, 13 participants) at 2- to 3-week intervals and were evaluated for 12 weeks. Injected areas comprised frontal or parietal sites with lanugo-like hair. Experimental and control areas were photographed. Consenting participants underwent biopsies for histologic examination.\n\n\nRESULTS\nD/P MPs bind to various GFs contained in PRP. Significant differences were seen in hair cross-section but not in hair numbers in PRP and PRP&D/P MP injections. The addition of D/P MPs to PRP resulted in significant stimulation in hair cross-section. Microscopic findings showed thickened epithelium, proliferation of collagen fibers and fibroblasts, and increased vessels around follicles.\n\n\nCONCLUSION\nPRP&D/P MPs and PRP facilitated hair growth but D/P MPs provided additional hair growth. The authors have indicated no significant interest with commercial supporters.",
"title": ""
},
{
"docid": "97dfc67c63e7e162dd06d5cb2959912a",
"text": "To examine the pattern of injuries in cases of fatal shark attack in South Australian waters, the authors examined the files of their institution for all cases of shark attack in which full autopsies had been performed over the past 25 years, from 1974 to 1998. Of the seven deaths attributed to shark attack during this period, full autopsies were performed in only two cases. In the remaining five cases, bodies either had not been found or were incomplete. Case 1 was a 27-year-old male surfer who had been attacked by a shark. At autopsy, the main areas of injury involved the right thigh, which displayed characteristic teeth marks, extensive soft tissue damage, and incision of the femoral artery. There were also incised wounds of the right wrist. Bony injury was minimal, and no shark teeth were recovered. Case 2 was a 26-year-old male diver who had been attacked by a shark. At autopsy, the main areas of injury involved the left thigh and lower leg, which displayed characteristic teeth marks, extensive soft tissue damage, and incised wounds of the femoral artery and vein. There was also soft tissue trauma to the left wrist, with transection of the radial artery and vein. Bony injury was minimal, and no shark teeth were recovered. In both cases, death resulted from exsanguination following a similar pattern of soft tissue and vascular damage to a leg and arm. This type of injury is in keeping with predator attack from underneath or behind, with the most severe injuries involving one leg. Less severe injuries to the arms may have occurred during the ensuing struggle. Reconstruction of the damaged limb in case 2 by sewing together skin, soft tissue, and muscle bundles not only revealed that no soft tissue was missing but also gave a clearer picture of the pattern of teeth marks, direction of the attack, and species of predator.",
"title": ""
},
{
"docid": "3cd383e547b01040261dc1290d87b02e",
"text": "Abnormal condition in a power system generally leads to a fall in system frequency, and it leads to system blackout in an extreme condition. This paper presents a technique to develop an auto load shedding and islanding scheme for a power system to prevent blackout and to stabilize the system under any abnormal condition. The technique proposes the sequence and conditions of the applications of different load shedding schemes and islanding strategies. It is developed based on the international current practices. It is applied to the Bangladesh Power System (BPS), and an auto load-shedding and islanding scheme is developed. The effectiveness of the developed scheme is investigated simulating different abnormal conditions in BPS.",
"title": ""
},
{
"docid": "62c6050db8e42b1de54f8d1d54fd861f",
"text": "In this paper we present our approach of solving the PAN 2016 Author Profiling Task. It involves classifying users’ gender and age using social media posts. We used SVM classifiers and neural networks on TF-IDF and verbosity features. Results showed that SVM classifiers are better for English datasets and neural networks perform better for Dutch and Spanish datasets.",
"title": ""
},
{
"docid": "d477e2a2678de720c57895bf1d047c4b",
"text": "Interpreting predictions from tree ensemble methods such as gradient boosting machines and random forests is important, yet feature attribution for trees is often heuristic and not individualized for each prediction. Here we show that popular feature attribution methods are inconsistent, meaning they can lower a feature’s assigned importance when the true impact of that feature actually increases. This is a fundamental problem that casts doubt on any comparison between features. To address it we turn to recent applications of game theory and develop fast exact tree solutions for SHAP (SHapley Additive exPlanation) values, which are the unique consistent and locally accurate attribution values. We then extend SHAP values to interaction effects and define SHAP interaction values. We propose a rich visualization of individualized feature attributions that improves over classic attribution summaries and partial dependence plots, and a unique “supervised” clustering (clustering based on feature attributions). We demonstrate better agreement with human intuition through a user study, exponential improvements in run time, improved clustering performance, and better identification of influential features. An implementation of our algorithm has also been merged into XGBoost and LightGBM, see http://github.com/slundberg/shap for details. ACM Reference Format: Scott M. Lundberg, Gabriel G. Erion, and Su-In Lee. 2018. Consistent Individualized Feature Attribution for Tree Ensembles. In Proceedings of ACM (KDD’18). ACM, New York, NY, USA, 9 pages. https://doi.org/none",
"title": ""
},
{
"docid": "d29eba4f796cb642d64e73b76767e59d",
"text": "In this paper, a novel segmentation and recognition approach to automatically extract street lighting poles from mobile LiDAR data is proposed. First, points on or around the ground are extracted and removed through a piecewise elevation histogram segmentation method. Then, a new graph-cut-based segmentation method is introduced to extract the street lighting poles from each cluster obtained through a Euclidean distance clustering algorithm. In addition to the spatial information, the street lighting pole's shape and the point's intensity information are also considered to formulate the energy function. Finally, a Gaussian-mixture-model-based method is introduced to recognize the street lighting poles from the candidate clusters. The proposed approach is tested on several point clouds collected by different mobile LiDAR systems. Experimental results show that the proposed method is robust to noises and achieves an overall performance of 90% in terms of true positive rate.",
"title": ""
},
{
"docid": "3f5c761e5c5dbfd5aa1d1d9af736e5fd",
"text": "In this paper, a double L-slot microstrip patch antenna array using Coplanar waveguide feed for Wireless Local Area Network (WLAN) and Worldwide Interoperability for Microwave Access (WiMAX) frequency bands are presented. The proposed antenna is fabricated on Aluminum Nitride Ceramic substrate with dielectric constant 8.8 and thickness of 1.5mm. The key feature of this substrate is that it can withstand in high temperature. The return loss is about -31dB at the operating frequency of 3.6GHz with 50Ω input impedance. The basic parameters of the proposed antenna such as return loss, VSWR, and radiation pattern are simulated using Ansoft HFSS. Simulation results of antenna parameters of single patch and double patch antenna array are analyzed and presented.",
"title": ""
},
{
"docid": "0bd720d912575c0810c65d04f6b1712b",
"text": "Digital painters commonly use a tablet and stylus to drive software like Adobe Photoshop. A high quality stylus with 6 degrees of freedom (DOFs: 2D position, pressure, 2D tilt, and 1D rotation) coupled to a virtual brush simulation engine allows skilled users to produce expressive strokes in their own style. However, such devices are difficult for novices to control, and many people draw with less expensive (lower DOF) input devices. This paper presents a data-driven approach for synthesizing the 6D hand gesture data for users of low-quality input devices. Offline, we collect a library of strokes with 6D data created by trained artists. Online, given a query stroke as a series of 2D positions, we synthesize the 4D hand pose data at each sample based on samples from the library that locally match the query. This framework optionally can also modify the stroke trajectory to match characteristic shapes in the style of the library. Our algorithm outputs a 6D trajectory that can be fed into any virtual brush stroke engine to make expressive strokes for novices or users of limited hardware.",
"title": ""
},
{
"docid": "b2032f8912fac19b18bc5a836c3536e9",
"text": "Electroencephalographic measurements are commonly used in medical and research areas. This review article presents an introduction into EEG measurement. Its purpose is to help with orientation in EEG field and with building basic knowledge for performing EEG recordings. The article is divided into two parts. In the first part, background of the subject, a brief historical overview, and some EEG related research areas are given. The second part explains EEG recording.",
"title": ""
},
{
"docid": "5e64e36e76f4c0577ae3608b6e715a1f",
"text": "Deep learning has recently become very popular on account of its incredible success in many complex datadriven applications, including image classification and speech recognition. The database community has worked on data-driven applications for many years, and therefore should be playing a lead role in supporting this new wave. However, databases and deep learning are different in terms of both techniques and applications. In this paper, we discuss research problems at the intersection of the two fields. In particular, we discuss possible improvements for deep learning systems from a database perspective, and analyze database applications that may benefit from deep learning techniques.",
"title": ""
},
{
"docid": "8a50b086b61e19481cc3dee78a785f09",
"text": "A new approach to the online classification of streaming data is introduced in this paper. It is based on a self-developing (evolving) fuzzy-rule-based (FRB) classifier system of Takagi-Sugeno ( eTS) type. The proposed approach, called eClass (evolving class ifier), includes different architectures and online learning methods. The family of alternative architectures includes: 1) eClass0, with the classifier consequents representing class label and 2) the newly proposed method for regression over the features using a first-order eTS fuzzy classifier, eClass1. An important property of eClass is that it can start learning ldquofrom scratch.rdquo Not only do the fuzzy rules not need to be prespecified, but neither do the number of classes for eClass (the number may grow, with new class labels being added by the online learning process). In the event that an initial FRB exists, eClass can evolve/develop it further based on the newly arrived data. The proposed approach addresses the practical problems of the classification of streaming data (video, speech, sensory data generated from robotic, advanced industrial applications, financial and retail chain transactions, intruder detection, etc.). It has been successfully tested on a number of benchmark problems as well as on data from an intrusion detection data stream to produce a comparison with the established approaches. The results demonstrate that a flexible (with evolving structure) FRB classifier can be generated online from streaming data achieving high classification rates and using limited computational resources.",
"title": ""
},
{
"docid": "7ba0a2631c104e80c43aba739567b248",
"text": "We consider a stochastic bandit problem with infinitely many arms. In this setting, the learner has no chance of trying all the arms even once and has to dedicate its limited number of samples only to a certain number of arms. All previous algorithms for this setting were designed for minimizing the cumulative regret of the learner. In this paper, we propose an algorithm aiming at minimizing the simple regret. As in the cumulative regret setting of infinitely many armed bandits, the rate of the simple regret will depend on a parameter β characterizing the distribution of the near-optimal arms. We prove that depending on β, our algorithm is minimax optimal either up to a multiplicative constant or up to a log(n) factor. We also provide extensions to several important cases: when β is unknown, in a natural setting where the near-optimal arms have a small variance, and in the case of unknown time horizon.",
"title": ""
},
{
"docid": "8f876345827e55e8ff241afa99c6bb70",
"text": "Reef-building corals occur as a range of colour morphs because of varying types and concentrations of pigments within the host tissues, but little is known about their physiological or ecological significance. Here, we examined whether specific host pigments act as an alternative mechanism for photoacclimation in the coral holobiont. We used the coral Montipora monasteriata (Forskål 1775) as a case study because it occurs in multiple colour morphs (tan, blue, brown, green and red) within varying light-habitat distributions. We demonstrated that two of the non-fluorescent host pigments are responsive to changes in external irradiance, with some host pigments up-regulating in response to elevated irradiance. This appeared to facilitate the retention of antennal chlorophyll by endosymbionts and hence, photosynthetic capacity. Specifically, net P(max) Chl a(-1) correlated strongly with the concentration of an orange-absorbing non-fluorescent pigment (CP-580). This had major implications for the energetics of bleached blue-pigmented (CP-580) colonies that maintained net P(max) cm(-2) by increasing P(max) Chl a(-1). The data suggested that blue morphs can bleach, decreasing their symbiont populations by an order of magnitude without compromising symbiont or coral health.",
"title": ""
},
{
"docid": "d01198e88f91a47a1777337d0db41939",
"text": "Ultra low quiescent, wide output current range low-dropout regulators (LDO) are in high demand in portable applications to extend battery lives. This paper presents a 500 nA quiescent, 0 to 100 mA load, 3.5–7 V input to 3 V output LDO in a digital 0.35 μm 2P3M CMOS technology. The challenges in designing with nano-ampere of quiescent current are discussed, namely the leakage, the parasitics, and the excessive DC gain. CMOS super source follower voltage buffer and input excessive gain reduction are then proposed. The LDO is internally compensated using Ahuja method with a minimum phase margin of 55° across all load conditions. The maximum transient voltage variation is less than 150 and 75 mV when used with 1 and 10 μF external capacitor. Compared with existing work, this LDO achieves the best transient flgure-of-merit with close to best dynamic current efficiency (maximum-to-quiescent current ratio).",
"title": ""
},
{
"docid": "6fd8226482617b0997640b8783ad2445",
"text": "OBJECTIVES\nThis article presents a new tool that helps systematic reviewers to extract and compare implementation data across primary trials. Currently, systematic review guidance does not provide guidelines for the identification and extraction of data related to the implementation of the underlying interventions.\n\n\nSTUDY DESIGN AND SETTING\nA team of systematic reviewers used a multistaged consensus development approach to develop this tool. First, a systematic literature search on the implementation and synthesis of clinical trial evidence was performed. The team then met in a series of subcommittees to develop an initial draft index. Drafts were presented at several research conferences and circulated to methodological experts in various health-related disciplines for feedback. The team systematically recorded, discussed, and incorporated all feedback into further revisions. A penultimate draft was discussed at the 2010 Cochrane-Campbell Collaboration Colloquium to finalize its content.\n\n\nRESULTS\nThe Oxford Implementation Index provides a checklist of implementation data to extract from primary trials. Checklist items are organized into four domains: intervention design, actual delivery by trial practitioners, uptake of the intervention by participants, and contextual factors. Systematic reviewers piloting the index at the Cochrane-Campbell Colloquium reported that the index was helpful for the identification of implementation data.\n\n\nCONCLUSION\nThe Oxford Implementation Index provides a framework to help reviewers assess implementation data across trials. Reviewers can use this tool to identify implementation data, extract relevant information, and compare features of implementation across primary trials in a systematic review. The index is a work-in-progress, and future efforts will focus on refining the index, improving usability, and integrating the index with other guidance on systematic reviewing.",
"title": ""
},
{
"docid": "318938c2dd173a511d03380826d31bd9",
"text": "The theory and construction of the HP-1430A feed-through sampling head are reviewed, and a model for the sampling head is developed from dimensional and electrical measurements in conjunction with electromagnetic, electronic, and network theory. The model was used to predict the sampling-head step response needed for the deconvolution of true input waveforms. The dependence of the sampling-head step response on the sampling diode bias is investigated. Calculations based on the model predict step response transition durations of 27.5 to 30.5 ps for diode reverse bias values of -1.76 to -1.63 V.",
"title": ""
},
{
"docid": "2276f5bd8866d54128bd1782a748eb43",
"text": "8.5 Printing 304 8.5.1 Overview 304 8.5.2 Inks and subtractive color calculations 304 8.5.2.1 Density 305 8.5.3 Continuous tone printing 306 8.5.4 Halftoning 307 8.5.4.1 Traditional halftoning 307 8.5.5 Digital halftoning 308 8.5.5.1 Cluster dot dither 310 8.5.5.2 Bayer dither and void and cluster dither 310 8.5.5.3 Error diffusion 311 8.5.5.4 Color digital halftoning 312 8.5.6 Print characterization 313 8.5.6.1 Transduction: the tone reproduction curve 313 8.6",
"title": ""
},
{
"docid": "93151277f8325a15c569d77dc973c1a8",
"text": "A class of binary quasi-cyclic burst error-correcting codes based upon product codes is studied. An expression for the maximum burst error-correcting capability for each code in the class is given. In certain cases the codes reduce to Gilbert codes, which are cyclic. Often codes exist in the class which have the same block length and number of check bits as the Gilbert codes but correct longer bursts of errors than Gilbert codes. By shortening the codes, it is possible to design codes which achieve the Reiger bound.",
"title": ""
}
] |
scidocsrr
|
9a55767aba9c03100f383feb17188a74
|
Isolated Swiss-Forward Three-Phase Rectifier With Resonant Reset
|
[
{
"docid": "ee6461f83cee5fdf409a130d2cfb1839",
"text": "This paper introduces a novel three-phase buck-type unity power factor rectifier appropriate for high power Electric Vehicle battery charging mains interfaces. The characteristics of the converter, named the Swiss Rectifier, including the principle of operation, modulation strategy, suitable control structure, and dimensioning equations are described in detail. Additionally, the proposed rectifier is compared to a conventional 6-switch buck-type ac-dc power conversion. According to the results, the Swiss Rectifier is the topology of choice for a buck-type PFC. Finally, the feasibility of the Swiss Rectifier concept for buck-type rectifier applications is demonstrated by means of a hardware prototype.",
"title": ""
}
] |
[
{
"docid": "fe8f31db9c3e8cbe9d69e146c40abb49",
"text": "BACKGROUND\nRegular physical activity (PA) can be beneficial to pregnant women, however, many women do not adhere to current PA guidelines during the antenatal period. Patient and public involvement is essential when designing antenatal PA interventions in order to uncover the reasons for non-adherence and non-engagement with the behaviour, as well as determining what type of intervention would be acceptable. The aim of this research was to explore women's experiences of PA during a recent pregnancy, understand the barriers and determinants of antenatal PA and explore the acceptability of antenatal walking groups for further development.\n\n\nMETHODS\nSeven focus groups were undertaken with women who had given birth within the past five years. Focus groups were transcribed and analysed using a grounded theory approach. Relevant and related behaviour change techniques (BCTs), which could be applied to future interventions, were identified using the BCT taxonomy.\n\n\nRESULTS\nWomen's opinions and experiences of PA during pregnancy were categorised into biological/physical (including tiredness and morning sickness), psychological (fear of harm to baby and self-confidence) and social/environmental issues (including access to facilities). Although antenatal walking groups did not appear popular, women identified some factors which could encourage attendance (e.g. childcare provision) and some which could discourage attendance (e.g. walking being boring). It was clear that the personality of the walk leader would be extremely important in encouraging women to join a walking group and keep attending. Behaviour change technique categories identified as potential intervention components included social support and comparison of outcomes (e.g. considering pros and cons of behaviour).\n\n\nCONCLUSIONS\nWomen's experiences and views provided a range of considerations for future intervention development, including provision of childcare, involvement of a fun and engaging leader and a range of activities rather than just walking. These experiences and views relate closely to the Health Action Process Model which, along with BCTs, could be used to develop future interventions. The findings of this study emphasise the importance of involving the target population in intervention development and present the theoretical foundation for building an antenatal PA intervention to encourage women to be physically active throughout their pregnancies.",
"title": ""
},
{
"docid": "f6ba46b72139f61cfb098656d71553ed",
"text": "This paper introduces the Voice Conversion Octave Toolbox made available to the public as open source. The first version of the toolbox features tools for VTLN-based voice conversion supporting a variety of warping functions. The authors describe the implemented functionality and how to configure the included tools.",
"title": ""
},
{
"docid": "d92f9a08b608f895f004e69c7893f2f0",
"text": "Although research has determined that reactive oxygen species (ROS) function as signaling molecules in plant development, the molecular mechanism by which ROS regulate plant growth is not well known. An aba overly sensitive mutant, abo8-1, which is defective in a pentatricopeptide repeat (PPR) protein responsible for the splicing of NAD4 intron 3 in mitochondrial complex I, accumulates more ROS in root tips than the wild type, and the ROS accumulation is further enhanced by ABA treatment. The ABO8 mutation reduces root meristem activity, which can be enhanced by ABA treatment and reversibly recovered by addition of certain concentrations of the reducing agent GSH. As indicated by low ProDR5:GUS expression, auxin accumulation/signaling was reduced in abo8-1. We also found that ABA inhibits the expression of PLETHORA1 (PLT1) and PLT2, and that root growth is more sensitive to ABA in the plt1 and plt2 mutants than in the wild type. The expression of PLT1 and PLT2 is significantly reduced in the abo8-1 mutant. Overexpression of PLT2 in an inducible system can largely rescue root apical meristem (RAM)-defective phenotype of abo8-1 with and without ABA treatment. These results suggest that ABA-promoted ROS in the mitochondria of root tips are important retrograde signals that regulate root meristem activity by controlling auxin accumulation/signaling and PLT expression in Arabidopsis.",
"title": ""
},
{
"docid": "bc272e837f1071fabcc7056134bae784",
"text": "Parental vaccine hesitancy is a growing problem affecting the health of children and the larger population. This article describes the evolution of the vaccine hesitancy movement and the individual, vaccine-specific and societal factors contributing to this phenomenon. In addition, potential strategies to mitigate the rising tide of parent vaccine reluctance and refusal are discussed.",
"title": ""
},
{
"docid": "f55c9ef1e60afd326bebbb619452fd97",
"text": "With the flourish of the Web, online review is becoming a more and more useful and important information resource for people. As a result, automatic review mining and summarization has become a hot research topic recently. Different from traditional text summarization, review mining and summarization aims at extracting the features on which the reviewers express their opinions and determining whether the opinions are positive or negative. In this paper, we focus on a specific domain - movie review. A multi-knowledge based approach is proposed, which integrates WordNet, statistical analysis and movie knowledge. The experimental results show the effectiveness of the proposed approach in movie review mining and summarization.",
"title": ""
},
{
"docid": "42b6c55e48f58e3e894de84519cb6feb",
"text": "What social value do Likes on Facebook hold? This research examines peopleâs attitudes and behaviors related to receiving one-click feedback in social media. Likes and other kinds of lightweight affirmation serve as social cues of acceptance and maintain interpersonal relationships, but may mean different things to different people. Through surveys and de-identified, aggregated behavioral Facebook data, we find that in general, people care more about who Likes their posts than how many Likes they receive, desiring feedback most from close friends, romantic partners, and family members other than their parents. While most people do not feel strongly that receiving “enough” Likes is important, roughly two-thirds of posters regularly receive more than “enough.” We also note a “Like paradox,” a phenomenon in which peopleâs friends receive more Likes because their friends have more friends to provide those Likes. Individuals with lower levels of self-esteem and higher levels of self-monitoring are more likely to think that Likes are important and to feel bad if they do not receive “enough” Likes. The results inform product design and our understanding of how lightweight interactions shape our experiences online.",
"title": ""
},
{
"docid": "48fffb441a5e7f304554e6bdef6b659e",
"text": "The massive accumulation of genome-sequences in public databases promoted the proliferation of genome-level phylogenetic analyses in many areas of biological research. However, due to diverse evolutionary and genetic processes, many loci have undesirable properties for phylogenetic reconstruction. These, if undetected, can result in erroneous or biased estimates, particularly when estimating species trees from concatenated datasets. To deal with these problems, we developed GET_PHYLOMARKERS, a pipeline designed to identify high-quality markers to estimate robust genome phylogenies from the orthologous clusters, or the pan-genome matrix (PGM), computed by GET_HOMOLOGUES. In the first context, a set of sequential filters are applied to exclude recombinant alignments and those producing anomalous or poorly resolved trees. Multiple sequence alignments and maximum likelihood (ML) phylogenies are computed in parallel on multi-core computers. A ML species tree is estimated from the concatenated set of top-ranking alignments at the DNA or protein levels, using either FastTree or IQ-TREE (IQT). The latter is used by default due to its superior performance revealed in an extensive benchmark analysis. In addition, parsimony and ML phylogenies can be estimated from the PGM. We demonstrate the practical utility of the software by analyzing 170 Stenotrophomonas genome sequences available in RefSeq and 10 new complete genomes of Mexican environmental S. maltophilia complex (Smc) isolates reported herein. A combination of core-genome and PGM analyses was used to revise the molecular systematics of the genus. An unsupervised learning approach that uses a goodness of clustering statistic identified 20 groups within the Smc at a core-genome average nucleotide identity (cgANIb) of 95.9% that are perfectly consistent with strongly supported clades on the core- and pan-genome trees. In addition, we identified 16 misclassified RefSeq genome sequences, 14 of them labeled as S. maltophilia, demonstrating the broad utility of the software for phylogenomics and geno-taxonomic studies. The code, a detailed manual and tutorials are freely available for Linux/UNIX servers under the GNU GPLv3 license at https://github.com/vinuesa/get_phylomarkers. A docker image bundling GET_PHYLOMARKERS with GET_HOMOLOGUES is available at https://hub.docker.com/r/csicunam/get_homologues/, which can be easily run on any platform.",
"title": ""
},
{
"docid": "67136c5bd9277e0637393e9a131d7b53",
"text": "BACKGROUND\nSynchronous written conversations (or \"chats\") are becoming increasingly popular as Web-based mental health interventions. Therefore, it is of utmost importance to evaluate and summarize the quality of these interventions.\n\n\nOBJECTIVE\nThe aim of this study was to review the current evidence for the feasibility and effectiveness of online one-on-one mental health interventions that use text-based synchronous chat.\n\n\nMETHODS\nA systematic search was conducted of the databases relevant to this area of research (Medical Literature Analysis and Retrieval System Online [MEDLINE], PsycINFO, Central, Scopus, EMBASE, Web of Science, IEEE, and ACM). There were no specific selection criteria relating to the participant group. Studies were included if they reported interventions with individual text-based synchronous conversations (ie, chat or text messaging) and a psychological outcome measure.\n\n\nRESULTS\nA total of 24 articles were included in this review. Interventions included a wide range of mental health targets (eg, anxiety, distress, depression, eating disorders, and addiction) and intervention design. Overall, compared with the waitlist (WL) condition, studies showed significant and sustained improvements in mental health outcomes following synchronous text-based intervention, and post treatment improvement equivalent but not superior to treatment as usual (TAU) (eg, face-to-face and telephone counseling).\n\n\nCONCLUSIONS\nFeasibility studies indicate substantial innovation in this area of mental health intervention with studies utilizing trained volunteers and chatbot technologies to deliver interventions. While studies of efficacy show positive post-intervention gains, further research is needed to determine whether time requirements for this mode of intervention are feasible in clinical practice.",
"title": ""
},
{
"docid": "8f0b7554ff0d9f6bf0d1cf8579dc2893",
"text": "Recent advances in Convolutional Neural Networks (CNNs) have obtained promising results in difficult deep learning tasks. However, the success of a CNN depends on finding an architecture to fit a given problem. A hand-crafted architecture is a challenging, time-consuming process that requires expert knowledge and effort, due to a large number of architectural design choices. In this article, we present an efficient framework that automatically designs a high-performing CNN architecture for a given problem. In this framework, we introduce a new optimization objective function that combines the error rate and the information learnt by a set of feature maps using deconvolutional networks (deconvnet). The new objective function allows the hyperparameters of the CNN architecture to be optimized in a way that enhances the performance by guiding the CNN through better visualization of learnt features via deconvnet. The actual optimization of the objective function is carried out via the Nelder-Mead Method (NMM). Further, our new objective function results in much faster convergence towards a better architecture. The proposed framework has the ability to explore a CNN architecture’s numerous design choices in an efficient way and also allows effective, distributed execution and synchronization via web services. Empirically, we demonstrate that the CNN architecture designed with our approach outperforms several existing approaches in terms of its error rate. Our results are also competitive with state-of-the-art results on the MNIST dataset and perform reasonably against the state-of-the-art results on CIFAR-10 and CIFAR-100 datasets. Our approach has a significant role in increasing the depth, reducing the size of strides, and constraining some convolutional layers not followed by pooling layers in order to find a CNN architecture that produces a high recognition performance.",
"title": ""
},
{
"docid": "ccf7390abc2924e4d2136a2b82639115",
"text": "The proposition of increased innovation in network applications and reduced cost for network operators has won over the networking world to the vision of software-defined networking (SDN). With the excitement of holistic visibility across the network and the ability to program network devices, developers have rushed to present a range of new SDN-compliant hardware, software, and services. However, amidst this frenzy of activity, one key element has only recently entered the debate: Network Security. In this paper, security in SDN is surveyed presenting both the research community and industry advances in this area. The challenges to securing the network from the persistent attacker are discussed, and the holistic approach to the security architecture that is required for SDN is described. Future research directions that will be key to providing network security in SDN are identified.",
"title": ""
},
{
"docid": "e34815efa68cb1b7a269e436c838253d",
"text": "A new mobile robot prototype for inspection of overhead transmission lines is proposed. The mobile platform is composed of 3 arms. And there is a motorized rubber wheel on the end of each arm. On the two end arms, a gripper is designed to clamp firmly onto the conductors from below to secure the robot. Each arm has a motor to achieve 2 degrees of freedom which is realized by moving along a curve. It could roll over some obstacles (compression splices, vibration dampers, etc). And the robot could clear other types of obstacles (spacers, suspension clamps, etc).",
"title": ""
},
{
"docid": "e45c921effd9b5026f34ff738b63c48c",
"text": "We consider the problem of weakly supervised learning for object localization. Given a collection of images with image-level annotations indicating the presence/absence of an object, our goal is to localize the object in each image. We propose a neural network architecture called the attention network for this problem. Given a set of candidate regions in an image, the attention network first computes an attention score on each candidate region in the image. Then these candidate regions are combined together with their attention scores to form a whole-image feature vector. This feature vector is used for classifying the image. The object localization is implicitly achieved via the attention scores on candidate regions. We demonstrate that our approach achieves superior performance on several benchmark datasets.",
"title": ""
},
{
"docid": "db2553268fc3ccaddc3ec7077514655c",
"text": "Aspect extraction is a task to abstract the common properties of objects from corpora discussing them, such as reviews of products. Recent work on aspect extraction is leveraging the hierarchical relationship between products and their categories. However, such effort focuses on the aspects of child categories but ignores those from parent categories. Hence, we propose an LDA-based generative topic model inducing the two-layer categorical information (CAT-LDA), to balance the aspects of both a parent category and its child categories. Our hypothesis is that child categories inherit aspects from parent categories, controlled by the hierarchy between them. Experimental results on 5 categories of Amazon.com products show that both common aspects of parent category and the individual aspects of subcategories can be extracted to align well with the common sense. We further evaluate the manually extracted aspects of 16 products, resulting in an average hit rate of 79.10%.",
"title": ""
},
{
"docid": "6e07085f81dc4f6892e0f2aba7a8dcdd",
"text": "With the rapid growth in the number of spiraling network users and the increase in the use of communication technologies, the multi-server environment is the most common environment for widely deployed applications. Reddy et al. recently showed that Lu et al.'s biometric-based authentication scheme for multi-server environment was insecure, and presented a new authentication and key-agreement scheme for the multi-server. Reddy et al. continued to assert that their scheme was more secure and practical. After a careful analysis, however, their scheme still has vulnerabilities to well-known attacks. In this paper, the vulnerabilities of Reddy et al.'s scheme such as the privileged insider and user impersonation attacks are demonstrated. A proposal is then presented of a new biometric-based user authentication scheme for a key agreement and multi-server environment. Lastly, the authors demonstrate that the proposed scheme is more secure using widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool, and that it serves to satisfy all of the required security properties.",
"title": ""
},
{
"docid": "b5b7bef8ec2d38bb2821dc380a3a49bf",
"text": "Maternal uniparental disomy (UPD) 7 is found in approximately 5% of patients with Silver-Russell syndrome. By a descriptive and comparative clinical analysis of all published cases (more than 60 to date) their phenotype is updated and compared with the clinical findings in patients with Sliver-Russell syndrome (SRS) of either unexplained etiology or epimutations of the imprinting center region 1 (ICR1) on 11p15. The higher frequency of relative macrocephaly and high forehead/frontal bossing makes the face of patients with epimutations of the ICR1 on 11p15 more distinctive than the face of cases with SRS of unexplained etiology or maternal UPD 7. Because of the distinct micrognathia in the latter, their triangular facial gestalt is more pronounced than in the other groups. However, solely by clinical findings patients with maternal UPD 7 cannot be discriminated unambiguously from patients with epimutations of the ICR1 on 11p15 or SRS of unexplained etiology. Therefore, both loss of methylation of the ICR1 on 11p15 and maternal UPD 7 should be investigated for if SRS is suspected.",
"title": ""
},
{
"docid": "82779e315cf982b56ed14396603ae251",
"text": "The selection of drain current, inversion coefficient, and channel length for each MOS device in an analog circuit results in significant tradeoffs in performance. The selection of inversion coefficient, which is a numerical measure of MOS inversion, enables design freely in weak, moderate, and strong inversion and facilitates optimum design. Here, channel width required for layout is easily found and implicitly considered in performance expressions. This paper gives hand expressions motivated by the EKV MOS model and measured data for MOS device performance, inclusive of velocity saturation and other small-geometry effects. A simple spreadsheet tool is then used to predict MOS device performance and map this into complete circuit performance. Tradeoffs and optimization of performance are illustrated by the design of three, 0.18-mum CMOS operational transconductance amplifiers optimized for DC, balanced, and AC performance. Measured performance shows significant tradeoffs in voltage gain, output resistance, transconductance bandwidth, input-referred flicker noise and offset voltage, and layout area.",
"title": ""
},
{
"docid": "b49a8894277278256b6c1430bb4e4a91",
"text": "In the past years, several support vector machines (SVM) novelty detection approaches have been applied on the network intrusion detection field. The main advantage of these approaches is that they can characterize normal traffic even when trained with datasets containing not only normal traffic but also a number of attacks. Unfortunately, these algorithms seem to be accurate only when the normal traffic vastly outnumbers the number of attacks present in the dataset. A situation which can not be always hold This work presents an approach for autonomous labeling of normal traffic as a way of dealing with situations where class distribution does not present the imbalance required for SVM algorithms. In this case, the autonomous labeling process is made by SNORT, a misuse-based intrusion detection system. Experiments conducted on the 1998 DARPA dataset show that the use of the proposed autonomous labeling approach not only outperforms existing SVM alternatives but also, under some attack distributions, obtains improvements over SNORT itself.",
"title": ""
},
{
"docid": "4d5e8e1c8942256088f1c5ef0e122c9f",
"text": "Cybercrime and cybercriminal activities continue to impact communities as the steady growth of electronic information systems enables more online business. The collective views of sixty-six computer users and organizations, that have an exposure to cybercrime, were analyzed using concept analysis and mapping techniques in order to identify the major issues and areas of concern, and provide useful advice. The findings of the study show that a range of computing stakeholders have genuine concerns about the frequency of information security breaches and malware incursions (including the emergence of dangerous security and detection avoiding malware), the need for e-security awareness and education, the roles played by law and law enforcement, and the installation of current security software and systems. While not necessarily criminal in nature, some stakeholders also expressed deep concerns over the use of computers for cyberbullying, particularly where younger and school aged users are involved. The government’s future directions and recommendations for the technical and administrative management of cybercriminal activity were generally observed to be consistent with stakeholder concerns, with some users also taking practical steps to reduce cybercrime risks. a 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b23e141ca479abecab2b00f13141b9b3",
"text": "The prediction of movement time in human-computer interfaces as undertaken using Fitts' law is reviewed. Techniques for model building are summarized and three refinements to improve the theoretical and empirical accuracy of the law are presented. Refinements include (1) the Shannon formulation for the index of task difficulty, (2) new interpretations of \"target width\" for twoand three-dimensional tasks, and (3) a technique for normalizing error rates across experimental factors . Finally, a detailed application example is developed showing the potential of Fitts' law to predict and compare the performance of user interfaces before designs are finalized.",
"title": ""
},
{
"docid": "c034cb6e72bc023a60b54d0f8316045a",
"text": "This thesis presents the design, implementation, and valid ation of a system that enables a micro air vehicle to autonomously explore and map unstruct u ed and unknown indoor environments. Such a vehicle would be of considerable use in many real-world applications such as search and rescue, civil engineering inspection, an d a host of military tasks where it is dangerous or difficult to send people. While mapping and exploration capabilities are common for ground vehicles today, air vehicles seeking t o achieve these capabilities face unique challenges. While there has been recent progres s toward sensing, control, and navigation suites for GPS-denied flight, there have been few demonstrations of stable, goal-directed flight in real environments. The main focus of this research is the development of real-ti me state estimation techniques that allow our quadrotor helicopter to fly autonomous ly in indoor, GPS-denied environments. Accomplishing this feat required the developm ent of a large integrated system that brought together many components into a cohesive packa ge. As such, the primary contribution is the development of the complete working sys tem. I show experimental results that illustrate the MAV’s ability to navigate accurat ely in unknown environments, and demonstrate that our algorithms enable the MAV to operate au tonomously in a variety of indoor environments. Thesis Supervisor: Nicholas Roy Title: Associate Professor of Aeronautics and Astronautic s",
"title": ""
}
] |
scidocsrr
|
5dfc521aa0b4e8ca3fe63d828d91068d
|
Parallel Concatenated Trellis Coded Modulation1
|
[
{
"docid": "5ef37c0620e087d3552499e2b9b4fc84",
"text": "A parallel concatenated coding scheme consists of two simple constituent systematic encoders linked by an interleaver. The input bits to the first encoder are scrambled by the interleaver before entering the second encoder. The codeword of the parallel concatenated code consists of the input bits to the first encoder followed by the parity check bits of both encoders. This construction can be generalized to any number of constituent codes. Parallel concatenated schemes employing two convolutional codes as constituent codes, in connection with an iterative decoding algorithm of complexity comparable to that of the constituent codes, have been recently shown to yield remarkable coding gains close to theoretical limits. They have been named, and are known as, “turbo codes.” We propose a method to evaluate an upper bound to the bit error probability of a parallel concatenated coding scheme averaged over all interleavers of a given length. The analytical bounding technique is then used to shed some light on some crucial questions which have been floating around in the communications community since the proposal of turbo codes.",
"title": ""
}
] |
[
{
"docid": "889c8754c97db758b474a6f140b39911",
"text": "Herbal toothpaste Salvadora with comprehensive effective materials for dental health ranging from antibacterial, detergent and whitening properties including benzyl isothiocyanate, alkaloids, and anions such as thiocyanate, sulfate, and nitrate with potential antibacterial feature against oral microbial flora, silica and chloride for oral disinfection and bleaching the tooth, fluoride to strengthen tooth enamel, and saponin with appropriate detergent, and resin which protects tooth enamel by placing on it and is aggregated in Salvadora has been formulated. The paste is also from other herbs extract including valerian and chamomile. Current toothpaste has antibacterial, anti-plaque, anti-tartar and whitening, and wood extract of the toothbrush strengthens the tooth and enamel, and prevents the cancellation of enamel.From the other side, resin present in toothbrush wood creates a proper covering on tooth enamel and protects it against decay and benzyl isothiocyanate and also alkaloids present in miswak wood gives Salvadora toothpaste considerable antibacterial and bactericidal effects. Anti-inflammatory effects of the toothpaste are for apigenin and alpha bisabolol available in chamomile extract and seskuiterpen components including valeric acid with sedating features give the paste sedating and calming effect to oral tissues.",
"title": ""
},
{
"docid": "0aab0c0fa6a1b0f283478b390dece614",
"text": "Hydrokinetic turbines can provide a source of electricity for remote areas located near a river or stream. The objective of this paper is to describe the design, simulation, build, and testing of a novel hydrokinetic turbine. The main components of the system are a permanent magnet synchronous generator (PMSG), a machined H-Darrieus rotor, an embedded controls system, and a cataraft. The design and construction of this device was conducted at the Oregon Institute of Technology in Wilsonville, Oregon.",
"title": ""
},
{
"docid": "8a564e77710c118e4de86be643b061a6",
"text": "SOAR is a cognitive architecture named from state, operator and result, which is adopted to portray the drivers’ guidance compliance behavior on variable message sign VMS in this paper. VMS represents traffic conditions to drivers by three colors: red, yellow, and green. Based on the multiagent platform, SOAR is introduced to design the agent with the detailed description of the working memory, long-term memory, decision cycle, and learning mechanism. With the fixed decision cycle, agent transforms state through four kinds of operators, including choosing route directly, changing the driving goal, changing the temper of driver, and changing the road condition of prediction. The agent learns from the process of state transformation by chunking and reinforcement learning. Finally, computerized simulation program is used to study the guidance compliance behavior. Experiments are simulated many times under given simulation network and conditions. The result, including the comparison between guidance and no guidance, the state transition times, and average chunking times are analyzed to further study the laws of guidance compliance and learning mechanism.",
"title": ""
},
{
"docid": "f6669d0b53dd0ca789219874d35bf14e",
"text": "Saliva in the mouth is a biofluid produced mainly by three pairs of major salivary glands--the submandibular, parotid and sublingual glands--along with secretions from many minor submucosal salivary glands. Salivary gland secretion is a nerve-mediated reflex and the volume of saliva secreted is dependent on the intensity and type of taste and on chemosensory, masticatory or tactile stimulation. Long periods of low (resting or unstimulated) flow are broken by short periods of high flow, which is stimulated by taste and mastication. The nerve-mediated salivary reflex is modulated by nerve signals from other centers in the central nervous system, which is most obvious as hyposalivation at times of anxiety. An example of other neurohormonal influences on the salivary reflex is the circadian rhythm, which affects salivary flow and ionic composition. Cholinergic parasympathetic and adrenergic sympathetic autonomic nerves evoke salivary secretion, signaling through muscarinic M3 and adrenoceptors on salivary acinar cells and leading to secretion of fluid and salivary proteins. Saliva gland acinar cells are chloride and sodium secreting, and the isotonic fluid produced is rendered hypotonic by salivary gland duct cells as it flows to the mouth. The major proteins present in saliva are secreted by salivary glands, creating viscoelasticity and enabling the coating of oral surfaces with saliva. Salivary films are essential for maintaining oral health and regulating the oral microbiome. Saliva in the mouth contains a range of validated and potential disease biomarkers derived from epithelial cells, neutrophils, the microbiome, gingival crevicular fluid and serum. For example, cortisol levels are used in the assessment of stress, matrix metalloproteinases-8 and -9 appear to be promising markers of caries and periodontal disease, and a panel of mRNA and proteins has been proposed as a marker of oral squamous cell carcinoma. Understanding the mechanisms by which components enter saliva is an important aspect of validating their use as biomarkers of health and disease.",
"title": ""
},
{
"docid": "4030f6e47e7e1519f69ec9335f4f7cf6",
"text": "In this work, we study the problem of scheduling parallelizable jobs online with an objective of minimizing average flow time. Each parallel job is modeled as a DAG where each node is a sequential task and each edge represents dependence between tasks. Previous work has focused on a model of parallelizability known as the arbitrary speed-up curves setting where a scalable algorithm is known. However, the DAG model is more widely used by practitioners, since many jobs generated from parallel programming languages and libraries can be represented in this model. However, little is known for this model in the online setting with multiple jobs. The DAG model and the speed-up curve models are incomparable and algorithmic results from one do not immediately imply results for the other. Previous work has left open the question of whether an online algorithm can be O(1)-competitive with O(1)-speed for average flow time in the DAG setting. In this work, we answer this question positively by giving a scalable algorithm which is (1 + ǫ)-speed O( 1 ǫ )-competitive for any ǫ > 0. We further introduce the first greedy algorithm for scheduling parallelizable jobs — our algorithm is a generalization of the shortest jobs first algorithm. Greedy algorithms are among the most useful in practice due to their simplicity. We show that this algorithm is (2 + ǫ)-speed O( 1 ǫ )competitive for any ǫ > 0. ∗Department of Computer Science and Engineering, Washington University in St. Louis, 1 Brookings Drive, St. Louis, MO 63130. {kunal, li.jing, kefulu, bmoseley}@wustl.edu. B. Moseley and K. Lu work was supported in part by a Google Research Award and a Yahoo Research Award. K. Agrawal and J. Li were supported in part by NSF grants CCF-1150036 and CCF-1340571.",
"title": ""
},
{
"docid": "13748d365584ef2e680affb67cfcc882",
"text": "In this paper, we discuss the development of cost effective, wireless, and wearable vibrotactile haptic device for stiffness perception during an interaction with virtual objects. Our experimental setup consists of haptic device with five vibrotactile actuators, virtual reality environment tailored in Unity 3D integrating the Oculus Rift Head Mounted Display (HMD) and the Leap Motion controller. The virtual environment is able to capture touch inputs from users. Interaction forces are then rendered at 500 Hz and fed back to the wearable setup stimulating fingertips with ERM vibrotactile actuators. Amplitude and frequency of vibrations are modulated proportionally to the interaction force to simulate the stiffness of a virtual object. A quantitative and qualitative study is done to compare the discrimination of stiffness on virtual linear spring in three sensory modalities: visual only feedback, tactile only feedback, and their combination. A common psychophysics method called the Two Alternative Forced Choice (2AFC) approach is used for quantitative analysis using Just Noticeable Difference (JND) and Weber Fractions (WF). According to the psychometric experiment result, average Weber fraction values of 0.39 for visual only feedback was improved to 0.25 by adding the tactile feedback.",
"title": ""
},
{
"docid": "a40fab738589a9efbf3f87b6c7668601",
"text": "AUTOSAR supports the re-use of software and hardware components of automotive electronic systems. Therefore, amongst other things, AUTOSAR defines a software architecture that is used to decouple software components from hardware devices. This paper gives an overview about the different layers of that architecture. In addition, the upper most layer that concerns the application specific part of automotive electronic systems is presented.",
"title": ""
},
{
"docid": "c7a32821699ebafadb4c59e99fb3aa9e",
"text": "According to the trend towards high-resolution CMOS image sensors, pixel sizes are continuously shrinking, towards and below 1.0μm, and sizes are now reaching a technological limit to meet required SNR performance [1-2]. SNR at low-light conditions, which is a key performance metric, is determined by the sensitivity and crosstalk in pixels. To improve sensitivity, pixel technology has migrated from frontside illumination (FSI) to backside illumiation (BSI) as pixel size shrinks down. In BSI technology, it is very difficult to further increase the sensitivity in a pixel of near-1.0μm size because there are no structural obstacles for incident light from micro-lens to photodiode. Therefore the only way to improve low-light SNR is to reduce crosstalk, which makes the non-diagonal elements of the color-correction matrix (CCM) close to zero and thus reduces color noise [3]. The best way to improve crosstalk is to introduce a complete physical isolation between neighboring pixels, e.g., using deep-trench isolation (DTI). So far, a few attempts using DTI have been made to suppress silicon crosstalk. A backside DTI in as small as 1.12μm-pixel, which is formed in the BSI process, is reported in [4], but it is just an intermediate step in the DTI-related technology because it cannot completely prevent silicon crosstalk, especially for long wavelengths of light. On the other hand, front-side DTIs for FSI pixels [5] and BSI pixels [6] are reported. In [5], however, DTI is present not only along the periphery of each pixel, but also invades into the pixel so that it is inefficient in terms of gathering incident light and providing sufficient amount of photodiode area. In [6], the pixel size is as large as 2.0μm and it is hard to scale down with this technology for near 1.0μm pitch because DTI width imposes a critical limit on the sufficient amount of photodiode area for full-well capacity. Thus, a new technological advance is necessary to realize the ideal front DTI in a small size pixel near 1.0μm.",
"title": ""
},
{
"docid": "9841b00b0fe5b9c7112a2e98553b61b0",
"text": "The market of converters connected to transmission lines continues to require insulated gate bipolar transistors (IGBTs) with higher blocking voltages to reduce the number of IGBTs connected in series in high-voltage converters. To cope with these demands, semiconductor manufactures have developed several technologies. Nowadays, IGBTs up to 6.5-kV blocking voltage and IEGTs up to 4.5-kV blocking voltage are on the market. However, these IGBTs and injection-enhanced gate transistors (IEGTs) still have very high switching losses compared to low-voltage devices, leading to a realistic switching frequency of up to 1 kHz. To reduce switching losses in high-power applications, the auxiliary resonant commutated pole inverter (ARCPI) is a possible alternative. In this paper, switching losses and on-state voltages of NPT-IGBT (3.3 kV-1200 A), FS-IGBT (6.5 kV-600 A), SPT-IGBT (2.5 kV-1200 A, 3.3 kV-1200 A and 6.5 kV-600 A) and IEGT (3.3 kV-1200 A) are measured under hard-switching and zero-voltage switching (ZVS) conditions. The aim of this selection is to evaluate the impact of ZVS on various devices of the same voltage ranges. In addition, the difference in ZVS effects among the devices with various blocking voltage levels is evaluated.",
"title": ""
},
{
"docid": "be96da6d7a1e8348366b497f160c674e",
"text": "The large availability of biomedical data brings opportunities and challenges to health care. Representation of medical concepts has been well studied in many applications, such as medical informatics, cohort selection, risk prediction, and health care quality measurement. In this paper, we propose an efficient multichannel convolutional neural network (CNN) model based on multi-granularity embeddings of medical concepts named MG-CNN, to examine the effect of individual patient characteristics including demographic factors and medical comorbidities on total hospital costs and length of stay (LOS) by using the Hospital Quality Monitoring System (HQMS) data. The proposed embedding method leverages prior medical hierarchical ontology and improves the quality of embedding for rare medical concepts. The embedded vectors are further visualized by the t-Distributed Stochastic Neighbor Embedding (t-SNE) technique to demonstrate the effectiveness of grouping related medical concepts. Experimental results demonstrate that our MG-CNN model outperforms traditional regression methods based on the one-hot representation of medical concepts, especially in the outcome prediction tasks for patients with low-frequency medical events. In summary, MG-CNN model is capable of mining potential knowledge from the clinical data and will be broadly applicable in medical research and inform clinical decisions.",
"title": ""
},
{
"docid": "7442f94af36f6d317291da814e7f3676",
"text": "Muscles are required to perform or absorb mechanical work under different conditions. However the ability of a muscle to do this depends on the interaction between its contractile components and its elastic components. In the present study we have used ultrasound to examine the length changes of the gastrocnemius medialis muscle fascicle along with those of the elastic Achilles tendon during locomotion under different incline conditions. Six male participants walked (at 5 km h(-1)) on a treadmill at grades of -10%, 0% and 10% and ran (at 10 km h(-1)) at grades of 0% and 10%, whilst simultaneous ultrasound, electromyography and kinematics were recorded. In both walking and running, force was developed isometrically; however, increases in incline increased the muscle fascicle length at which force was developed. Force was developed at shorter muscle lengths for running when compared to walking. Substantial levels of Achilles tendon strain were recorded in both walking and running conditions, which allowed the muscle fascicles to act at speeds more favourable for power production. In all conditions, positive work was performed by the muscle. The measurements suggest that there is very little change in the function of the muscle fascicles at different slopes or speeds, despite changes in the required external work. This may be a consequence of the role of this biarticular muscle or of the load sharing between the other muscles of the triceps surae.",
"title": ""
},
{
"docid": "33126812301dfc04b475ecbc9c8ae422",
"text": "From fishtail to princess braids, these intricately woven structures define an important and popular class of hairstyle, frequently used for digital characters in computer graphics. In addition to the challenges created by the infinite range of styles, existing modeling and capture techniques are particularly constrained by the geometric and topological complexities. We propose a data-driven method to automatically reconstruct braided hairstyles from input data obtained from a single consumer RGB-D camera. Our approach covers the large variation of repetitive braid structures using a family of compact procedural braid models. From these models, we produce a database of braid patches and use a robust random sampling approach for data fitting. We then recover the input braid structures using a multi-label optimization algorithm and synthesize the intertwining hair strands of the braids. We demonstrate that a minimal capture equipment is sufficient to effectively capture a wide range of complex braids with distinct shapes and structures.",
"title": ""
},
{
"docid": "6cf048863ed227ea7d2188ec6b8ee107",
"text": "Lane keeping is an important feature for self-driving cars. This paper presents an end-to-end learning approach to obtain the proper steering angle to maintain the car in the lane. The convolutional neural network (CNN) model takes raw image frames as input and outputs the steering angles accordingly. The model is trained and evaluated using the comma.ai dataset, which contains the front view image frames and the steering angle data captured when driving on the road. Unlike the traditional approach that manually decomposes the autonomous driving problem into technical components such as lane detection, path planning and steering control, the end-to-end model can directly steer the vehicle from the front view camera data after training. It learns how to keep in lane from human driving data. Further discussion of this end-to-end approach and its limitation are also provided.",
"title": ""
},
{
"docid": "333645d1c405ae51aafe2b236c8fa3fd",
"text": "Proposes a new method of personal recognition based on footprints. In this method, an input pair of raw footprints is normalized, both in direction and in position for robustness image-matching between the input pair of footprints and the pair of registered footprints. In addition to the Euclidean distance between them, the geometric information of the input footprint is used prior to the normalization, i.e., directional and positional information. In the experiment, the pressure distribution of the footprint was measured with a pressure-sensing mat. Ten volunteers contributed footprints for testing the proposed method. The recognition rate was 30.45% without any normalization (i.e., raw image), and 85.00% with the authors' method.",
"title": ""
},
{
"docid": "c117bb1f7a25c44cbd0d75b7376022f6",
"text": "Data noise is present in many machine learning problems domains, some of these are well studied but others have received less attention. In this paper we propose an algorithm for constructing a kernel Fisher discriminant (KFD) from training examples withnoisy labels. The approach allows to associate with each example a probability of the label being flipped. We utilise an expectation maximization (EM) algorithm for updating the probabilities. The E-step uses class conditional probabilities estimated as a by-product of the KFD algorithm. The M-step updates the flip probabilities and determines the parameters of the discriminant. We demonstrate the feasibility of the approach on two real-world data-sets.",
"title": ""
},
{
"docid": "f97086d856ebb2f1c5e4167f725b5890",
"text": "In this paper, an ac-linked hybrid electrical energy system comprising of photo voltaic (PV) and fuel cell (FC) with electrolyzer for standalone applications is proposed. PV is the primary power source of the system, and an FC-electrolyzer combination is used as a backup and as long-term storage system. A Fuzzy Logic controller is developed for the maximum power point tracking for the PV system. A simple power management strategy is designed for the proposed system to manage power flows among the different energy sources. A simulation model for the hybrid energy has been developed using MATLAB/Simulink.",
"title": ""
},
{
"docid": "1bfab561c8391dad6f0493fa7614feba",
"text": "Submission instructions: You should submit your answers via GradeScope and your code via Snap submission site. Submitting answers: Prepare answers to your homework into a single PDF file and submit it via http://gradescope.com. Make sure that answer to each question is on a separate page. This means you should submit a 14-page PDF (1 page for the cover sheet, 4 pages for the answers to question 1, 3 pages for answers to question 2, and 6 pages for question 3). On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. Put all the code for a single question into a single file and upload it. Questions We strongly encourage you to use Snap.py for Python. However, you can use any other graph analysis tool or package you want (SNAP for C++, NetworkX for Python, JUNG for Java, etc.). A question that occupied sociologists and economists as early as the 1900's is how do innovations (e.g. ideas, products, technologies, behaviors) diffuse (spread) within a society. One of the prominent researchers in the field is Professor Mark Granovetter who among other contributions introduced along with Thomas Schelling threshold models in sociology. In Granovetter's model, there is a population of individuals (mob) and for simplicity two behaviours (riot or not riot). • Threshold model: each individual i has a threshold t i that determines her behavior in the following way. If there are at least t i individuals that are rioting, then she will join the riot, otherwise she stays inactive. Here, it is implicitly assumed that each individual has full knowledge of the behavior of all other individuals in the group. Nodes with small threshold are called innovators (early adopters) and nodes with large threshold are called laggards (late adopters). Granovetter's threshold model has been successful in explain classical empirical adoption curves by relating them to thresholds in",
"title": ""
},
{
"docid": "5e8fbfec1ff5bf432dbaadaf13c9ca75",
"text": "Multiple studies have illustrated the potential for dramatic societal, environmental and economic benefits from significant penetration of autonomous driving. However, all the current approaches to autonomous driving require the automotive manufacturers to shoulder the primary responsibility and liability associated with replacing human perception and decision making with automation, potentially slowing the penetration of autonomous vehicles, and consequently slowing the realization of the societal benefits of autonomous vehicles. We propose here a new approach to autonomous driving that will re-balance the responsibility and liabilities associated with autonomous driving between traditional automotive manufacturers, private infrastructure players, and third-party players. Our proposed distributed intelligence architecture leverages the significant advancements in connectivity and edge computing in the recent decades to partition the driving functions between the vehicle, edge computers on the road side, and specialized third-party computers that reside in the vehicle. Infrastructure becomes a critical enabler for autonomy. With this Infrastructure Enabled Autonomy (IEA) concept, the traditional automotive manufacturers will only need to shoulder responsibility and liability comparable to what they already do today, and the infrastructure and third-party players will share the added responsibility and liabilities associated with autonomous functionalities. We propose a Bayesian Network Model based framework for assessing the risk benefits of such a distributed intelligence architecture. An additional benefit of the proposed architecture is that it enables “autonomy as a service” while still allowing for private ownership of automobiles.",
"title": ""
},
{
"docid": "648cc09e715d3a5bdc84a908f96c95d2",
"text": "With the advent of battery-powered portable devices and the mandatory adoptions of power factor correction (PFC), non-inverting buck-boost converter is attracting numerous attentions. Conventional two-switch or four-switch non-inverting buck-boost converters choose their operation modes by measuring input and output voltage magnitudes. This can cause higher output voltage transients when input and output are close to each other. For the mode selection, the comparison of input and output voltage magnitudes is not enough due to the voltage drops raised by the parasitic components. In addition, the difference in the minimum and maximum effective duty cycle between controller output and switching device yields the discontinuity at the instant of mode change. Moreover, the different properties of output voltage versus a given duty cycle of buck and boost operating modes contribute to the output voltage transients. In this paper, the effect of the discontinuity due to the effective duty cycle derived from device switching time at the mode change is analyzed. A technique to compensate the output voltage transient due to this discontinuity is proposed. In order to attain additional mitigation of output transients and linear input/output voltage characteristic in buck and boost modes, the linearization of DC-gain of large signal model in boost operation is analyzed as well. Analytical, simulation, and experimental results are presented to validate the proposed theory.",
"title": ""
},
{
"docid": "a45dbfbea6ff33d920781c07dac0442b",
"text": "Context-aware intelligent systems employ implicit inputs, and make decisions based on complex rules and machine learning models that are rarely clear to users. Such lack of system intelligibility can lead to loss of user trust, satisfaction and acceptance of these systems. However, automatically providing explanations about a system's decision process can help mitigate this problem. In this paper we present results from a controlled study with over 200 participants in which the effectiveness of different types of explanations was examined. Participants were shown examples of a system's operation along with various automatically generated explanations, and then tested on their understanding of the system. We show, for example, that explanations describing why the system behaved a certain way resulted in better understanding and stronger feelings of trust. Explanations describing why the system did not behave a certain way, resulted in lower understanding yet adequate performance. We discuss implications for the use of our findings in real-world context-aware applications.",
"title": ""
}
] |
scidocsrr
|
df6567247f9e63497797c4b6703b9f8b
|
Task Scheduling and Server Provisioning for Energy-Efficient Cloud-Computing Data Centers
|
[
{
"docid": "95c41c6f901685490c912a2630c04345",
"text": "Network-based cloud computing is rapidly expanding as an alternative to conventional office-based computing. As cloud computing becomes more widespread, the energy consumption of the network and computing resources that underpin the cloud will grow. This is happening at a time when there is increasing attention being paid to the need to manage energy consumption across the entire information and communications technology (ICT) sector. While data center energy use has received much attention recently, there has been less attention paid to the energy consumption of the transmission and switching networks that are key to connecting users to the cloud. In this paper, we present an analysis of energy consumption in cloud computing. The analysis considers both public and private clouds, and includes energy consumption in switching and transmission as well as data processing and data storage. We show that energy consumption in transport and switching can be a significant percentage of total energy consumption in cloud computing. Cloud computing can enable more energy-efficient use of computing power, especially when the computing tasks are of low intensity or infrequent. However, under some circumstances cloud computing can consume more energy than conventional computing where each user performs all computing on their own personal computer (PC).",
"title": ""
}
] |
[
{
"docid": "cf14e5e501cc4e5e3e97561c4932ae8f",
"text": "Plug-and-play information technology (IT) infrastructure has been expanding very rapidly in recent years. With the advent of cloud computing, many ecosystem and business paradigms are encountering potential changes and may be able to eliminate their IT infrastructure maintenance processes. Real-time performance and high availability requirements have induced telecom networks to adopt the new concepts of the cloud model: software-defined networking (SDN) and network function virtualization (NFV). NFV introduces and deploys new network functions in an open and standardized IT environment, while SDN aims to transform the way networks function. SDN and NFV are complementary technologies; they do not depend on each other. However, both concepts can be merged and have the potential to mitigate the challenges of legacy networks. In this paper, our aim is to describe the benefits of using SDN in a multitude of environments such as in data centers, data center networks, and Network as Service offerings. We also present the various challenges facing SDN, from scalability to reliability and security concerns, and discuss existing solutions to these challenges. Keywords—Software-Defined Networking, OpenFlow, Datacenters, Network as a Service, Network Function Virtualization.",
"title": ""
},
{
"docid": "3ff82fc754526e7a0255959e4b3f6301",
"text": "We propose a novel statistical analysis method for functional magnetic resonance imaging (fMRI) to overcome the drawbacks of conventional data-driven methods such as the independent component analysis (ICA). Although ICA has been broadly applied to fMRI due to its capacity to separate spatially or temporally independent components, the assumption of independence has been challenged by recent studies showing that ICA does not guarantee independence of simultaneously occurring distinct activity patterns in the brain. Instead, sparsity of the signal has been shown to be more promising. This coincides with biological findings such as sparse coding in V1 simple cells, electrophysiological experiment results in the human medial temporal lobe, etc. The main contribution of this paper is, therefore, a new data driven fMRI analysis that is derived solely based upon the sparsity of the signals. A compressed sensing based data-driven sparse generalized linear model is proposed that enables estimation of spatially adaptive design matrix as well as sparse signal components that represent synchronous, functionally organized and integrated neural hemodynamics. Furthermore, a minimum description length (MDL)-based model order selection rule is shown to be essential in selecting unknown sparsity level for sparse dictionary learning. Using simulation and real fMRI experiments, we show that the proposed method can adapt individual variation better compared to the conventional ICA methods.",
"title": ""
},
{
"docid": "c5ae1d66d31128691e7e7d8e2ccd2ba8",
"text": "The scope of this paper is two-fold: firstly it proposes the application of a 1-2-3 Zones approach to Internet of Things (IoT)-related Digital Forensics (DF) investigations. Secondly, it introduces a Next-Best-Thing Triage (NBT) Model for use in conjunction with the 1-2-3 Zones approach where necessary and vice versa. These two `approaches' are essential for the DF process from an IoT perspective: the atypical nature of IoT sources of evidence (i.e. Objects of Forensic Interest - OOFI), the pervasiveness of the IoT environment and its other unique attributes - and the combination of these attributes - dictate the necessity for a systematic DF approach to incidents. The two approaches proposed are designed to serve as a beacon to incident responders, increasing the efficiency and effectiveness of their IoT-related investigations by maximizing the use of the available time and ensuring relevant evidence identification and acquisition. The approaches can also be applied in conjunction with existing, recognised DF models, methodologies and frameworks.",
"title": ""
},
{
"docid": "3bf0cead54473e6b118ab8835995bc5f",
"text": "A compact printed microstrip-fed monopole ultrawideband antenna with triple notched bands is presented and analyzed in detail. A straight, open-ended quarter-wavelength slot is etched in the radiating patch to create the first notched band in 3.3-3.7 GHz for the WiMAX system. In addition, three semicircular half-wavelength slots are cut in the radiating patch to generate the second and third notched bands in 5.15-5.825 GHz for WLAN and 7.25-7.75 GHz for downlink of X-band satellite communication systems. Surface current distributions and transmission line models are used to analyze the effect of these slots. The antenna is successfully fabricated and measured, showing broad band matched impedance and good omnidirectional radiation pattern. The designed antenna has a compact size of 25 × 29 mm2.",
"title": ""
},
{
"docid": "4d69284c25e1a9a503dd1c12fde23faa",
"text": "Human pose estimation has been actively studied for decades. While traditional approaches rely on 2d data like images or videos, the development of Time-of-Flight cameras and other depth sensors created new opportunities to advance the field. We give an overview of recent approaches that perform human motion analysis which includes depthbased and skeleton-based activity recognition, head pose estimation, facial feature detection, facial performance capture, hand pose estimation and hand gesture recognition. While the focus is on approaches using depth data, we also discuss traditional image based methods to provide a broad overview of recent developments in these areas.",
"title": ""
},
{
"docid": "4357e361fd35bcbc5d6a7c195a87bad1",
"text": "In an age of increasing technology, the possibility that typing on a keyboard will replace handwriting raises questions about the future usefulness of handwriting skills. Here we present evidence that brain activation during letter perception is influenced in different, important ways by previous handwriting of letters versus previous typing or tracing of those same letters. Preliterate, five-year old children printed, typed, or traced letters and shapes, then were shown images of these stimuli while undergoing functional MRI scanning. A previously documented \"reading circuit\" was recruited during letter perception only after handwriting-not after typing or tracing experience. These findings demonstrate that handwriting is important for the early recruitment in letter processing of brain regions known to underlie successful reading. Handwriting therefore may facilitate reading acquisition in young children.",
"title": ""
},
{
"docid": "859c6f75ac740e311da5e68fcd093531",
"text": "PURPOSE\nTo understand the effect of socioeconomic status (SES) on the risk of complications in type 1 diabetes (T1D), we explored the relationship between SES and major diabetes complications in a prospective, observational T1D cohort study.\n\n\nMETHODS\nComplete data were available for 317 T1D persons within 4 years of age 28 (ages 24-32) in the Pittsburgh Epidemiology of Diabetes Complications Study. Age 28 was selected to maximize income, education, and occupation potential and to minimize the effect of advanced diabetes complications on SES.\n\n\nRESULTS\nThe incidences over 1 to 20 years' follow-up of end-stage renal disease and coronary artery disease were two to three times greater for T1D individuals without, compared with those with a college degree (p < .05 for both), whereas the incidence of autonomic neuropathy was significantly greater for low-income and/or nonprofessional participants (p < .05 for both). HbA(1c) was inversely associated only with income level. In sex- and diabetes duration-adjusted Cox models, lower education predicted end-stage renal disease (hazard ratio [HR], 2.9; 95% confidence interval [95% CI], 1.1-7.7) and coronary artery disease (HR, 2.5, 95% CI, 1.3-4.9), whereas lower income predicted autonomic neuropathy (HR, 1.7; 95% CI, 1.0-2.9) and lower-extremity arterial disease (HR, 3.7; 95% CI, 1.1-11.9).\n\n\nCONCLUSIONS\nThese associations, partially mediated by clinical risk factors, suggest that lower SES T1D individuals may have poorer self-management and, thus, greater complications from diabetes.",
"title": ""
},
{
"docid": "62e445cabbb5c79375f35d7b93f9a30d",
"text": "The recent outbreak of indie games has popularized volumetric terrains to a new level, although video games have used them for decades. These terrains contain geological data, such as materials or cave systems. To improve the exploration experience and due to the large amount of data needed to construct volumetric terrains, industry uses procedural methods to generate them. However, they use their own methods, which are focused on their specific problem domains, lacking customization features. Besides, the evaluation of the procedural terrain generators remains an open issue in this field since no standard metrics have been established yet. In this paper, we propose a new approach to procedural volumetric terrains. It generates completely customizable volumetric terrains with layered materials and other features (e.g., mineral veins, underground caves, material mixtures and underground material flow). The method allows the designer to specify the characteristics of the terrain using intuitive parameters. Additionally, it uses a specific representation for the terrain based on stacked material structures, reducing memory requirements. To overcome the problem in the evaluation of the generators, we propose a new set of metrics for the generated content.",
"title": ""
},
{
"docid": "4f23f9ddf35f6e2f7f5ecfcdf28edcea",
"text": "OBJECTIVE\nWe quantified the range of motion (ROM) required for eight upper-extremity activities of daily living (ADLs) in healthy participants.\n\n\nMETHOD\nFifteen right-handed participants completed several bimanual and unilateral basic ADLs while joint kinematics were monitored using a motion capture system. Peak motions of the pelvis, trunk, shoulder, elbow, and wrist were quantified for each task.\n\n\nRESULTS\nTo complete all activities tested, participants needed a minimum ROM of -65°/0°/105° for humeral plane angle (horizontal abduction-adduction), 0°-108° for humeral elevation, -55°/0°/79° for humeral rotation, 0°-121° for elbow flexion, -53°/0°/13° for forearm rotation, -40°/0°/38° for wrist flexion-extension, and -28°/0°/38° for wrist ulnar-radial deviation. Peak trunk ROM was 23° lean, 32° axial rotation, and 59° flexion-extension.\n\n\nCONCLUSION\nFull upper-limb kinematics were calculated for several ADLs. This methodology can be used in future studies as a basis for developing normative databases of upper-extremity motions and evaluating pathology in populations.",
"title": ""
},
{
"docid": "a3ef868300a3c036c2f8802aa6a3793d",
"text": "This paper presents a manifesto directed at developers and designers of internet-of-things creation platforms. Currently, most existing creation platforms are tailored to specific types of end-users, mostly people with a substantial background in or affinity with technology. The thirteen items presented in the manifesto however, resulted from several user studies including non-technical users, and highlight aspects that should be taken into account in order to open up internet-of-things creation to a wider audience. To reach out and involve more people in internet-of-things creation, a relation is made to the social phenomenon of do-it-yourself, which provides valuable insights into how society can be encouraged to get involved in creation activities. Most importantly, the manifesto aims at providing a framework for do-it-yourself systems enabling non-technical users to create internet-of-things applications.",
"title": ""
},
{
"docid": "5d5c3c8cc8344a8c5d18313bec9adb04",
"text": "Research in reinforcement learning (RL) has thus far concentrated on two optimality criteria: the discounted framework, which has been very well-studied, and the average-reward framework, in which interest is rapidly increasing. In this paper, we present a framework called sensitive discount optimality which ooers an elegant way of linking these two paradigms. Although sensitive discount optimality has been well studied in dynamic programming, with several provably convergent algorithms, it has not received any attention in RL. This framework is based on studying the properties of the expected cumulative discounted reward, as discounting tends to 1. Under these conditions, the cumulative discounted reward can be expanded using a Laurent series expansion to yields a sequence of terms, the rst of which is the average reward, the second involves the average adjusted sum of rewards (or bias), etc. We use the sensitive discount optimality framework to derive a new model-free average reward technique, which is related to Q-learning type methods proposed by Bertsekas, Schwartz, and Singh, but which unlike these previous methods, optimizes both the rst and second terms in the Laurent series (average reward and bias values). Statement: This paper has not been submitted to any other conference.",
"title": ""
},
{
"docid": "03dc2c32044a41715991d900bb7ec783",
"text": "The analysis of large scale data logged from complex cyber-physical systems, such as microgrids, often entails the discovery of invariants capturing functional as well as operational relationships underlying such large systems. We describe a latent factor approach to infer invariants underlying system variables and how we can leverage these relationships to monitor a cyber-physical system. In particular we illustrate how this approach helps rapidly identify outliers during system operation.",
"title": ""
},
{
"docid": "af3af0a4102ea0fb555cad52e4cafa50",
"text": "The identification of the exact positions of the first and second heart sounds within a phonocardiogram (PCG), or heart sound segmentation, is an essential step in the automatic analysis of heart sound recordings, allowing for the classification of pathological events. While threshold-based segmentation methods have shown modest success, probabilistic models, such as hidden Markov models, have recently been shown to surpass the capabilities of previous methods. Segmentation performance is further improved when apriori information about the expected duration of the states is incorporated into the model, such as in a hidden semiMarkov model (HSMM). This paper addresses the problem of the accurate segmentation of the first and second heart sound within noisy real-world PCG recordings using an HSMM, extended with the use of logistic regression for emission probability estimation. In addition, we implement a modified Viterbi algorithm for decoding the most likely sequence of states, and evaluated this method on a large dataset of 10 172 s of PCG recorded from 112 patients (including 12 181 first and 11 627 second heart sounds). The proposed method achieved an average F1 score of 95.63 ± 0.85%, while the current state of the art achieved 86.28 ± 1.55% when evaluated on unseen test recordings. The greater discrimination between states afforded using logistic regression as opposed to the previous Gaussian distribution-based emission probability estimation as well as the use of an extended Viterbi algorithm allows this method to significantly outperform the current state-of-the-art method based on a two-sided paired t-test.",
"title": ""
},
{
"docid": "bb240f2e536e5e5cd80fcca8c9d98171",
"text": "We propose a novel metaphor interpretation method, Meta4meaning. It provides interpretations for nominal metaphors by generating a list of properties that the metaphor expresses. Meta4meaning uses word associations extracted from a corpus to retrieve an approximation to properties of concepts. Interpretations are then obtained as an aggregation or difference of the saliences of the properties to the tenor and the vehicle. We evaluate Meta4meaning using a set of humanannotated interpretations of 84 metaphors and compare with two existing methods for metaphor interpretation. Meta4meaning significantly outperforms the previous methods on this task.",
"title": ""
},
{
"docid": "7a82c189c756e9199ae0d394ed9ade7f",
"text": "Since the late 1970s, globalization has become a phenomenon that has elicited polarizing responses from scholars, politicians, activists, and the business community. Several scholars and activists, such as labor unions, see globalization as an anti-democratic movement that would weaken the nation-state in favor of the great powers. There is no doubt that globalization, no matter how it is defined, is here to stay, and is causing major changes on the globe. Given the rapid proliferation of advances in technology, communication, means of production, and transportation, globalization is a challenge to health and well-being worldwide. On an international level, the average human lifespan is increasing primarily due to advances in medicine and technology. The trends are a reflection of increasing health care demands along with the technological advances needed to prevent, diagnose, and treat disease (IOM, 1997). Along with this increase in longevity comes the concern of finding commonalities in the treatment of health disparities for all people. In a seminal work by Friedman (2005), it is posited that the connecting of knowledge into a global network will result in eradication of most of the healthcare translational barriers we face today. Since healthcare is a knowledge-driven profession, it is reasonable to presume that global healthcare will become more than just a buzzword. This chapter looks at all aspects or components of globalization but focuses specifically on how the movement impacts the health of the people and the nations of the world. The authors propose to use the concept of health as a measuring stick of the claims made on behalf of globalization.",
"title": ""
},
{
"docid": "e8e2cd6e4aacbf1427a50e009bfa35cf",
"text": "We present a model that, after learning on observations of (sequence, outcome) pairs, can be efficiently used to revise a new sequence in order to improve its associated outcome. Our framework requires neither example improvements, nor additional evaluation of outcomes for proposed revisions. To avoid combinatorial-search over sequence elements, we specify a generative model with continuous latent factors, which is learned via joint approximate inference using a recurrent variational autoencoder (VAE) and an outcome-predicting neural network module. Under this model, gradient methods can be used to efficiently optimize the continuous latent factors with respect to inferred outcomes. By appropriately constraining this optimization and using the VAE decoder to generate a revised sequence, we ensure the revision is fundamentally similar to the original sequence, is associated with better outcomes, and looks natural. These desiderata are proven to hold with high probability under our approach, which is empirically demonstrated for revising natural language sentences. Introduction The success of recurrent neural network (RNN) models in complex tasks like machine translation and audio synthesis has inspired immense interest in learning from sequence data (Eck & Schmidhuber, 2002; Graves, 2013; Sutskever et al., 2014; Karpathy, 2015). Comprised of elements s t P S , which are typically symbols from a discrete vocabulary, a sequence x “ ps1, . . . , sT q P X has length T which can vary between different instances. Sentences are a popular example of such data, where each s j is a word from the language. In many domains, only a tiny fraction of X (the set of possible sequences over a given vocabulary) represents sequences likely to be found in nature (ie. MIT Computer Science & Artificial Intelligence Laboratory. Correspondence to: J. Mueller <jonasmueller@csail.mit.edu>. Proceedings of the 34 th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s). those which appear realistic). For example: a random sequence of words will almost never form a coherent sentence that reads naturally, and a random amino-acid sequence is highly unlikely to specify a biologically active protein. In this work, we consider applications where each sequence x is associated with a corresponding outcome y P R. For example: a news article title or Twitter post can be associated with the number of shares it subsequently received online, or the amino-acid sequence of a synthetic protein can be associated with its clinical efficacy. We operate under the standard supervised learning setting, assuming availability of a dataset D",
"title": ""
},
{
"docid": "e7ad934ea591d5b4a6899b5eb2fa1cb3",
"text": "Increases in the size of the pupil of the eye have been found to accompany the viewing of emotionally toned or interesting visual stimuli. A technique for recording such changes has been developed, and preliminary results with cats and human beings are reported with attention being given to differences between the sexes in response to particular types of material.",
"title": ""
},
{
"docid": "a64f1bb761ac8ee302a278df03eecaa8",
"text": "We analyze StirTrace towards benchmarking face morphing forgeries and extending it by additional scaling functions for the face biometrics scenario. We benchmark a Benford's law based multi-compression-anomaly detection approach and acceptance rates of morphs for a face matcher to determine the impact of the processing on the quality of the forgeries. We use 2 different approaches for automatically creating 3940 images of morphed faces. Based on this data set, 86614 images are created using StirTrace. A manual selection of 183 high quality morphs is used to derive tendencies based on the subjective forgery quality. Our results show that the anomaly detection seems to be able to detect anomalies in the morphing regions, the multi-compression-anomaly detection performance after the processing can be differentiated into good (e.g. cropping), partially critical (e.g. rotation) and critical results (e.g. additive noise). The influence of the processing on the biometric matcher is marginal.",
"title": ""
},
{
"docid": "9e0a28a8205120128938b52ba8321561",
"text": "Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.",
"title": ""
},
{
"docid": "4b7714c60749a2f945f21ca3d6d367fe",
"text": "Abstractive summarization aims to generate a shorter version of the document covering all the salient points in a compact and coherent fashion. On the other hand, query-based summarization highlights those points that are relevant in the context of a given query. The encodeattend-decode paradigm has achieved notable success in machine translation, extractive summarization, dialog systems, etc. But it suffers from the drawback of generation of repeated phrases. In this work we propose a model for the query-based summarization task based on the encode-attend-decode paradigm with two key additions (i) a query attention model (in addition to document attention model) which learns to focus on different portions of the query at different time steps (instead of using a static representation for the query) and (ii) a new diversity based attention model which aims to alleviate the problem of repeating phrases in the summary. In order to enable the testing of this model we introduce a new query-based summarization dataset building on debatepedia. Our experiments show that with these two additions the proposed model clearly outperforms vanilla encode-attend-decode models with a gain of 28% (absolute) in ROUGE-L scores.ive summarization aims to generate a shorter version of the document covering all the salient points in a compact and coherent fashion. On the other hand, query-based summarization highlights those points that are relevant in the context of a given query. The encodeattend-decode paradigm has achieved notable success in machine translation, extractive summarization, dialog systems, etc. But it suffers from the drawback of generation of repeated phrases. In this work we propose a model for the query-based summarization task based on the encode-attend-decode paradigm with two key additions (i) a query attention model (in addition to document attention model) which learns to focus on different portions of the query at different time steps (instead of using a static representation for the query) and (ii) a new diversity based attention model which aims to alleviate the problem of repeating phrases in the summary. In order to enable the testing of this model we introduce a new query-based summarization dataset building on debatepedia. Our experiments show that with these two additions the proposed model clearly outperforms vanilla encode-attend-decode models with a gain of 28% (absolute) in ROUGE-L scores.",
"title": ""
}
] |
scidocsrr
|
c857af66e1ebadea18b3b07de5b0400a
|
A Parallel Method for Earth Mover's Distance
|
[
{
"docid": "872a79a47e6a4d83e7440ea5e7126dee",
"text": "We propose simple and extremely efficient methods for solving the Basis Pursuit problem min{‖u‖1 : Au = f, u ∈ R}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem min u∈Rn μ‖u‖1 + 1 2 ‖Au− f‖2, for given matrix A and vector fk. We show analytically that this iterative approach yields exact solutions in a finite number of steps, and present numerical results that demonstrate that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving A and A> can be computed by fast transforms. Utilizing a fast fixed-point continuation solver that is solely based on such operations for solving the above unconstrained sub-problem, we were able to solve huge instances of compressed sensing problems quickly on a standard PC.",
"title": ""
}
] |
[
{
"docid": "ed530d8481bbfd81da4bdf5d611ad4a4",
"text": "Traumatic coma was produced in 45 monkeys by accelerating the head without impact in one of three directions. The duration of coma, degree of neurological impairment, and amount of diffuse axonal injury (DAI) in the brain were directly related to the amount of coronal head motion used. Coma of less than 15 minutes (concussion) occurred in 11 of 13 animals subjected to sagittal head motion, in 2 of 6 animals with oblique head motion, and in 2 of 26 animals with full lateral head motion. All 15 concussioned animals had good recovery, and none had DAI. Conversely, coma lasting more than 6 hours occurred in one of the sagittal or oblique injury groups but was present in 20 of the laterally injured animals, all of which were severely disabled afterward. All laterally injured animals had a degree of DAI similar to that found in severe human head injury. Coma lasting 16 minutes to 6 hours occurred in 2 of 13 of the sagittal group, 4 of 6 in the oblique group, and 4 of 26 in the lateral group, these animals had less neurological disability and less DAI than when coma lasted longer than 6 hours. These experimental findings duplicate the spectrum of traumatic coma seen in human beings and include axonal damage identical to that seen in sever head injury in humans. Since the amount of DAI was directly proportional to the severity of injury (duration of coma and quality of outcome), we conclude that axonal damage produced by coronal head acceleration is a major cause of prolonged traumatic coma and its sequelae.",
"title": ""
},
{
"docid": "84af7a01dc5486c800f1cf94832ac5a8",
"text": "A technique intended to increase the diversity order of bit-interleaved coded modulations (BICM) over non Gaussian channels is presented. It introduces simple modifications to the mapper and to the corresponding demapper. They consist of a constellation rotation coupled with signal space component interleaving. Iterative processing at the receiver side can provide additional improvement to the BICM performance. This method has been shown to perform well over fading channels with or without erasures. It has been adopted for the 4-, 16-, 64- and 256-QAM constellations considered in the DVB-T2 standard. Resulting gains can vary from 0.2 dB to several dBs depending on the order of the constellation, the coding rate and the channel model.",
"title": ""
},
{
"docid": "9d45323cd4550075d4c2569065ae583c",
"text": "Research on Offline Handwritten Signature Verification explored a large variety of handcrafted feature extractors, ranging from graphology, texture descriptors to interest points. In spite of advancements in the last decades, performance of such systems is still far from optimal when we test the systems against skilled forgeries - signature forgeries that target a particular individual. In previous research, we proposed a formulation of the problem to learn features from data (signature images) in a Writer-Independent format, using Deep Convolutional Neural Networks (CNNs), seeking to improve performance on the task. In this research, we push further the performance of such method, exploring a range of architectures, and obtaining a large improvement in state-of-the-art performance on the GPDS dataset, the largest publicly available dataset on the task. In the GPDS-160 dataset, we obtained an Equal Error Rate of 2.74%, compared to 6.97% in the best result published in literature (that used a combination of multiple classifiers). We also present a visual analysis of the feature space learned by the model, and an analysis of the errors made by the classifier. Our analysis shows that the model is very effective in separating signatures that have a different global appearance, while being particularly vulnerable to forgeries that very closely resemble genuine signatures, even if their line quality is bad, which is the case of slowly-traced forgeries.",
"title": ""
},
{
"docid": "17ba29c670e744d6e4f9e93ceb109410",
"text": "This paper presents a novel online video recommendation system called VideoReach, which alleviates users' efforts on finding the most relevant videos according to current viewings without a sufficient collection of user profiles as required in traditional recommenders. In this system, video recommendation is formulated as finding a list of relevant videos in terms of multimodal relevance (i.e. textual, visual, and aural relevance) and user click-through. Since different videos have different intra-weights of relevance within an individual modality and inter-weights among different modalities, we adopt relevance feedback to automatically find optimal weights by user click-though, as well as an attention fusion function to fuse multimodal relevance. We use 20 clips as the representative test videos, which are searched by top 10 queries from more than 13k online videos, and report superior performance compared with an existing video site.",
"title": ""
},
{
"docid": "e96c9bdd3f5e9710f7264cbbe02738a7",
"text": "25 years ago, Lenstra, Lenstra and Lovász presented their c el brated LLL lattice reduction algorithm. Among the various applicatio ns of the LLL algorithm is a method due to Coppersmith for finding small roots of polyn mial equations. We give a survey of the applications of this root finding metho d t the problem of inverting the RSA function and the factorization problem. A s we will see, most of the results are of a dual nature, they can either be interpret ed as cryptanalytic results or as hardness/security results.",
"title": ""
},
{
"docid": "640f9ca0bec934786b49f7217e65780b",
"text": "Social Networking has become today’s lifestyle and anyone can easily receive information about everyone in the world. It is very useful if a personal identity can be obtained from the mobile device and also connected to social networking. Therefore, we proposed a face recognition system on mobile devices by combining cloud computing services. Our system is designed in the form of an application developed on Android mobile devices which utilized the Face.com API as an image data processor for cloud computing services. We also applied the Augmented Reality as an information viewer to the users. The result of testing shows that the system is able to recognize face samples with the average percentage of 85% with the total computation time for the face recognition system reached 7.45 seconds, and the average augmented reality translation time is 1.03 seconds to get someone’s information.",
"title": ""
},
{
"docid": "934bdd758626ec37241cffba8e2cbeb9",
"text": "The combination of GPS/INS provides an ideal navigation system of full capability of continuously outputting position, velocity, and attitude of the host platform. However, the accuracy of INS degrades with time when GPS signals are blocked in environments such as tunnels, dense urban canyons and indoors. To dampen down the error growth, the INS sensor errors should be properly estimated and compensated before the inertial data are involved in the navigation computation. Therefore appropriate modelling of the INS sensor errors is a necessity. Allan Variance (AV) is a simple and efficient method for verifying and modelling these errors by representing the root mean square (RMS) random drift error as a function of averaging time. The AV can be used to determine the characteristics of different random processes. This paper applies the AV to analyse and model different types of random errors residing in the measurements of MEMS inertial sensors. The derived error model will be further applied to a low-cost GPS/MEMS-INS system once the correctness of the model is verified. The paper gives the detail of the AV analysis as well as presents the test results.",
"title": ""
},
{
"docid": "f670bd1ad43f256d5f02039ab200e1e8",
"text": "This article addresses the performance of distributed database systems. Specifically, we present an algorithm for dynamic replication of an object in distributed systems. The algorithm is adaptive in the sence that it changes the replication scheme of the object i.e., the set of processors at which the object inreplicated) as changes occur in the read-write patern of the object (i.e., the number of reads and writes issued by each processor). The algorithm continuously moves the replication scheme towards an optimal one. We show that the algorithm can be combined with the concurrency control and recovery mechanisms of ta distributed database management system. The performance of the algorithm is analyzed theoretically and experimentally. On the way we provide a lower bound on the performance of any dynamic replication algorith.",
"title": ""
},
{
"docid": "45b90a55678a022f6c3f128d0dc7d1bf",
"text": "Finding community structures in online social networks is an important methodology for understanding the internal organization of users and actions. Most previous studies have focused on structural properties to detect communities. They do not analyze the information gathered from the posting activities of members of social networks, nor do they consider overlapping communities. To tackle these two drawbacks, a new overlapping community detection method involving social activities and semantic analysis is proposed. This work applies a fuzzy membership to detect overlapping communities with different extent and run semantic analysis to include information contained in posts. The available resource description format contributes to research in social networks. Based on this new understanding of social networks, this approach can be adopted for large online social networks and for social portals, such as forums, that are not based on network topology. The efficiency and feasibility of this method is verified by the available experimental analysis. The results obtained by the tests on real networks indicate that the proposed approach can be effective in discovering labelled and overlapping communities with a high amount of modularity. This approach is fast enough to process very large and dense social networks. 6",
"title": ""
},
{
"docid": "b7521521277f944a9532dc4435a2bda7",
"text": "The NDN project investigates Jacobson's proposed evolution from today's host-centric network architecture (IP) to a data-centric network architecture (NDN). This conceptually simple shift has far-reaching implications in how we design, develop, deploy and use networks and applications. The NDN design and development has attracted significant attention from the networking community. To facilitate broader participation in addressing NDN research and development challenges, this tutorial will describe the vision of this new architecture and its basic components and operations.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "e7686824a9449bf793554fcf78b66c0e",
"text": "In this paper, tension propagation analysis of a newly designed multi-DOF robotic platform for single-port access surgery (SPS) is presented. The analysis is based on instantaneous kinematics of the proposed 6-DOF surgical instrument, and provides the decision criteria for estimating the payload of a surgical instrument according to its pose changes and specifications of a driving-wire. Also, the wire-tension and the number of reduction ratio to manage such a payload can be estimated, quantitatively. The analysis begins with derivation of the power transmission efficiency through wire-interfaces from each instrument joint to an actuator. Based on the energy conservation law and the capstan equation, we modeled the degradation of power transmission efficiency due to 1) the reducer called wire-reduction mechanism, 2) bending of proximal instrument joints, and 3) bending of hyper-redundant guide tube. Based on the analysis, the tension of driving-wires was computed according to various manipulation poses and loading conditions. In our experiment, a newly designed surgical instrument successfully managed the external load of 1kgf, which was applied to the end effector of a surgical manipulator.",
"title": ""
},
{
"docid": "c78ebe9d42163142379557068b652a9c",
"text": "A tumor is a mass of tissue that's formed by an accumulation of abnormal cells. Normally, the cells in your body age, die, and are replaced by new cells. With cancer and other tumors, something disrupts this cycle. Tumor cells grow, even though the body does not need them, and unlike normal old cells, they don't die. As this process goes on, the tumor continues to grow as more and more cells are added to the mass. Image processing is an active research area in which medical image processing is a highly challenging field. Brain tumor analysis is done by doctors but its grading gives different conclusions which may vary from one doctor to another. In this project, it provides a foundation of segmentation and edge detection, as the first step towards brain tumor grading. Current segmentation approaches are reviewed with an emphasis placed on revealing the advantages and disadvantages of these methods for medical imaging applications. There are dissimilar types of algorithm were developed for brain tumor detection. Comparing to the other algorithms the performance of fuzzy c-means plays a major role. The patient's stage is determined by this process, whether it can be cured with medicine or not. Also we study difficulty to detect Mild traumatic brain injury (mTBI) the current tools are qualitative, which can lead to poor diagnosis and treatment and to overcome these difficulties, an algorithm is proposed that takes advantage of subject information and texture information from MR images. A contextual model is developed to simulate the progression of the disease using multiple inputs, such as the time post injury and the location of injury. Textural features are used along with feature selection for a single MR modality.",
"title": ""
},
{
"docid": "9530749d15f1f3493f920b84e6e8cebd",
"text": "The view that humans comprise only two types of beings, women and men, a framework that is sometimes referred to as the \"gender binary,\" played a profound role in shaping the history of psychological science. In recent years, serious challenges to the gender binary have arisen from both academic research and social activism. This review describes 5 sets of empirical findings, spanning multiple disciplines, that fundamentally undermine the gender binary. These sources of evidence include neuroscience findings that refute sexual dimorphism of the human brain; behavioral neuroendocrinology findings that challenge the notion of genetically fixed, nonoverlapping, sexually dimorphic hormonal systems; psychological findings that highlight the similarities between men and women; psychological research on transgender and nonbinary individuals' identities and experiences; and developmental research suggesting that the tendency to view gender/sex as a meaningful, binary category is culturally determined and malleable. Costs associated with reliance on the gender binary and recommendations for future research, as well as clinical practice, are outlined. (PsycINFO Database Record",
"title": ""
},
{
"docid": "8c679f94e31dc89787ccff8e79e624b5",
"text": "This paper presents a radar sensor package specifically developed for wide-coverage sounding and imaging of polar ice sheets from a variety of aircraft. Our instruments address the need for a reliable remote sensing solution well-suited for extensive surveys at low and high altitudes and capable of making measurements with fine spatial and temporal resolution. The sensor package that we are presenting consists of four primary instruments and ancillary systems with all the associated antennas integrated into the aircraft to maintain aerodynamic performance. The instruments operate simultaneously over different frequency bands within the 160 MHz-18 GHz range. The sensor package has allowed us to sound the most challenging areas of the polar ice sheets, ice sheet margins, and outlet glaciers; to map near-surface internal layers with fine resolution; and to detect the snow-air and snow-ice interfaces of snow cover over sea ice to generate estimates of snow thickness. In this paper, we provide a succinct description of each radar and associated antenna structures and present sample results to document their performance. We also give a brief overview of our field measurement programs and demonstrate the unique capability of the sensor package to perform multifrequency coincidental measurements from a single airborne platform. Finally, we illustrate the relevance of using multispectral radar data as a tool to characterize the entire ice column and to reveal important subglacial features.",
"title": ""
},
{
"docid": "99cb4f69fb7b6ff16c9bffacd7a42f4d",
"text": "Single cell segmentation is critical and challenging in live cell imaging data analysis. Traditional image processing methods and tools require time-consuming and labor-intensive efforts of manually fine-tuning parameters. Slight variations of image setting may lead to poor segmentation results. Recent development of deep convolutional neural networks(CNN) provides a potentially efficient, general and robust method for segmentation. Most existing CNN-based methods treat segmentation as a pixel-wise classification problem. However, three unique problems of cell images adversely affect segmentation accuracy: lack of established training dataset, few pixels on cell boundaries, and ubiquitous blurry features. The problem becomes especially severe with densely packed cells, where a pixel-wise classification method tends to identify two neighboring cells with blurry shared boundary as one cell, leading to poor cell count accuracy and affecting subsequent analysis. Here we developed a different learning strategy that combines strengths of CNN and watershed algorithm. The method first trains a CNN to learn Euclidean distance transform of binary masks corresponding to the input images. Then another CNN is trained to detect individual cells in the Euclidean distance transform. In the third step, the watershed algorithm takes the outputs from the previous steps as inputs and performs the segmentation. We tested the combined method and various forms of the pixel-wise classification algorithm on segmenting fluorescence and transmitted light images. The new method achieves similar pixel accuracy but significant higher cell count accuracy than pixel-wise classification methods do, and the advantage is most obvious when applying on noisy images of densely packed cells.",
"title": ""
},
{
"docid": "ef9650746ac9ab803b2a3bbdd5493fee",
"text": "This paper addresses the problem of establishing correspondences between two sets of visual features using higher order constraints instead of the unary or pairwise ones used in classical methods. Concretely, the corresponding hypergraph matching problem is formulated as the maximization of a multilinear objective function over all permutations of the features. This function is defined by a tensor representing the affinity between feature tuples. It is maximized using a generalization of spectral techniques where a relaxed problem is first solved by a multidimensional power method and the solution is then projected onto the closest assignment matrix. The proposed approach has been implemented, and it is compared to state-of-the-art algorithms on both synthetic and real data.",
"title": ""
},
{
"docid": "ab572c22a75656c19e50b311eb4985ec",
"text": "With the increasingly complex electromagnetic environment of communication, as well as the gradually increased radar signal types, how to effectively identify the types of radar signals at low SNR becomes a hot topic. A radar signal recognition algorithm based on entropy features, which describes the distribution characteristics for different types of radar signals by extracting Shannon entropy, Singular spectrum Shannon entropy and Singular spectrum index entropy features, was proposed to achieve the purpose of signal identification. Simulation results show that, the algorithm based on entropies has good anti-noise performance, and it can still describe the characteristics of signals well even at low SNR, which can achieve the purpose of identification and classification for different radar signals.",
"title": ""
},
{
"docid": "1de46f2eee8db2fad444faa6fbba4d1c",
"text": "Hyunsook Yoon Dongguk University, Korea This paper reports on a qualitative study that investigated the changes in students’ writing process associated with corpus use over an extended period of time. The primary purpose of this study was to examine how corpus technology affects students’ development of competence as second language (L2) writers. The research was mainly based on case studies with six L2 writers in an English for Academic Purposes writing course. The findings revealed that corpus use not only had an immediate effect by helping the students solve immediate writing/language problems, but also promoted their perceptions of lexicogrammar and language awareness. Once the corpus approach was introduced to the writing process, the students assumed more responsibility for their writing and became more independent writers, and their confidence in writing increased. This study identified a wide variety of individual experiences and learning contexts that were involved in deciding the levels of the students’ willingness and success in using corpora. This paper also discusses the distinctive contributions of general corpora to English for Academic Purposes and the importance of lexical and grammatical aspects in L2 writing pedagogy.",
"title": ""
},
{
"docid": "cb2f5ac9292df37860b02313293d2f04",
"text": "How can web services that depend on user generated content discern fake social engagement activities by spammers from legitimate ones? In this paper, we focus on the social site of YouTube and the problem of identifying bad actors posting inorganic contents and inflating the count of social engagement metrics. We propose an effective method, Leas (Local Expansion at Scale), and show how the fake engagement activities on YouTube can be tracked over time by analyzing the temporal graph based on the engagement behavior pattern between users and YouTube videos. With the domain knowledge of spammer seeds, we formulate and tackle the problem in a semi-supervised manner — with the objective of searching for individuals that have similar pattern of behavior as the known seeds — based on a graph diffusion process via local spectral subspace. We offer a fast, scalable MapReduce deployment adapted from the localized spectral clustering algorithm. We demonstrate the effectiveness of our deployment at Google by achieving a manual review accuracy of 98% on YouTube Comments graph in practice. Comparing with the state-of-the-art algorithm CopyCatch, Leas achieves 10 times faster running time on average. Leas is now actively in use at Google, searching for daily deceptive practices on YouTube’s engagement graph spanning over a",
"title": ""
}
] |
scidocsrr
|
57efca4f00bb10f737800d3d006c3ce9
|
Real-Time Data Analytics in Sensor Networks
|
[
{
"docid": "2abd75766d4875921edd4d6d63d5d617",
"text": "Wireless sensor networks typically consist of a large number of sensor nodes embedded in a physical space. Such sensors are low-power devices that are primarily used for monitoring several physical phenomena, potentially in remote harsh environments. Spatial and temporal dependencies between the readings at these nodes highly exist in such scenarios. Statistical contextual information encodes these spatio-temporal dependencies. It enables the sensors to locally predict their current readings based on their own past readings and the current readings of their neighbors. In this paper, we introduce context-aware sensors. Specifically, we propose a technique for modeling and learning statistical contextual information in sensor networks. Our approach is based on Bayesian classifiers; we map the problem of learning and utilizing contextual information to the problem of learning the parameters of a Bayes classifier, and then making inferences, respectively. We propose a scalable and energy-efficient procedure for online learning of these parameters in-network, in a distributed fashion. We discuss applications of our approach in discovering outliers and detection of faulty sensors, approximation of missing values, and in-network sampling. We experimentally analyze our approach in two applications, tracking and monitoring.",
"title": ""
}
] |
[
{
"docid": "a17bf7467da65eede493d543a335c9ae",
"text": "Recently interest has grown in applying activity theory, the leading theoretical approach in Russian psychology, to issues of human-computer interaction. This chapter analyzes why experts in the field are looking for an alternative to the currently dominant cognitive approach. The basic principles of activity theory are presented and their implications for human-computer interaction are discussed. The chapter concludes with an outline of the potential impact of activity theory on studies and design of computer use in real-life settings.",
"title": ""
},
{
"docid": "18140fdf4629a1c7528dcd6060f427c3",
"text": "Central to many text analysis methods is the notion of a concept: a set of semantically related keywords characterizing a specific object, phenomenon, or theme. Advances in word embedding allow building a concept from a small set of seed terms. However, naive application of such techniques may result in false positive errors because of the polysemy of natural language. To mitigate this problem, we present a visual analytics system called ConceptVector that guides a user in building such concepts and then using them to analyze documents. Document-analysis case studies with real-world datasets demonstrate the fine-grained analysis provided by ConceptVector. To support the elaborate modeling of concepts, we introduce a bipolar concept model and support for specifying irrelevant words. We validate the interactive lexicon building interface by a user study and expert reviews. Quantitative evaluation shows that the bipolar lexicon generated with our methods is comparable to human-generated ones.",
"title": ""
},
{
"docid": "f1d00811120f666763e56e33ad2c3b10",
"text": "Fairness is a critical trait in decision making. As machine-learning models are increasingly being used in sensitive application domains (e.g. education and employment) for decision making, it is crucial that the decisions computed by such models are free of unintended bias. But how can we automatically validate the fairness of arbitrary machine-learning models? For a given machine-learning model and a set of sensitive input parameters, our Aeqitas approach automatically discovers discriminatory inputs that highlight fairness violation. At the core of Aeqitas are three novel strategies to employ probabilistic search over the input space with the objective of uncovering fairness violation. Our Aeqitas approach leverages inherent robustness property in common machine-learning models to design and implement scalable test generation methodologies. An appealing feature of our generated test inputs is that they can be systematically added to the training set of the underlying model and improve its fairness. To this end, we design a fully automated module that guarantees to improve the fairness of the model. We implemented Aeqitas and we have evaluated it on six stateof- the-art classifiers. Our subjects also include a classifier that was designed with fairness in mind. We show that Aeqitas effectively generates inputs to uncover fairness violation in all the subject classifiers and systematically improves the fairness of respective models using the generated test inputs. In our evaluation, Aeqitas generates up to 70% discriminatory inputs (w.r.t. the total number of inputs generated) and leverages these inputs to improve the fairness up to 94%.",
"title": ""
},
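The abstract above searches the input space of a trained classifier for discriminatory inputs. The sketch below implements only the simplest ingredient of such a pipeline: a global random search that flips a designated sensitive attribute and checks whether the model's decision changes. The probabilistic local-perturbation strategies and the fairness-improving retraining module described in the abstract are not reproduced, and the data, model, and sample sizes are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in classifier and data; any model exposing .predict() would do.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.8 * X[:, 3] > 0).astype(int)   # column 3 plays the "sensitive" feature
clf = LogisticRegression().fit(X, y)

def find_discriminatory_inputs(model, n_features, sensitive_idx,
                               sensitive_values, n_samples=2000):
    """Random (global) search: an input is discriminatory if changing only
    the sensitive attribute changes the model's decision."""
    found = []
    for _ in range(n_samples):
        x = rng.normal(size=n_features)
        preds = set()
        for v in sensitive_values:
            x_v = x.copy()
            x_v[sensitive_idx] = v
            preds.add(int(model.predict(x_v.reshape(1, -1))[0]))
        if len(preds) > 1:          # decision depends on the sensitive attribute
            found.append(x)
    return found

disc = find_discriminatory_inputs(clf, 4, sensitive_idx=3, sensitive_values=[-1.0, 1.0])
print(f"{len(disc)} discriminatory inputs out of 2000 sampled")
```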
{
"docid": "fa0c62b91643a45a5eff7c1b1fa918f1",
"text": "This paper presents outdoor field experimental results to clarify the 4x4 MIMO throughput performance from applying multi-point transmission in the 15 GHz frequency band in the downlink of 5G cellular radio access system. The experimental results in large-cell scenario shows that up to 30 % throughput gain compared to non-multi-point transmission is achieved although the difference for the RSRP of two TPs is over 10 dB, so that the improvement for the antenna correlation is achievable and important aspect for the multi-point transmission in the 15 GHz frequency band as well as the improvement of the RSRP. Furthermore in small-cell scenario, the throughput gain of 70% and over 5 Gbps are achieved applying multi-point transmission in the condition of two different MIMO streams transmission from a single TP as distributed MIMO instead of four MIMO streams transmission from a single TP.",
"title": ""
},
{
"docid": "be9d13a24f41eadc0a1d15d99e594b55",
"text": "Traditionally, mobile robot design is based on wheels, tracks or legs with their respective advantages and disadvantages. Very few groups have explored designs with spherical morphology. During the past ten years, the number of robots with spherical shape and related studies has substantially increased, and a lot of work is done in this area of mobile robotics. Interest in robots with spherical morphology has also increased, in part due to NASA's search for an alternative design for a Mars rover since the wheel-based rover Spirit is now stuck for good in soft soil. This paper presents the spherical amphibious robot Groundbot, developed by Rotundus AB in Stockholm, Sweden, and describes in detail the navigation algorithm employed in this system.",
"title": ""
},
{
"docid": "c1477b801a49df62eb978b537fd3935e",
"text": "The striatum is thought to play an essential role in the acquisition of a wide range of motor, perceptual, and cognitive skills, but neuroimaging has not yet demonstrated striatal activation during nonmotor skill learning. Functional magnetic resonance imaging was performed while participants learned probabilistic classification, a cognitive task known to rely on procedural memory early in learning and declarative memory later in learning. Multiple brain regions were active during probabilistic classification compared with a perceptual-motor control task, including bilateral frontal cortices, occipital cortex, and the right caudate nucleus in the striatum. The left hippocampus was less active bilaterally during probabilistic classification than during the control task, and the time course of this hippocampal deactivation paralleled the expected involvement of medial temporal structures based on behavioral studies of amnesic patients. Findings provide initial evidence for the role of frontostriatal systems in normal cognitive skill learning.",
"title": ""
},
{
"docid": "84f688155a92ed2196974d24b8e27134",
"text": "My sincere thanks to Donald Norman and David Rumelhart for their support of many years. I also wish to acknowledge the help of The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the sponsoring agencies. Approved for public release; distribution unlimited. Reproduction in whole or in part is permitted for any purpose of the United States Government Requests for reprints should be sent to the",
"title": ""
},
{
"docid": "a7bd8b02d7a46e6b96223122f673a222",
"text": "This study was conducted to identify the risk factors that are associated with neonatal mortality in lambs and kids in Jordan. The bacterial causes of mortality in lambs and kids were investigated. One hundred sheep and goat flocks were selected randomly from different areas of North Jordan at the beginning of the lambing season. The flocks were visited every other week to collect information and to take samples from freshly dead animals. By the end of the lambing season, flocks that had neonatal mortality rate ≥ 1.0% were considered as “case group” while flocks that had neonatal mortality rate less than 1.0% − as “control group”. The results indicated that neonatal mortality rate (within 4 weeks of age), in lambs and kids, was 3.2%. However, the early neonatal mortality rate (within 48 hours of age) was 2.01% and represented 62.1% of the neonatal mortalities. The following risk factors were found to be associated with the neonatal mortality in lambs and kids: not separating the neonates from adult animals; not vaccinating dams against infectious diseases (pasteurellosis, colibacillosis and enterotoxemia); walking more than 5 km and starvation-mismothering exposure. The causes of neonatal mortality in lambs and kids were: diarrhea (59.75%), respiratory diseases (13.3%), unknown causes (12.34%), and accident (8.39%). Bacteria responsible for neonatal mortality were: Escherichia coli, Pasteurella multocida, Clostridium perfringens and Staphylococcus aureus. However, E. coli was the most frequent bacterial species identified as cause of neonatal mortality in lambs and kids and represented 63.4% of all bacterial isolates. The E. coli isolates belonged to 10 serogroups, the O44 and O26 being the most frequent isolates.",
"title": ""
},
{
"docid": "1eb4805e6874ea1882a995d0f1861b80",
"text": "The Asian-Pacific Association for the Study of the Liver (APASL) convened an international working party on the \"APASL consensus statements and recommendation on management of hepatitis C\" in March, 2015, in order to revise \"APASL consensus statements and management algorithms for hepatitis C virus infection (Hepatol Int 6:409-435, 2012)\". The working party consisted of expert hepatologists from the Asian-Pacific region gathered at Istanbul Congress Center, Istanbul, Turkey on 13 March 2015. New data were presented, discussed and debated to draft a revision. Participants of the consensus meeting assessed the quality of cited studies. Finalized recommendations on treatment of hepatitis C are presented in this review.",
"title": ""
},
{
"docid": "76ecd4ba20333333af4d09b894ff29fc",
"text": "This study is an application of social identity theory to feminist consciousness and activism. For women, strong gender identifications may enhance support for equality struggles, whereas for men, they may contribute to backlashes against feminism. University students (N � 276), primarily Euroamerican, completed a measure of gender self-esteem (GSE, that part of one’s selfconcept derived from one’s gender), and two measures of feminism. High GSE in women and low GSE in men were related to support for feminism. Consistent with past research, women were more supportive of feminism than men, and in both genders, support for feminist ideas was greater than self-identification as a feminist.",
"title": ""
},
{
"docid": "e5f5aa53a90f482fb46a7f02bae27b20",
"text": "Machinima is a low-cost alternative to full production filmmaking. However, creating quality cinematic visualizations with existing machinima techniques still requires a high degree of talent and effort. We introduce a lightweight artificial intelligence system, Cambot, that can be used to assist in machinima production. Cambot takes a script as input and produces a cinematic visualization. Unlike other virtual cinematography systems, Cambot favors an offline algorithm coupled with an extensible library of specific modular and reusable facets of cinematic knowledge. One of the advantages of this approach to virtual cinematography is a tight coordination between the positions and movements of the camera and the actors.",
"title": ""
},
{
"docid": "240c47d27533069f339d8eb090a637a9",
"text": "This paper discusses the active and reactive power control method for a modular multilevel converter (MMC) based grid-connected PV system. The voltage vector space analysis is performed by using average value models for the feasibility analysis of reactive power compensation (RPC). The proposed double-loop control strategy enables the PV system to handle unidirectional active power flow and bidirectional reactive power flow. Experiments have been performed on a laboratory-scaled modular multilevel PV inverter. The experimental results verify the correctness and feasibility of the proposed strategy.",
"title": ""
},
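The abstract above refers to a double-loop control strategy for handling active and reactive power in a grid-connected MMC PV system. As a generic illustration of what a cascaded (double-loop) structure looks like, the sketch below chains outer power PI loops into inner current PI loops in the dq frame; the gains, sampling time, and the hand-off to the MMC modulation stage are assumptions, not values or structure taken from the paper.

```python
class PI:
    """Simple discrete PI controller."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.acc = kp, ki, dt, 0.0
    def step(self, error):
        self.acc += error * self.dt
        return self.kp * error + self.ki * self.acc

class DoubleLoopPQController:
    """Generic cascaded (double-loop) dq-frame controller sketch: outer PI
    loops turn active/reactive power errors into d/q current references,
    inner PI loops turn current errors into voltage references."""
    def __init__(self, dt=1e-4):
        self.p_loop = PI(0.5, 20.0, dt)     # outer active-power loop  -> i_d*
        self.q_loop = PI(0.5, 20.0, dt)     # outer reactive-power loop -> i_q*
        self.id_loop = PI(2.0, 400.0, dt)   # inner d-axis current loop
        self.iq_loop = PI(2.0, 400.0, dt)   # inner q-axis current loop

    def step(self, p_ref, q_ref, p_meas, q_meas, id_meas, iq_meas):
        id_ref = self.p_loop.step(p_ref - p_meas)
        iq_ref = self.q_loop.step(q_ref - q_meas)
        vd_ref = self.id_loop.step(id_ref - id_meas)
        vq_ref = self.iq_loop.step(iq_ref - iq_meas)
        return vd_ref, vq_ref   # would be passed to the MMC modulation stage

ctrl = DoubleLoopPQController()
print(ctrl.step(p_ref=1000.0, q_ref=200.0, p_meas=950.0,
                q_meas=180.0, id_meas=2.9, iq_meas=0.5))
```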
{
"docid": "eacf295c0cbd52599a1567c6d4193007",
"text": "Search Ranking and Recommendations are fundamental problems of crucial interest to major Internet companies, including web search engines, content publishing websites and marketplaces. However, despite sharing some common characteristics a one-size-fits-all solution does not exist in this space. Given a large difference in content that needs to be ranked, personalized and recommended, each marketplace has a somewhat unique challenge. Correspondingly, at Airbnb, a short-term rental marketplace, search and recommendation problems are quite unique, being a two-sided marketplace in which one needs to optimize for host and guest preferences, in a world where a user rarely consumes the same item twice and one listing can accept only one guest for a certain set of dates. In this paper we describe Listing and User Embedding techniques we developed and deployed for purposes of Real-time Personalization in Search Ranking and Similar Listing Recommendations, two channels that drive 99% of conversions. The embedding models were specifically tailored for Airbnb marketplace, and are able to capture guest's short-term and long-term interests, delivering effective home listing recommendations. We conducted rigorous offline testing of the embedding models, followed by successful online tests before fully deploying them into production.",
"title": ""
},
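The abstract above trains listing embeddings from user click sessions for real-time personalization in search and similar-listing recommendations. A minimal way to reproduce the core idea is to treat each session as a sentence of listing IDs and train skip-gram embeddings over it, as sketched below with gensim (parameter names follow gensim >= 4; older versions used size instead of vector_size). The market-aware negative sampling, the booked-listing global context, and the user-type embeddings described in the paper are not included, and the session data is made up.

```python
from gensim.models import Word2Vec

# Toy data: each "sentence" is one user's click session of listing IDs.
click_sessions = [
    ["listing_12", "listing_7", "listing_12", "listing_44"],
    ["listing_7", "listing_44", "listing_90"],
    ["listing_90", "listing_12", "listing_7"],
]

model = Word2Vec(
    sentences=click_sessions,
    vector_size=16,   # embedding dimension
    window=3,         # context window within a session
    sg=1,             # skip-gram
    negative=5,       # negative sampling
    min_count=1,
    epochs=50,
)

# Similar-listing recommendation = nearest neighbours in embedding space.
print(model.wv.most_similar("listing_7", topn=2))
```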
{
"docid": "47c88bb234a6e21e8037a67e6dd2444f",
"text": "Lacking an operational theory to explain the organization and behaviour of matter in unicellular and multicellular organisms hinders progress in biology. Such a theory should address life cycles from ontogenesis to death. This theory would complement the theory of evolution that addresses phylogenesis, and would posit theoretical extensions to accepted physical principles and default states in order to grasp the living state of matter and define proper biological observables. Thus, we favour adopting the default state implicit in Darwin’s theory, namely, cell proliferation with variation plus motility, and a framing principle, namely, life phenomena manifest themselves as non-identical iterations of morphogenetic processes. From this perspective, organisms become a consequence of the inherent variability generated by proliferation, motility and self-organization. Morphogenesis would then be the result of the default state plus physical constraints, like gravity, and those present in living organisms, like muscular tension.",
"title": ""
},
{
"docid": "1a1c9b8fa2b5fc3180bc1b504def5ea1",
"text": "Wireless sensor networks can be deployed in any attended or unattended environments like environmental monitoring, agriculture, military, health care etc., where the sensor nodes forward the sensing data to the gateway node. As the sensor node has very limited battery power and cannot be recharged after deployment, it is very important to design a secure, effective and light weight user authentication and key agreement protocol for accessing the sensed data through the gateway node over insecure networks. Most recently, Turkanovic et al. proposed a light weight user authentication and key agreement protocol for accessing the services of the WSNs environment and claimed that the same protocol is efficient in terms of security and complexities than related existing protocols. In this paper, we have demonstrated several security weaknesses of the Turkanovic et al. protocol. Additionally, we have also illustrated that the authentication phase of the Turkanovic et al. is not efficient in terms of security parameters. In order to fix the above mentioned security pitfalls, we have primarily designed a novel architecture for the WSNs environment and basing upon which a proposed scheme has been presented for user authentication and key agreement scheme. The security validation of the proposed protocol has done by using BAN logic, which ensures that the protocol achieves mutual authentication and session key agreement property securely between the entities involved. Moreover, the proposed scheme has simulated using well popular AVISPA security tool, whose simulation results show that the protocol is SAFE under OFMC and CL-AtSe models. Besides, several security issues informally confirm that the proposed protocol is well protected in terms of relevant security attacks including the above mentioned security pitfalls. The proposed protocol not only resists the above mentioned security weaknesses, but also achieves complete security requirements including specially energy efficiency, user anonymity, mutual authentication and user-friendly password change phase. Performance comparison section ensures that the protocol is relatively efficient in terms of complexities. The security and performance analysis makes the system so efficient that the proposed protocol can be implemented in real-life application. © 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7c54cef80d345cdb10f56ca440f5fad9",
"text": "SIR, Arndt–Gottron scleromyxoedema is a rare fibromucinous disorder regarded as a variant of the lichen myxoedematosus. The diagnostic criteria are a generalized papular and sclerodermoid eruption, a microscopic triad of mucin deposition, fibroblast proliferation and fibrosis, a monoclonal gammopathy (mostly IgG-k paraproteinaemia) and the absence of a thyroid disorder. This disease initially presents with sclerosis of the skin and clusters of small lichenoid papules with a predilection for the face, neck and the forearm. Progressively, the skin lesions can become more widespread and the induration of skin can result in a scleroderma-like condition with sclerodactyly and microstomia, reduced mobility and disability. Systemic involvement is common, e.g. upper gastrointestinal dysmotility, proximal myopathy, joint contractures, neurological complications such as psychic disturbances and encephalopathy, obstructive ⁄restrictive lung disease, as well as renal and cardiovascular involvement. Numerous treatment options have been described in the literature. These include corticosteroids, retinoids, thalidomide, extracorporeal photopheresis (ECP), psoralen plus ultraviolet A radiation, ciclosporin, cyclophosphamide, melphalan or autologous stem cell transplantation. In September 1999, a 48-year-old white female first noticed an erythematous induration with a lichenoid papular eruption on her forehead. Three months later the lesions became more widespread including her face (Fig. 1a), neck, shoulders, forearms (Fig. 2a) and legs. When the patient first presented in our department in June 2000, she had problems opening her mouth fully as well as clenching both hands or moving her wrist. The histological examination of the skin biopsy was highly characteristic of Arndt–Gottron scleromyxoedema. Full blood count, blood morphology, bone marrow biopsy, bone scintigraphy and thyroid function tests were normal. Serum immunoelectrophoresis revealed an IgG-k paraproteinaemia. Urinary Bence-Jones proteins were negative. No systemic involvement was disclosed. We initiated ECP therapy in August 2000, initially at 2-week intervals (later monthly) on two succeeding days. When there was no improvement after 3 months, we also administered cyclophosphamide (Endoxana ; Baxter Healthcare Ltd, Newbury, U.K.) at a daily dose of 100 mg with mesna 400 mg (Uromitexan ; Baxter) prophylaxis. The response to this therapy was rather moderate. In February 2003 the patient developed a change of personality and loss of orientation and was admitted to hospital. The extensive neurological, radiological and microbiological diagnostics were unremarkable at that time. A few hours later the patient had seizures and was put on artificial ventilation in an intensive care unit. The patient was comatose for several days. A repeated magnetic resonance imaging scan was still normal, but the cerebrospinal fluid tap showed a dysfunction of the blood–cerebrospinal fluid barrier. A bilateral loss of somatosensory evoked potentials was noticeable. The neurological symptoms were classified as a ‘dermatoneuro’ syndrome, a rare extracutaneous manifestation of scleromyxoedema. After initiation of treatment with methylprednisolone (Urbason ; Aventis, Frankfurt, Germany) the neurological situation normalized in the following 2 weeks. No further medical treatment was necessary. 
In April 2003 therapy options were re-evaluated and the patient was started and maintained on a 7-day course of melphalan 7.5 mg daily (Alkeran; GlaxoSmithKline, Uxbridge, U.K.) in combination with prednisolone 40 mg daily (Decortin H; Merck, Darmstadt, Germany) every 6 weeks. This treat",
"title": ""
},
{
"docid": "d37d6139ced4c85ff0cbc4cce018212b",
"text": "We describe isone, a tool that facilitates the visual exploration of social networks. Social network analysis is a methodological approach in the social sciences using graph-theoretic concepts to describe, understand and explain social structure. The isone software is an attempt to integrate analysis and visualization of social networks and is intended to be used in research and teaching. While we are primarily focussing on users in the social sciences, several features provided in the tool will be useful in other fields as well. In contrast to more conventional mathematical software in the social sciences that aim at providing a comprehensive suite of analytical options, our emphasis is on complementing every option we provide with tailored means of graphical interaction. We attempt to make complicated types of analysis and data handling transparent, intuitive, and more readily accessible. User feedback indicates that many who usually regard data exploration and analysis complicated and unnerving enjoy the playful nature of visual interaction. Consequently, much of the tool is about graph drawing methods specifically adapted to facilitate visual data exploration. The origins of isone lie in an interdisciplinary cooperation with researchers from political science which resulted in innovative uses of graph drawing methods for social network visualization, and prototypical implementations thereof. With the growing demand for access to these methods, we started implementing an integrated tool for public use. It should be stressed, however, that isone remains a research platform and testbed for innovative methods, and is not intended to become",
"title": ""
},
{
"docid": "742c0b15f6a466bfb4e5130b49f79e64",
"text": "There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks (DBNs); however, scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model that scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique that shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images.",
"title": ""
},
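The abstract above introduces probabilistic max-pooling, in which each pooling block's detector units plus an explicit "off" state are treated as one multinomial unit, so at most one detector unit in the block is on and the pooling unit is on exactly when any detector unit is. The NumPy sketch below computes those block-wise probabilities from detector pre-activations; it is a stand-alone illustration of the pooling step only (no RBM training, no numerical stabilisation), with made-up input.

```python
import numpy as np

def probabilistic_max_pool(pre_activations, block=2):
    """Return per-detector 'on' probabilities and pooling-unit 'on'
    probabilities for non-overlapping block x block regions."""
    H, W = pre_activations.shape
    assert H % block == 0 and W % block == 0
    hidden_p = np.zeros_like(pre_activations, dtype=float)
    pool_p = np.zeros((H // block, W // block))
    for i in range(0, H, block):
        for j in range(0, W, block):
            e = np.exp(pre_activations[i:i+block, j:j+block])
            z = 1.0 + e.sum()                     # the '1' is exp(0) for the off-state
            hidden_p[i:i+block, j:j+block] = e / z
            pool_p[i // block, j // block] = 1.0 - 1.0 / z
    return hidden_p, pool_p

acts = np.random.randn(4, 4)          # toy detector pre-activations
hidden, pooled = probabilistic_max_pool(acts)
print(pooled)
```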
{
"docid": "c4aafcc0a98882de931713359e55a04a",
"text": "We present a computer vision tool that analyses video from a CCTV system installed on fishing trawlers to monitor discarded fish catch. The system aims to support expert observers who review the footage and verify numbers, species and sizes of discarded fish. The operational environment presents a significant challenge for these tasks. Fish are processed below deck under fluorescent lights, they are randomly oriented and there are multiple occlusions. The scene is unstructured and complicated by the presence of fishermen processing the catch. We describe an approach to segmenting the scene and counting fish that exploits the N4-Fields algorithm. We performed extensive tests of the algorithm on a data set comprising 443 frames from 6 belts. Results indicate the relative count error (for individual fish) ranges from 2% to 16%. We believe this is the first system that is able to handle footage from operational trawlers.",
"title": ""
},
{
"docid": "1e493440a61578c8c6ca8fbe63f475d6",
"text": "3D object detection is an essential task in autonomous driving. Recent techniques excel with highly accurate detection rates, provided the 3D input data is obtained from precise but expensive LiDAR technology. Approaches based on cheaper monocular or stereo imagery data have, until now, resulted in drastically lower accuracies — a gap that is commonly attributed to poor image-based depth estimation. However, in this paper we argue that data representation (rather than its quality) accounts for the majority of the difference. Taking the inner workings of convolutional neural networks into consideration, we propose to convert imagebased depth maps to pseudo-LiDAR representations — essentially mimicking LiDAR signal. With this representation we can apply different existing LiDAR-based detection algorithms. On the popular KITTI benchmark, our approach achieves impressive improvements over the existing stateof-the-art in image-based performance — raising the detection accuracy of objects within 30m range from the previous state-of-the-art of 22% to an unprecedented 74%. At the time of submission our algorithm holds the highest entry on the KITTI 3D object detection leaderboard for stereo image based approaches.",
"title": ""
}
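The abstract above hinges on converting an image-based depth map into a pseudo-LiDAR point cloud. The sketch below shows the standard pinhole back-projection such a conversion uses; the intrinsics and depth values are made up, and the change of coordinates into the LiDAR/ego frame that a full pipeline would apply afterwards is omitted.

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project an HxW depth map (metres) into an (N, 3) point cloud in
    the camera frame - the 'pseudo-LiDAR' representation."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]        # drop invalid (zero-depth) pixels

# Toy example: a flat 2 m plane with made-up KITTI-like intrinsics.
depth = np.full((375, 1242), 2.0)
cloud = depth_to_pseudo_lidar(depth, fx=721.5, fy=721.5, cx=609.6, cy=172.9)
print(cloud.shape)
```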
] |
scidocsrr
|
a56f23de3827e0be9e6269cbd25ac03e
|
Wideband, Low-Profile Patch Array Antenna With Corporate Stacked Microstrip and Substrate Integrated Waveguide Feeding Structure
|
[
{
"docid": "50bd58b07a2cf7bf51ff291b17988a2c",
"text": "A wideband linearly polarized antenna element with complementary sources is proposed and exploited for array antennas. The element covers a bandwidth of 38.7% from 50 to 74 GHz with an average gain of 8.7 dBi. The four-way broad wall coupler is applied for the 2 <inline-formula> <tex-math notation=\"LaTeX\">$\\times $ </tex-math></inline-formula> 2 subarray, which suppresses the cross-polarization of a single element. Based on the designed 2 <inline-formula> <tex-math notation=\"LaTeX\">$ \\times $ </tex-math></inline-formula> 2 subarray, two larger arrays have been designed and measured. The <inline-formula> <tex-math notation=\"LaTeX\">$4 \\times 4$ </tex-math></inline-formula> array exhibits 26.7% bandwidth, fully covering the 57–71 GHz unlicensed band. The <inline-formula> <tex-math notation=\"LaTeX\">$8 \\times 8$ </tex-math></inline-formula> array antenna covers a bandwidth of 14.5 GHz (22.9%) from 56.1 to 70.6 GHz with a peak gain of 26.7 dBi, and the radiation efficiency is around 80% within the matching band. It is demonstrated that the proposed antenna element and arrays can be used for future 5G applications to cover the 22% bandwidth of the unlicensed band with high gain and low loss.",
"title": ""
}
] |
[
{
"docid": "45079629c4bc09cc8680b3d9ac325112",
"text": "Power consumption is of utmost concern in sensor networks. Researchers have several ways of measuring the power consumption of a complete sensor network, but they are typically either impractical or inaccurate. To meet the need for practical and scalable measurement of power consumption of sensor networks, we have developed a cycle-accurate simulator, called COOJA/MSPsim, that enables live power estimation of systems running on MSP430 processors. This demonstration shows the ease of use and the power measurement accuracy of COOJA/MSPsim. The demo setup consists of a small sensor network and a laptop. Beside gathering software-based power measurements from the motes, the laptop runs COOJA/MSPsim to simulate the same network.We visualize the power consumption of both the simulated and the real sensor network, and show that the simulator produces matching results.",
"title": ""
},
{
"docid": "678df42df19aa5a15ede86b4a19c49c4",
"text": "This paper presents the fundamentals of Origami engineering and its application in nowadays as well as future industry. Several main cores of mathematical approaches such as HuzitaHatori axioms, Maekawa and Kawasaki’s theorems are introduced briefly. Meanwhile flaps and circle packing by Robert Lang is explained to make understood the underlying principles in designing crease pattern. Rigid origami and its corrugation patterns which are potentially applicable for creating transformable or temporary spaces is discussed to show the transition of origami from paper to thick material. Moreover, some innovative applications of origami such as eyeglass, origami stent and high tech origami based on mentioned theories and principles are showcased in section III; while some updated origami technology such as Vacuumatics, self-folding of polymer sheets and programmable matter folding which could greatlyenhance origami structureare demonstrated in Section IV to offer more insight in future origami. Keywords—Origami, origami application, origami engineering, origami technology, rigid origami.",
"title": ""
},
{
"docid": "690544595e0fa2e5f1c40e3187598263",
"text": "In this paper, a methodology is presented and employed for simulating the Internet of Things (IoT). The requirement for scalability, due to the possibly huge amount of involved sensors and devices, and the heterogeneous scenarios that might occur, impose resorting to sophisticated modeling and simulation techniques. In particular, multi-level simulation is regarded as a main framework that allows simulating large-scale IoT environments while keeping high levels of detail, when it is needed. We consider a use case based on the deployment of smart services in decentralized territories. A two level simulator is employed, which is based on a coarse agent-based, adaptive parallel and distributed simulation approach to model the general life of simulated entities. However, when needed a finer grained simulator (based on OMNeT++) is triggered on a restricted portion of the simulated area, which allows considering all issues concerned with wireless communications. Based on this use case, it is confirmed that the ad-hoc wireless networking technologies do represent a principle tool to deploy smart services over decentralized countrysides. Moreover, the performance evaluation confirms the viability of utilizing multi-level simulation for simulating large scale IoT environments.",
"title": ""
},
{
"docid": "162823edcbd50579a1d386f88931d59d",
"text": "Elevated liver enzymes are a common scenario encountered by physicians in clinical practice. For many physicians, however, evaluation of such a problem in patients presenting with no symptoms can be challenging. Evidence supporting a standardized approach to evaluation is lacking. Although alterations of liver enzymes could be a normal physiological phenomenon in certain cases, it may also reflect potential liver injury in others, necessitating its further assessment and management. In this article, we provide a guide to primary care clinicians to interpret abnormal elevation of liver enzymes in asymptomatic patients using a step-wise algorithm. Adopting a schematic approach that classifies enzyme alterations on the basis of pattern (hepatocellular, cholestatic and isolated hyperbilirubinemia), we review an approach to abnormal alteration of liver enzymes within each section, the most common causes of enzyme alteration, and suggest initial investigations.",
"title": ""
},
{
"docid": "450aee5811484932e8542eb4f0eefa4d",
"text": "Natural Language Generation systems in interactive settings often face a multitude of choices, given that the communicative effect of each utterance they generate depends crucially on the interplay between its physical circumstances, addressee and interaction history. This is particularly true in interactive and situated settings. In this paper we present a novel approach for situated Natural Language Generation in dialogue that is based on hierarchical reinforcement learning and learns the best utterance for a context by optimisation through trial and error. The model is trained from human–human corpus data and learns particularly to balance the trade-off between efficiency and detail in giving instructions: the user needs to be given sufficient information to execute their task, but without exceeding their cognitive load. We present results from simulation and a task-based human evaluation study comparing two different versions of hierarchical reinforcement learning: One operates using a hierarchy of policies with a large state space and local knowledge, and the other additionally shares knowledge across generation subtasks to enhance performance. Results show that sharing knowledge across subtasks achieves better performance than learning in isolation, leading to smoother and more successful interactions that are better perceived by human users.",
"title": ""
},
{
"docid": "96344ccc2aac1a7e7fbab96c1355fa10",
"text": "A highly sensitive field-effect sensor immune to environmental potential fluctuation is proposed. The sensor circuit consists of two sensors each with a charge sensing field effect transistor (FET) and an extended sensing gate (SG). By enlarging the sensing gate of an extended gate ISFET, a remarkable sensitivity of 130mV/pH is achieved, exceeding the conventional Nernst limit of 59mV/pH. The proposed differential sensing circuit consists of a pair of matching n-channel and p-channel ion sensitive sensors connected in parallel and biased at a matched transconductance bias point. Potential fluctuations in the electrolyte appear as common mode signal to the differential pair and are cancelled by the matched transistors. This novel differential measurement technique eliminates the need for a true reference electrode such as the bulky Ag/AgCl reference electrode and enables the use of the sensor for autonomous and implantable applications.",
"title": ""
},
{
"docid": "8129b5aae31133afbb8a145d4ac131fc",
"text": "Community health workers (CHWs) are promoted as a mechanism to increase community involvement in health promotion efforts, despite little consensus about the role and its effectiveness. This article reviews the databased literature on CHW effectiveness, which indicates preliminary support for CHWs in increasing access to care, particularly in underserved populations. There are a smaller number of studies documenting outcomes in the areas of increased health knowledge, improved health status outcomes, and behavioral changes, with inconclusive results. Although CHWs show some promise as an intervention, the role can be doomed by overly high expectations, lack of a clear focus, and lack of documentation. Further research is required with an emphasis on stronger study design, documentation of CHW activities, and carefully defined target populations.",
"title": ""
},
{
"docid": "31404322fb03246ba2efe451191e29fa",
"text": "OBJECTIVES\nThe aim of this study is to report an unusual form of penile cancer presentation associated with myiasis infestation, treatment options and outcomes.\n\n\nMATERIALS AND METHODS\nWe studied 10 patients with suspected malignant neoplasm of the penis associated with genital myiasis infestation. Diagnostic assessment was conducted through clinical history, physical examination, penile biopsy, larvae identification and computerized tomography scan of the chest, abdomen and pelvis. Clinical and pathological staging was done according to 2002 TNM classification system. Radical inguinal lymphadenectomy was conducted according to the primary penile tumor pathology and clinical lymph nodes status.\n\n\nRESULTS\nPatients age ranged from 41 to 77 years (mean=62.4). All patients presented squamous cell carcinoma of the penis in association with myiasis infestation caused by Psychoda albipennis. Tumor size ranged from 4cm to 12cm (mean=5.3). Circumcision was conducted in 1 (10%) patient, while penile partial penectomy was performed in 5 (50%). Total penectomy was conducted in 2 (20%) patients, while emasculation was the treatment option for 2 (20%). All patients underwent radical inguinal lymphadenectomy. Prophylactic lymphadenectomy was performed on 3 (30%) patients, therapeutic on 5 (50%), and palliative lymphadenectomy on 2 (20%) patients. Time elapsed from primary tumor treatment to radical inguinal lymphadenectomy was 2 to 6 weeks. The mean follow-up was 34.3 months.\n\n\nCONCLUSION\nThe occurrence of myiasis in the genitalia is more common in patients with precarious hygienic practices and low socio-economic level. The treatment option varied according to the primary tumor presentation and clinical lymph node status.",
"title": ""
},
{
"docid": "26bd615c16b99e84b787b573d6028878",
"text": "Extendible hashing is a new access technique, in which the user is guaranteed no more than two page faults to locate the data associated with a given unique identifier, or key. Unlike conventional hashing, extendible hashing has a dynamic structure that grows and shrinks gracefully as the database grows and shrinks. This approach simultaneously solves the problem of making hash tables that are extendible and of making radix search trees that are balanced. We study, by analysis and simulation, the performance of extendible hashing. The results indicate that extendible hashing provides an attractive alternative to other access methods, such as balanced trees.",
"title": ""
},
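The abstract above describes extendible hashing: a directory indexed by a growing number of low-order hash bits points to buckets that are split (and the directory doubled when necessary) on overflow, so a lookup touches at most one directory entry and one bucket. The sketch below is a minimal in-memory illustration of that scheme; bucket capacity, the use of Python's built-in hash, and the class names are arbitrary choices rather than anything prescribed by the paper.

```python
class Bucket:
    def __init__(self, depth, capacity=4):
        self.depth = depth            # local depth
        self.capacity = capacity
        self.items = {}

class ExtendibleHash:
    """Minimal extendible hashing sketch with directory doubling."""
    def __init__(self, capacity=4):
        self.global_depth = 1
        self.capacity = capacity
        self.directory = [Bucket(1, capacity), Bucket(1, capacity)]

    def _index(self, key):
        return hash(key) & ((1 << self.global_depth) - 1)   # low-order bits

    def get(self, key):
        return self.directory[self._index(key)].items.get(key)

    def put(self, key, value):
        bucket = self.directory[self._index(key)]
        if key in bucket.items or len(bucket.items) < self.capacity:
            bucket.items[key] = value
            return
        self._split(bucket)
        self.put(key, value)                  # retry after the split

    def _split(self, bucket):
        if bucket.depth == self.global_depth:  # no spare bit: double the directory
            self.directory += self.directory
            self.global_depth += 1
        bucket.depth += 1
        sibling = Bucket(bucket.depth, self.capacity)
        mask_bit = 1 << (bucket.depth - 1)
        # Re-point the directory entries that now belong to the sibling.
        for i, b in enumerate(self.directory):
            if b is bucket and (i & mask_bit):
                self.directory[i] = sibling
        # Redistribute the items between the two buckets.
        old = bucket.items
        bucket.items = {}
        for k, v in old.items():
            target = sibling if (hash(k) & mask_bit) else bucket
            target.items[k] = v

table = ExtendibleHash(capacity=2)
for i in range(20):
    table.put(f"key{i}", i)
print(table.get("key7"), table.global_depth)
```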
{
"docid": "c4e8dbd875e35e5bd9bd55ca24cdbfc2",
"text": "In this paper, we introduce a new framework for recognizing textual entailment which depends on extraction of the set of publiclyheld beliefs – known as discourse commitments – that can be ascribed to the author of a text or a hypothesis. Once a set of commitments have been extracted from a t-h pair, the task of recognizing textual entailment is reduced to the identification of the commitments from a t which support the inference of the h. Promising results were achieved: our system correctly identified more than 80% of examples from the RTE-3 Test Set correctly, without the need for additional sources of training data or other web-based resources.",
"title": ""
},
{
"docid": "e4069b8312b8a273743b31b12b1dfbae",
"text": "Automatic keyphrase extraction techniques play an important role for many tasks including indexing, categorizing, summarizing, and searching. In this paper, we develop and evaluate an automatic keyphrase extraction system for scientific documents. Compared with previous work, our system concentrates on two important issues: (1) more precise location for potential keyphrases: a new candidate phrase generation method is proposed based on the core word expansion algorithm, which can reduce the size of the candidate set by about 75% without increasing the computational complexity; (2) overlap elimination for the output list: when a phrase and its sub-phrases coexist as candidates, an inverse document frequency feature is introduced for selecting the proper granularity. Additional new features are added for phrase weighting. Experiments based on real-world datasets were carried out to evaluate the proposed system. The results show the efficiency and effectiveness of the refined candidate set and demonstrate that the new features improve the accuracy of the system. The overall performance of our system compares favorably with other state-of-the-art keyphrase extraction systems.",
"title": ""
},
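The abstract above generates candidate phrases via core-word expansion and scores them with additional features such as inverse document frequency. The sketch below does not implement that method; it shows a much simpler, generic baseline (stopword-delimited candidate phrases scored RAKE-style by word degree/frequency) purely to illustrate the candidate-generation-then-ranking pipeline. The stopword list and scoring formula are illustrative choices.

```python
import re
from collections import Counter, defaultdict

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "for", "is", "are",
             "on", "with", "we", "this", "that", "based", "using"}

def candidate_phrases(text):
    """Split the text at stopwords/punctuation; the remaining runs of
    content words are the candidate keyphrases (generic baseline)."""
    tokens = re.findall(r"[A-Za-z][A-Za-z-]*", text.lower())
    phrases, current = [], []
    for tok in tokens:
        if tok in STOPWORDS:
            if current:
                phrases.append(tuple(current))
                current = []
        else:
            current.append(tok)
    if current:
        phrases.append(tuple(current))
    return phrases

def rank_keyphrases(text, top_k=5):
    """Score candidates by the summed degree/frequency of their words."""
    phrases = candidate_phrases(text)
    freq, degree = Counter(), defaultdict(int)
    for p in phrases:
        for w in p:
            freq[w] += 1
            degree[w] += len(p)
    score = lambda p: sum(degree[w] / freq[w] for w in p)
    ranked = sorted(set(phrases), key=score, reverse=True)
    return [" ".join(p) for p in ranked[:top_k]]

abstract = ("Automatic keyphrase extraction techniques play an important role "
            "for indexing, categorizing, summarizing, and searching scientific documents.")
print(rank_keyphrases(abstract))
```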
{
"docid": "122a27336317372a0d84ee353bb94a4b",
"text": "Recently, many advanced machine learning approaches have been proposed for coreference resolution; however, all of the discriminatively-trained models reason over mentions rather than entities. That is, they do not explicitly contain variables indicating the “canonical” values for each attribute of an entity (e.g., name, venue, title, etc.). This canonicalization step is typically implemented as a post-processing routine to coreference resolution prior to adding the extracted entity to a database. In this paper, we propose a discriminatively-trained model that jointly performs coreference resolution and canonicalization, enabling features over hypothesized entities. We validate our approach on two different coreference problems: newswire anaphora resolution and research paper citation matching, demonstrating improvements in both tasks and achieving an error reduction of up to 62% when compared to a method that reasons about mentions only.",
"title": ""
},
{
"docid": "d97b2b028fbfe0658e841954958aac06",
"text": "Videogame control interfaces continue to evolve beyond their traditional roots, with devices encouraging more natural forms of interaction growing in number and pervasiveness. Yet little is known about their true potential for intuitive use. This paper proposes methods to leverage existing intuitive interaction theory for games research, specifically by examining different types of naturally mapped control interfaces for videogames using new measures for previous player experience. Three commercial control devices for a racing game were categorised using an existing typology, according to how the interface maps physical control inputs with the virtual gameplay actions. The devices were then used in a within-groups (n=64) experimental design aimed at measuring differences in intuitive use outcomes. Results from mixed design ANOVA are discussed, along with implications for the field.",
"title": ""
},
{
"docid": "99d9dcef0e4441ed959129a2a705c88e",
"text": "Wikipedia has grown to a huge, multi-lingual source of encyclopedic knowledge. Apart from textual content, a large and everincreasing number of articles feature so-called infoboxes, which provide factual information about the articles’ subjects. As the different language versions evolve independently, they provide different information on the same topics. Correspondences between infobox attributes in different language editions can be leveraged for several use cases, such as automatic detection and resolution of inconsistencies in infobox data across language versions, or the automatic augmentation of infoboxes in one language with data from other language versions. We present an instance-based schema matching technique that exploits information overlap in infoboxes across different language editions. As a prerequisite we present a graph-based approach to identify articles in different languages representing the same real-world entity using (and correcting) the interlanguage links in Wikipedia. To account for the untyped nature of infobox schemas, we present a robust similarity measure that can reliably quantify the similarity of strings with mixed types of data. The qualitative evaluation on the basis of manually labeled attribute correspondences between infoboxes in four of the largest Wikipedia editions demonstrates the effectiveness of the proposed approach. 1. Entity and Attribute Matching across Wikipedia Languages Wikipedia is a well-known public encyclopedia. While most of the information contained in Wikipedia is in textual form, the so-called infoboxes provide semi-structured, factual information. They are displayed as tables in many Wikipedia articles and state basic facts about the subject. There are different templates for infoboxes, each targeting a specific category of articles and providing fields for properties that are relevant for the respective subject type. For example, in the English Wikipedia, there is a class of infoboxes about companies, one to describe the fundamental facts about countries (such as their capital and population), one for musical artists, etc. However, each of the currently 281 language versions1 defines and maintains its own set of infobox classes with their own set of properties, as well as providing sometimes different values for corresponding attributes. Figure 1 shows extracts of the English and German infoboxes for the city of Berlin. The arrows indicate matches between properties. It is already apparent that matching purely based on property names is futile: The terms Population density and Bevölkerungsdichte or Governing parties and Reg. Parteien have no textual similarity. However, their property values are more revealing: <3,857.6/km2> and <3.875 Einw. je km2> or <SPD/Die Linke> and <SPD und Die Linke> have a high textual similarity, respectively. Email addresses: daniel.rinser@alumni.hpi.uni-potsdam.de (Daniel Rinser), dustin.lange@hpi.uni-potsdam.de (Dustin Lange), naumann@hpi.uni-potsdam.de (Felix Naumann) 1as of March 2011 Our overall goal is to automatically find a mapping between attributes of infobox templates across different language versions. Such a mapping can be valuable for several different use cases: First, it can be used to increase the information quality and quantity in Wikipedia infoboxes, or at least help the Wikipedia communities to do so. Inconsistencies among the data provided by different editions for corresponding attributes could be detected automatically. 
For example, the infobox in the English article about Germany claims that the population is 81,799,600, while the German article specifies a value of 81,768,000 for the same country. Detecting such conflicts can help the Wikipedia communities to increase consistency and information quality across language versions. Further, the detected inconsistencies could be resolved automatically by fusing the data in infoboxes, as proposed by [1]. Finally, the coverage of information in infoboxes could be increased significantly by completing missing attribute values in one Wikipedia edition with data found in other editions. An infobox template does not describe a strict schema, so that we need to collect the infobox template attributes from the template instances. For the purpose of this paper, an infobox template is determined by the set of attributes that are mentioned in any article that reference the template. The task of matching attributes of corresponding infoboxes across language versions is a specific application of schema matching. Automatic schema matching is a highly researched topic and numerous different approaches have been developed for this task as surveyed in [2] and [3]. Among these, schema-level matchers exploit attribute labels, schema constraints, and structural similarities of the schemas. However, in the setting of Wikipedia infoboxes these techniques are not useful, because infobox definitions only describe a rather loose list of supported properties, as opposed to a strict relational or XML schema. Attribute names in infoboxes are not always sound, often cryptic or abbreviated, and the exact semantics of the attributes are not always clear from their names alone. Moreover, due to our multi-lingual scenario, attributes are labeled in different natural languages. This latter problem might be tackled by employing bilingual dictionaries, if the previously mentioned issues were solved. Due to the flat nature of infoboxes and their lack of constraints or types, other constraint-based matching approaches must fail. On the other hand, there are instance-based matching approaches, which leverage instance data of multiple data sources. Here, the basic assumption is that similarity of the instances of the attributes reflects the similarity of the attributes. To assess this similarity, instance-based approaches usually analyze the attributes of each schema individually, collecting information about value patterns and ranges, amongst others, such as in [4]. A different, duplicate-based approach exploits information overlap across data sources [5]. The idea there is to find two representations of same real-world objects (duplicates) and then suggest mappings between attributes that have the same or similar values. This approach has one important requirement: The data sources need to share a sufficient amount of common instances (or tuples, in a relational setting), i.e., instances describing the same real-world entity. Furthermore, the duplicates either have to be known in advance or have to be discovered despite a lack of knowledge of corresponding attributes. The approach presented in this article is based on such duplicate-based matching. Our approach consists of three steps: Entity matching, template matching, and attribute matching. The process is visualized in Fig. 2.
(1) Entity matching: First, we find articles in different language versions that describe the same real-world entity. In particular, we make use of the crosslanguage links that are present for most Wikipedia articles and provide links between same entities across different language versions. We present a graph-based approach to resolve conflicts in the linking information. (2) Template matching: We determine a cross-lingual mapping between infobox templates by analyzing template co-occurrences in the language versions. (3) Attribute matching: The infobox attribute values of the corresponding articles are compared to identify matching attributes across the language versions, assuming that the values of corresponding attributes are highly similar for the majority of article pairs. As a first step we analyze the quality of Wikipedia’s interlanguage links in Sec. 2. We show how to use those links to create clusters of semantically equivalent entities with only one entity from each language in Sec. 3. This entity matching approach is evaluated in Sec. 4. In Sec. 5, we show how a crosslingual mapping between infobox templates can be established. The infobox attribute matching approach is described in Sec. 6 and in turn evaluated in Sec. 7. Related work in the areas of ILLs, concept identification, and infobox attribute matching is discussed in Sec. 8. Finally, Sec. 9 draws conclusions and discusses future work. 2. Interlanguage Links Our basic assumption is that there is a considerable amount of information overlap across the different Wikipedia language editions. Our infobox matching approach presented later requires mappings between articles in different language editions",
"title": ""
},
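The passage above matches infobox attributes across language editions by exploiting value overlap on articles already known (via interlanguage links) to describe the same entity. The sketch below illustrates that duplicate-based idea: average a string similarity over the attribute-value pairs of matched entity pairs and greedily keep the best-scoring attribute correspondences. The similarity measure, threshold, and toy Berlin-style infoboxes are stand-ins, not the paper's robust mixed-type measure.

```python
from difflib import SequenceMatcher
from itertools import product
from collections import defaultdict

def value_sim(a, b):
    """Crude mixed-type similarity: compare normalised strings."""
    a, b = str(a).lower().strip(), str(b).lower().strip()
    return SequenceMatcher(None, a, b).ratio()

def match_attributes(entity_pairs, threshold=0.5):
    """Duplicate-based attribute matching sketch over matched entity pairs."""
    sims, counts = defaultdict(float), defaultdict(int)
    for box_a, box_b in entity_pairs:
        for (ka, va), (kb, vb) in product(box_a.items(), box_b.items()):
            sims[(ka, kb)] += value_sim(va, vb)
            counts[(ka, kb)] += 1
    avg = {pair: s / counts[pair] for pair, s in sims.items()}
    mapping, used_a, used_b = [], set(), set()
    for (ka, kb), s in sorted(avg.items(), key=lambda kv: -kv[1]):
        if s >= threshold and ka not in used_a and kb not in used_b:
            mapping.append((ka, kb, round(s, 2)))
            used_a.add(ka)
            used_b.add(kb)
    return mapping

pairs = [
    ({"population": "3,769,495", "area": "891.7 km2"},
     {"Einwohner": "3.769.495", "Fläche": "891,7 km²"}),
]
print(match_attributes(pairs))
```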
{
"docid": "a1b5821ec18904ad805c57e6b478ef92",
"text": "To extract English name mentions, we apply a linear-chain CRFs model trained from ACE 20032005 corpora (Li et al., 2012a). For Chinese and Spanish, we use Stanford name tagger (Finkel et al., 2005). We also encode several regular expression based rules to extract poster name mentions in discussion forum posts. In this year’s task, person nominal mentions extraction is added. There are two major challenges: (1) Only person nominal mentions referring to specific, individual real-world entities need to be extracted. Therefore, a system should be able to distinguish specific and generic person nominal mentions; (2) within-document coreference resolution should be applied to clustering person nominial and name mentions. We apply heuristic rules to try to solve these two challenges: (1) We consider person nominal mentions that appear after indefinite articles (e.g., a/an) or conditional conjunctions (e.g., if ) as generic. The person nomnial mention extraction F1 score of this approach is around 46% for English training data. (2) For coreference resolution, if the closest mention of a person nominal mention is a name, then we consider they are coreferential. The accuracy of this approach is 67% using perfect mentions in English training data.",
"title": ""
},
{
"docid": "8ea17804db874a0434bd61c55bc83aab",
"text": "Some recent work in the field of Genetic Programming (GP) has been concerned with finding optimum representations for evolvable and efficient computer programs. In this paper, I describe a new GP system in which target programs run on a stack-based virtual machine. The system is shown to have certain advantages in terms of efficiency and simplicity of implementation, and for certain classes of problems, its effectiveness is shown to be comparable or superior to current methods.",
"title": ""
},
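The abstract above evaluates evolved programs on a stack-based virtual machine, which keeps every instruction sequence syntactically valid and cheap to interpret, a property that simplifies crossover and mutation. The sketch below is a minimal interpreter of that kind for arithmetic programs in postfix form; the instruction set and the convention of ignoring operators that would underflow the stack are illustrative choices, not the paper's exact design.

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def run(program, inputs):
    """Execute a flat instruction list left to right on a value stack."""
    stack = []
    for instr in program:
        if instr in OPS:
            if len(stack) >= 2:               # ignore ops with too few operands
                b, a = stack.pop(), stack.pop()
                stack.append(OPS[instr](a, b))
        elif isinstance(instr, str) and instr.startswith("x"):
            stack.append(inputs[int(instr[1:])])   # push input variable
        else:
            stack.append(float(instr))             # push constant
    return stack[-1] if stack else 0.0

# Program for x0 * x0 + x1 in postfix form.
program = ["x0", "x0", "*", "x1", "+"]
print(run(program, [3.0, 4.0]))   # -> 13.0
```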
{
"docid": "61cd88d56bcae85c12dde4c2920af2ec",
"text": "“Walk east on Flinders St/State Route 30 towards Market St; Turn right onto St Kilda Rd/Swanston St” vs. “Walk east on Flinders St/State Route 30 towards Market St; Turn right onto St Kilda Rd/Swanston St after Flinders Street Station, a yellow building with a green dome.” T1: <Flinders Street Station, front, Federation Square> T2: <Flinders Street Station, color, yellow> T3: <Flinders Street Station, has, green dome> Sent: Flinders Street Station is a yellow building with a green dome roof located in front of Federation Square",
"title": ""
},
{
"docid": "0b3291e5ddfdd51a75340b195b7ffbfe",
"text": "e Knowledge graph (KG) uses the triples to describe the facts in the real world. It has been widely used in intelligent analysis and applications. However, possible noises and conicts are inevitably introduced in the process of constructing. And the KG based tasks or applications assume that the knowledge in the KG is completely correct and inevitably bring about potential deviations. In this paper, we establish a knowledge graph triple trustworthiness measurement model that quantify their semantic correctness and the true degree of the facts expressed. e model is a crisscrossing neural network structure. It synthesizes the internal semantic information in the triples and the global inference information of the KG to achieve the trustworthiness measurement and fusion in the three levels of entity level, relationship level, and KG global level. We analyzed the validity of the model output condence values, and conducted experiments in the real-world dataset FB15K (from Freebase) for the knowledge graph error detection task. e experimental results showed that compared with other models, our model achieved signicant and consistent improvements.",
"title": ""
},
{
"docid": "a11b39c895f7a89b7d2df29126671057",
"text": "A typical NURBS surface model has a large percentage of superfluous control points that significantly interfere with the design process. This paper presents an algorithm for eliminating such superfluous control points, producing a T-spline. The algorithm can remove substantially more control points than competing methods such as B-spline wavelet decomposition. The paper also presents a new T-spline local refinement algorithm and answers two fundamental open questions on T-spline theory.",
"title": ""
},
{
"docid": "4b546f3bc34237d31c862576ecf63f9a",
"text": "Optimizing the internal supply chain for direct or production goods was a major element during the implementation of enterprise resource planning systems (ERP) which has taken place since the late 1980s. However, supply chains to the suppliers of indirect materials were not usually included due to low transaction volumes, low product values and low strategic importance of these goods. With the advent of the Internet, systems for streamlining indirect goods supply chains emerged and were adopted by many companies. In view of the paperprone processes in many companies, the implementation of these electronic procurement systems led to substantial improvement potentials. This research reports the quantitative and qualitative results of a benchmarking study which explores the use of the Internet in procurement (eProcurement). Among the major goals are to obtain more insight on how European and North American companies used and introduced eProcurement solutions as well as how these systems enhanced the procurement function. The analysis presents a heterogeneous picture and shows that all analyzed solutions emphasize different parts of the procurement and coordination process. Based on interviews and case studies the research proposes an initial set of generalized success factors which may improve future implementations and stimulate further success factor research.",
"title": ""
}
] |
scidocsrr
|
bb96da6f83753746b0a0a7f7b80623b1
|
A computer vision assisted system for autonomous forklift vehicles in real factory environment
|
[
{
"docid": "dbd7b707910d2b7ba0a3c4574a01bdaa",
"text": "Visual recognition for object grasping is a well-known challenge for robot automation in industrial applications. A typical example is pallet recognition in industrial environment for pick-and-place automated process. The aim of vision and reasoning algorithms is to help robots in choosing the best pallets holes location. This work proposes an application-based approach, which ful l all requirements, dealing with every kind of occlusions and light situations possible. Even some meaning noise (or meaning misunderstanding) is considered. A pallet model, with limited degrees of freedom, is described and, starting from it, a complete approach to pallet recognition is outlined. In the model we de ne both virtual and real corners, that are geometrical object proprieties computed by different image analysis operators. Real corners are perceived by processing brightness information directly from the image, while virtual corners are inferred at a higher level of abstraction. A nal reasoning stage selects the best solution tting the model. Experimental results and performance are reported in order to demonstrate the suitability of the proposed approach.",
"title": ""
}
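The passage above distinguishes "real corners", perceived directly from image brightness, from "virtual corners" inferred at a higher level of abstraction. The sketch below only illustrates the low-level perception step, using OpenCV's Harris corner detector on a synthetic image with two dark rectangles standing in for pallet holes; fitting the detected corners to the pallet model and inferring virtual corners are not shown, and all parameters and image contents are arbitrary.

```python
import numpy as np
import cv2

# Synthetic grey image: two dark rectangles standing in for pallet holes.
img = np.full((120, 240), 200, dtype=np.uint8)
img[40:80, 30:100] = 40
img[40:80, 140:210] = 40

# Low-level "real corner" detection from brightness via Harris corners.
response = cv2.cornerHarris(np.float32(img), blockSize=3, ksize=3, k=0.04)
ys, xs = np.where(response > 0.01 * response.max())
corners = list(zip(xs.tolist(), ys.tolist()))
print(f"{len(corners)} corner pixels, e.g. {corners[:4]}")
```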
] |
[
{
"docid": "1f02f9dae964a7e326724faa79f5ddc3",
"text": "The purpose of this review was to examine published research on small-group development done in the last ten years that would constitute an empirical test of Tuckman’s (1965) hypothesis that groups go through these stages of “forming,” “storming,” “norming,” and “performing.” Of the twenty-two studies reviewed, only one set out to directly test this hypothesis, although many of the others could be related to it. Following a review of these studies, a fifth stage, “adjourning.” was added to the hypothesis, and more empirical work was recommended.",
"title": ""
},
{
"docid": "9c3050cca4deeb2d94ae5cff883a2d68",
"text": "High speed, low latency obstacle avoidance is essential for enabling Micro Aerial Vehicles (MAVs) to function in cluttered and dynamic environments. While other systems exist that do high-level mapping and 3D path planning for obstacle avoidance, most of these systems require high-powered CPUs on-board or off-board control from a ground station. We present a novel entirely on-board approach, leveraging a light-weight low power stereo vision system on FPGA. Our approach runs at a frame rate of 60 frames a second on VGA-sized images and minimizes latency between image acquisition and performing reactive maneuvers, allowing MAVs to fly more safely and robustly in complex environments. We also suggest our system as a light-weight safety layer for systems undertaking more complex tasks, like mapping the environment. Finally, we show our algorithm implemented on a lightweight, very computationally constrained platform, and demonstrate obstacle avoidance in a variety of environments.",
"title": ""
},
{
"docid": "d43dc521d3f0f17ccd4840d6081dcbfe",
"text": "In Vehicular Ad hoc NETworks (VANETs), authentication is a crucial security service for both inter-vehicle and vehicle-roadside communications. On the other hand, vehicles have to be protected from the misuse of their private data and the attacks on their privacy, as well as to be capable of being investigated for accidents or liabilities from non-repudiation. In this paper, we investigate the authentication issues with privacy preservation and non-repudiation in VANETs. We propose a novel framework with preservation and repudiation (ACPN) for VANETs. In ACPN, we introduce the public-key cryptography (PKC) to the pseudonym generation, which ensures legitimate third parties to achieve the non-repudiation of vehicles by obtaining vehicles' real IDs. The self-generated PKCbased pseudonyms are also used as identifiers instead of vehicle IDs for the privacy-preserving authentication, while the update of the pseudonyms depends on vehicular demands. The existing ID-based signature (IBS) scheme and the ID-based online/offline signature (IBOOS) scheme are used, for the authentication between the road side units (RSUs) and vehicles, and the authentication among vehicles, respectively. Authentication, privacy preservation, non-repudiation and other objectives of ACPN have been analyzed for VANETs. Typical performance evaluation has been conducted using efficient IBS and IBOOS schemes. We show that the proposed ACPN is feasible and adequate to be used efficiently in the VANET environment.",
"title": ""
},
{
"docid": "8ccb5aeb084c9a6223dc01fa296d908e",
"text": "Effective chronic disease management is essential to improve positive health outcomes, and incentive strategies are useful in promoting self-care with longevity. Gamification, applied with mHealth (mobile health) applications, has the potential to better facilitate patient self-management. This review article addresses a knowledge gap around the effective use of gamification design principles, or mechanics, in developing mHealth applications. Badges, leaderboards, points and levels, challenges and quests, social engagement loops, and onboarding are mechanics that comprise gamification. These mechanics are defined and explained from a design and development perspective. Health and fitness applications with gamification mechanics include: bant which uses points, levels, and social engagement, mySugr which uses challenges and quests, RunKeeper which uses leaderboards as well as social engagement loops and onboarding, Fitocracy which uses badges, and Mango Health, which uses points and levels. Specific design considerations are explored, an example of the efficacy of a gamified mHealth implementation in facilitating improved self-management is provided, limitations to this work are discussed, a link between the principles of gaming and gamification in health and wellness technologies is provided, and suggestions for future work are made. We conclude that gamification could be leveraged in developing applications with the potential to better facilitate self-management in persons with chronic conditions.",
"title": ""
},
{
"docid": "00d44e09b62be682b902b01a3f3a56c2",
"text": "A novel approach is presented to efficiently render local subsurface scattering effects. We introduce an importance sampling scheme for a practical subsurface scattering model. It leads to a simple and efficient rendering algorithm, which operates in image-space, and which is even amenable for implementation on graphics hardware. We demonstrate the applicability of our technique to the problem of skin rendering, for which the subsurface transport of light typically remains local. Our implementation shows that plausible images can be rendered interactively using hardware acceleration.",
"title": ""
},
{
"docid": "ade9860157680b2ca6820042f0cda302",
"text": "This chapter has two main objectives: to review influential ideas and findings in the literature and to outline the organization and content of the volume. The first part of the chapter lays a conceptual and empirical foundation for other chapters in the volume. Specifically, the chapter defines and distinguishes the key concepts of prejudice, stereotypes, and discrimination, highlighting how bias can occur at individual, institutional, and cultural levels. We also review different theoretical perspectives on these phenomena, including individual differences, social cognition, functional relations between groups, and identity concerns. We offer a broad overview of the field, charting how this area has developed over previous decades and identify emerging trends and future directions. The second part of the chapter focuses specifically on the coverage of the area in the present volume. It explains the organization of the book and presents a brief synopsis of the chapters in the volume. Throughout psychology’s history, researchers have evinced strong interest in understanding prejudice, stereotyping, and discrimination (Brewer & Brown, 1998; Dovidio, 2001; Duckitt, 1992; Fiske, 1998), as well as the phenomenon of intergroup bias more generally (Hewstone, Rubin, & Willis, 2002). Intergroup bias generally refers to the systematic tendency to evaluate one’s own membership group (the ingroup) or its members more favorably than a non-membership group (the outgroup) or its members. These topics have a long history in the disciplines of anthropology and sociology (e.g., Sumner, 1906). However, social psychologists, building on the solid foundations of Gordon Allport’s (1954) masterly volume, The Nature of Prejudice, have developed a systematic and more nuanced analysis of bias and its associated phenomena. Interest in prejudice, stereotyping, and discrimination is currently shared by allied disciplines such as sociology and political science, and emerging disciplines such as neuroscience. The practical implications of this 4 OVERVIEW OF THE TOPIC large body of research are widely recognized in the law (Baldus, Woodworth, & Pulaski, 1990; Vidmar, 2003), medicine (Institute of Medicine, 2003), business (e.g., Brief, Dietz, Cohen, et al., 2000), the media, and education (e.g., Ben-Ari & Rich, 1997; Hagendoorn &",
"title": ""
},
{
"docid": "a89cd3351d6a427d18a461893949e0d7",
"text": "Touch is a powerful vehicle for communication between humans. The way we touch (how) embraces and mediates certain emotions such as anger, joy, fear, or love. While this phenomenon is well explored for human interaction, HCI research is only starting to uncover the fine granularity of sensory stimulation and responses in relation to certain emotions. Within this paper we present the findings from a study exploring the communication of emotions through a haptic system that uses tactile stimulation in mid-air. Here, haptic descriptions for specific emotions (e.g., happy, sad, excited, afraid) were created by one group of users to then be reviewed and validated by two other groups of users. We demonstrate the non-arbitrary mapping between emotions and haptic descriptions across three groups. This points to the huge potential for mediating emotions through mid-air haptics. We discuss specific design implications based on the spatial, directional, and haptic parameters of the created haptic descriptions and illustrate their design potential for HCI based on two design ideas.",
"title": ""
},
{
"docid": "03e267aeeef5c59aab348775d264afce",
"text": "Visual relations, such as person ride bike and bike next to car, offer a comprehensive scene understanding of an image, and have already shown their great utility in connecting computer vision and natural language. However, due to the challenging combinatorial complexity of modeling subject-predicate-object relation triplets, very little work has been done to localize and predict visual relations. Inspired by the recent advances in relational representation learning of knowledge bases and convolutional object detection networks, we propose a Visual Translation Embedding network (VTransE) for visual relation detection. VTransE places objects in a low-dimensional relation space where a relation can be modeled as a simple vector translation, i.e., subject + predicate ≈ object. We propose a novel feature extraction layer that enables object-relation knowledge transfer in a fully-convolutional fashion that supports training and inference in a single forward/backward pass. To the best of our knowledge, VTransE is the first end-toend relation detection network. We demonstrate the effectiveness of VTransE over other state-of-the-art methods on two large-scale datasets: Visual Relationship and Visual Genome. Note that even though VTransE is a purely visual model, it is still competitive to the Lu’s multi-modal model with language priors [27].",
"title": ""
},
{
"docid": "c678ea5e9bc8852ec80a8315a004c7f0",
"text": "Educators, researchers, and policy makers have advocated student involvement for some time as an essential aspect of meaningful learning. In the past twenty years engineering educators have implemented several means of better engaging their undergraduate students, including active and cooperative learning, learning communities, service learning, cooperative education, inquiry and problem-based learning, and team projects. This paper focuses on classroom-based pedagogies of engagement, particularly cooperative and problem-based learning. It includes a brief history, theoretical roots, research support, summary of practices, and suggestions for redesigning engineering classes and programs to include more student engagement. The paper also lays out the research ahead for advancing pedagogies aimed at more fully enhancing students’ involvement in their learning.",
"title": ""
},
{
"docid": "ec4638bad4caf17de83ac3557254c4bf",
"text": "Explaining policies of Markov Decision Processes (MDPs) is complicated due to their probabilistic and sequential nature. We present a technique to explain policies for factored MDP by populating a set of domain-independent templates. We also present a mechanism to determine a minimal set of templates that, viewed together, completely justify the policy. Our explanations can be generated automatically at run-time with no additional effort required from the MDP designer. We demonstrate our technique using the problems of advising undergraduate students in their course selection and assisting people with dementia in completing the task of handwashing. We also evaluate our explanations for courseadvising through a user study involving students.",
"title": ""
},
{
"docid": "fe3a3ffab9a98cf8f4f71c666383780c",
"text": "We present a new dataset and model for textual entailment, derived from treating multiple-choice question-answering as an entailment problem. SCITAIL is the first entailment set that is created solely from natural sentences that already exist independently “in the wild” rather than sentences authored specifically for the entailment task. Different from existing entailment datasets, we create hypotheses from science questions and the corresponding answer candidates, and premises from relevant web sentences retrieved from a large corpus. These sentences are often linguistically challenging. This, combined with the high lexical similarity of premise and hypothesis for both entailed and non-entailed pairs, makes this new entailment task particularly difficult. The resulting challenge is evidenced by state-of-the-art textual entailment systems achieving mediocre performance on SCITAIL, especially in comparison to a simple majority class baseline. As a step forward, we demonstrate that one can improve accuracy on SCITAIL by 5% using a new neural model that exploits linguistic structure.",
"title": ""
},
{
"docid": "369746e53baad6fef5df42935fb5c516",
"text": "SWOT analysis is an established method for assisting the formulation of strategy. An application to strategy formulation and its incorporation into the strategic development process at the University of Warwick is described. The application links SWOT analysis to resource-based planning, illustrates it as an iterative rather than a linear process and embeds it within the overall planning process. Lessons are drawn both for the University and for the strategy formulation process itself. 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f35007fdca9c35b4c243cb58bd6ede7a",
"text": "Photovoltaic Thermal Collector (PVT) is a hybrid generator which converts solar radiation into useful electric and thermal energies simultaneously. This paper gathers all PVT sub-models in order to form a unique dynamic model that reveals PVT parameters interactions. As PVT is a multi-input/output/output system, a state space model based on energy balance equations is developed in order to analyze and assess the parameters behaviors and correlations of PVT constituents. The model simulation is performed using LabVIEW Software. The simulation shows the impact of the fluid flow rate variation on the collector efficiencies (thermal and electrical).",
"title": ""
},
{
"docid": "634c58784820e70145b417f51414fc96",
"text": "A considerable number of studies have been undertaken on using smart card data to analyse urban mobility. Most of these studies aim to identify recurrent passenger habits, reveal mobility patterns, reconstruct and predict passenger flows, etc. Forecasting mobility demand is a central problem for public transport authorities and operators alike. It is the first step to efficient allocation and optimisation of available resources. This paper explores an innovative approach to forecasting dynamic Origin-Destination (OD) matrices in a subway network using long Short-term Memory (LSTM) recurrent neural networks. A comparison with traditional approaches, such as calendar methodology or Vector Autoregression is conducted on a real smart card dataset issued from the public transport network of Rennes Métropole, France. The obtained results show that reliable short-term prediction (over a 15 minutes time horizon) of OD pairs can be achieved with the proposed approach. We also experiment with the effect of taking into account additional data about OD matrices of nearby transport systems (buses in this case) on the prediction accuracy.",
"title": ""
},
{
"docid": "1f27caaaeae8c82db6a677f66f2dee74",
"text": "State of the art visual SLAM systems have recently been presented which are capable of accurate, large-scale and real-time performance, but most of these require stereo vision. Important application areas in robotics and beyond open up if similar performance can be demonstrated using monocular vision, since a single camera will always be cheaper, more compact and easier to calibrate than a multi-camera rig. With high quality estimation, a single camera moving through a static scene of course effectively provides its own stereo geometry via frames distributed over time. However, a classic issue with monocular visual SLAM is that due to the purely projective nature of a single camera, motion estimates and map structure can only be recovered up to scale. Without the known inter-camera distance of a stereo rig to serve as an anchor, the scale of locally constructed map portions and the corresponding motion estimates is therefore liable to drift over time. In this paper we describe a new near real-time visual SLAM system which adopts the continuous keyframe optimisation approach of the best current stereo systems, but accounts for the additional challenges presented by monocular input. In particular, we present a new pose-graph optimisation technique which allows for the efficient correction of rotation, translation and scale drift at loop closures. Especially, we describe the Lie group of similarity transformations and its relation to the corresponding Lie algebra. We also present in detail the system’s new image processing front-end which is able accurately to track hundreds of features per frame, and a filter-based approach for feature initialisation within keyframe-based SLAM. Our approach is proven via large-scale simulation and real-world experiments where a camera completes large looped trajectories.",
"title": ""
},
{
"docid": "71c31f41d116a51786a4e8ded2c5fb87",
"text": "Targeting CTLA-4 represents a new type of immunotherapeutic approach, namely immune checkpoint inhibition. Blockade of CTLA-4 by ipilimumab was the first strategy to achieve a significant clinical benefit for late-stage melanoma patients in two phase 3 trials. These results fueled the notion of immunotherapy being the breakthrough strategy for oncology in 2013. Subsequently, many trials have been set up to test various immune checkpoint modulators in malignancies, not only in melanoma. In this review, recent new ideas about the mechanism of action of CTLA-4 blockade, its current and future therapeutic use, and the intensive search for biomarkers for response will be discussed. Immune checkpoint blockade, targeting CTLA-4 and/or PD-1/PD-L1, is currently the most promising systemic therapeutic approach to achieve long-lasting responses or even cure in many types of cancer, not just in patients with melanoma.",
"title": ""
},
{
"docid": "176dc97bd2ce3c1fd7d3a8d6913cff70",
"text": "Packet broadcasting is a form of data communications architecture which can combine the features of packet switching with those of broadcast channels for data communication networks. Much of the basic theory of packet broadcasting has been presented as a byproduct in a sequence of papers with a distinctly practical emphasis. In this paper we provide a unified presentation of packet broadcasting theory. In Section I1 we introduce the theory of packet broadcasting data networks. In Section I11 we provide some theoretical results dealing with the performance of a packet broadcasting network when the users of the network have a variety of data rates. In Section IV we deal with packet broadcasting networks distributed in space, and in Section V we derive some properties of power-limited packet broadcasting channels,showing that the throughput of such channels can approach that of equivalent point-to-point channels.",
"title": ""
},
{
"docid": "8d350db000f7a2b1481b9cad6ce318f1",
"text": "Purpose – The purpose of this research paper is to offer a solution to differentiate supply chain planning for products with different demand features and in different life-cycle phases. Design/methodology/approach – A normative framework for selecting a planning approach was developed based on a literature review of supply chain differentiation and supply chain planning. Explorative mini-cases from three companies – Vaisala, Mattel, Inc. and Zara – were investigated to identify the features of their innovative planning solutions. The selection framework was applied to the case company’s new business unit dealing with a product portfolio of highly innovative products as well as commodity items. Findings – The need for planning differentiation is essential for companies with large product portfolios operating in volatile markets. The complexity of market, channel and supply networks makes supply chain planning more intricate. The case company provides an example of using the framework for rough segmentation to differentiate planning. Research limitations/implications – The paper widens Fisher’s supply chain selection framework to consider the aspects of planning. Practical implications – Despite substantial resources being used, planning results are often not reliable or consistent enough to ensure cost efficiency and adequate customer service. Therefore there is a need for management to critically consider current planning solutions. Originality/value – The procedure outlined in this paper is a first illustrative example of the type of processes needed to monitor and select the right planning approach.",
"title": ""
},
{
"docid": "4b013b69e174914aafc09100e182dd14",
"text": "The network of patents connected by citations is an evolving graph, which provides a representation of the innovation process. A patent citing another implies that the cited patent reflects a piece of previously existing knowledge that the citing patent builds upon. A methodology presented here (1) identifies actual clusters of patents: i.e., technological branches, and (2) gives predictions about the temporal changes of the structure of the clusters. A predictor, called the citation vector, is defined for characterizing technological development to show how a patent cited by other patents belongs to various industrial fields. The clustering technique adopted is able to detect the new emerging recombinations, and predicts emerging new technology clusters. The predictive ability of our new method is illustrated on the example of USPTO subcategory 11, Agriculture, Food, Textiles. A cluster of patents is determined based on citation data up to 1991, which shows significant overlap of the class 442 formed at the beginning of 1997. These new tools of predictive analytics could support policy decision making processes in science and technology, and help formulate recommendations for action.",
"title": ""
},
{
"docid": "ef8a61d3ff3aad461c57fe893e0b5bb6",
"text": "In this paper, we propose an underwater wireless sensor network (UWSN) named SOUNET where sensor nodes form and maintain a tree-topological network for data gathering in a self-organized manner. After network topology discovery via packet flooding, the sensor nodes consistently update their parent node to ensure the best connectivity by referring to the timevarying neighbor tables. Such a persistent and self-adaptive method leads to high network connectivity without any centralized control, even when sensor nodes are added or unexpectedly lost. Furthermore, malfunctions that frequently happen in self-organized networks such as node isolation and closed loop are resolved in a simple way. Simulation results show that SOUNET outperforms other conventional schemes in terms of network connectivity, packet delivery ratio (PDR), and energy consumption throughout the network. In addition, we performed an experiment at the Gyeongcheon Lake in Korea using commercial underwater modems to verify that SOUNET works well in a real environment.",
"title": ""
}
] |
scidocsrr
|
19e407b8d995f901f24f776c36cc6bf9
|
Image quality quantification for fingerprints using quality-impairment assessment
|
[
{
"docid": "c1b79f29ce23b2d0ba97928831302e18",
"text": "Quality assessment of biometric fingerprint images is necessary to ensure high biometric performance in biometric recognition systems. We relate the quality of a fingerprint sample to the biometric performance to ensure an objective and performance oriented benchmark. The proposed quality metric is based on Gabor filter responses and is evaluated against eight contemporary quality estimation methods on four datasets using sample utility derived from the separation of genuine and imposter distributions as benchmark. The proposed metric shows performance and consistency approaching that of the composite NFIQ quality assessment algorithm and is thus a candidate for inclusion in a feature vector introducing the NFIQ 2.0 metric.",
"title": ""
},
{
"docid": "1a9be0a664da314c143ca430bd6f4502",
"text": "Fingerprint image quality is an important factor in the perf ormance of Automatic Fingerprint Identification Systems(AFIS). It is used to evaluate the system performance, assess enrollment acceptability, and evaluate fingerprint sensors. This paper presents a novel methodology for fingerp rint image quality measurement. We propose limited ring-wedge spectral measu r to estimate the global fingerprint image features, and inhomogeneity with d rectional contrast to estimate local fingerprint image features. Experimental re sults demonstrate the effectiveness of our proposal.",
"title": ""
}
] |
[
{
"docid": "32417703b8291a5cdcc3c9eaabbdb99c",
"text": "Purpose – The aim of this paper is to identify the quality determinants for education services provided by higher education institutions (HEIs) in Greece and to measure their relative importance from the students’ points of view. Design/mthodology/approach – A multi-criteria decision-making methodology was used for assessing the relative importance of quality determinants that affect student satisfaction. More specifically, the analytical hierarchical process (AHP) was used in order to measure the relative weight of each quality factor. Findings – The relative weights of the factors that contribute to the quality of educational services as it is perceived by students was measured. Research limitations/implications – The research is based on the questionnaire of the Hellenic Quality Assurance Agency for Higher Education. This implies that the measured weights are related mainly to questions posed in this questionnaire. However, the applied method (AHP) can be used to assess different quality determinants. Practical implications – The outcome of this study can be used in order to quantify internal quality assessment of HEIs. More specifically, the outcome can be directly used by HEIs for assessing quality as perceived by students. Originality/value – The paper attempts to develop insights into comparative evaluations of quality determinants as they are perceived by students.",
"title": ""
},
{
"docid": "f8b24b0e8b440643a5fb49166cbbd96b",
"text": "A Proportional-Integral (PI) based Maximum Power Point Tracking (MPPT) control algorithm is proposed in this study where it is applied to a Buck-Boost converter. It is aimed to combine regular PI control and MPPT technique to enhance the generated power from photovoltaic PV) panels. The perturb and observe (P&O) technique is used as the MPPT control algorithm. The study proposes to reduce converter output oscillation owing to implemented MPPT control technique with additional PI observer. Furthermore aims to optimize output power using PI voltage mode closed-loop structure.",
"title": ""
},
{
"docid": "47b4b22cee9d5693c16be296afe61982",
"text": "In this work we introduce a fully end-to-end approach for action detection in videos that learns to directly predict the temporal bounds of actions. Our intuition is that the process of detecting actions is naturally one of observation and refinement: observing moments in video, and refining hypotheses about when an action is occurring. Based on this insight, we formulate our model as a recurrent neural network-based agent that interacts with a video over time. The agent observes video frames and decides both where to look next and when to emit a prediction. Since backpropagation is not adequate in this non-differentiable setting, we use REINFORCE to learn the agent's decision policy. Our model achieves state-of-the-art results on the THUMOS'14 and ActivityNet datasets while observing only a fraction (2% or less) of the video frames.",
"title": ""
},
{
"docid": "e33b3ebfc46c371253cf7f68adbbe074",
"text": "Although backward folding of the epiglottis is one of the signal events of the mammalian adult swallow, the epiglottis does not fold during the infant swallow. How this functional change occurs is unknown, but we hypothesize that a change in swallow mechanism occurs with maturation, prior to weaning. Using videofluoroscopy, we found three characteristic patterns of swallowing movement at different ages in the pig: an infant swallow, a transitional swallow and a post-weaning (juvenile or adult) swallow. In animals of all ages, the dorsal region of the epiglottis and larynx was held in an intranarial position by a muscular sphincter formed by the palatopharyngeal arch. In the infant swallow, increasing pressure in the oropharynx forced a liquid bolus through the piriform recesses on either side of a relatively stationary epiglottis into the esophagus. As the infant matured, the palatopharyngeal arch and the soft palate elevated at the beginning of the swallow, so exposing a larger area of the epiglottis to bolus pressure. In transitional swallows, the epiglottis was tilted backward relatively slowly by a combination of bolus pressure and squeezing of the epiglottis by closure of the palatopharyngeal sphincter. The bolus, however, traveled alongside but never over the tip of the epiglottis. In the juvenile swallow, the bolus always passed over the tip of the epiglottis. The tilting of the epiglottis resulted from several factors, including the action of the palatopharyngeal sphincter, higher bolus pressure exerted on the epiglottis and the allometry of increased size. In both transitional and juvenile swallows, the subsequent relaxation of the palatopharyngeal sphincter released the epiglottis, which sprang back to its original intranarial position.",
"title": ""
},
{
"docid": "d1f771fd1b0f8e5d91bbf65bc19aeb54",
"text": "Web-based systems are often a composition of infrastructure components, such as web servers and databases, and of applicationspecific code, such as HTML-embedded scripts and server-side applications. While the infrastructure components are usually developed by experienced programmers with solid security skills, the application-specific code is often developed under strict time constraints by programmers with little security training. As a result, vulnerable web-applications are deployed and made available to the Internet at large, creating easilyexploitable entry points for the compromise of entire networks. Web-based applications often rely on back-end database servers to manage application-specific persistent state. The data is usually extracted by performing queries that are assembled using input provided by the users of the applications. If user input is not sanitized correctly, it is possible to mount a variety of attacks that leverage web-based applications to compromise the security of back-end databases. Unfortunately, it is not always possible to identify these attacks using signature-based intrusion detection systems, because of the ad hoc nature of many web-based applications. Signatures are rarely written for this class of applications due to the substantial investment of time and expertise this would require. We have developed an anomaly-based system that learns the profiles of the normal database access performed by web-based applications using a number of different models. These models allow for the detection of unknown attacks with reduced false positives and limited overhead. In addition, our solution represents an improvement with respect to previous approaches because it reduces the possibility of executing SQL-based mimicry attacks.",
"title": ""
},
{
"docid": "505a9b6139e8cbf759652dc81f989de9",
"text": "SQL injection attacks, a class of injection flaw in which specially crafted input strings leads to illegal queries to databases, are one of the topmost threats to web applications. A Number of research prototypes and commercial products that maintain the queries structure in web applications have been developed. But these techniques either fail to address the full scope of the problem or have limitations. Based on our observation that the injected string in a SQL injection attack is interpreted differently on different databases. A characteristic diagnostic feature of SQL injection attacks is that they change the intended structure of queries issued. Pattern matching is a technique that can be used to identify or detect any anomaly packet from a sequential action. Injection attack is a method that can inject any kind of malicious string or anomaly string on the original string. Most of the pattern based techniques are used static analysis and patterns are generated from the attacked statements. In this paper, we proposed a detection and prevention technique for preventing SQL Injection Attack (SQLIA) using Aho–Corasick pattern matching algorithm. In this paper, we proposed an overview of the architecture. In the initial stage evaluation, we consider some sample of standard attack patterns and it shows that the proposed algorithm is works well against the SQL Injection Attack. Keywords—SQL Injection Attack; Pattern matching; Static Pattern; Dynamic Pattern",
"title": ""
},
{
"docid": "e1d635202eb482e49ff736fd37d161ac",
"text": "Can people feel worse off as the options they face increase? The present studies suggest that some people--maximizers--can. Study 1 reported a Maximization Scale, which measures individual differences in desire to maximize. Seven samples revealed negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret. Study 2 found maximizers less satisfied than nonmaximizers (satisficers) with consumer decisions, and more likely to engage in social comparison. Study 3 found maximizers more adversely affected by upward social comparison. Study 4 found maximizers more sensitive to regret and less satisfied in an ultimatum bargaining game. The interaction between maximizing and choice is discussed in terms of regret, adaptation, and self-blame.",
"title": ""
},
{
"docid": "48036770f56e84df8b05c198e8a89018",
"text": "Advances in low power VLSI design, along with the potentially low duty cycle of wireless sensor nodes open up the possibility of powering small wireless computing devices from scavenged ambient power. A broad review of potential power scavenging technologies and conventional energy sources is first presented. Low-level vibrations occurring in common household and office environments as a potential power source are studied in depth. The goal of this paper is not to suggest that the conversion of vibrations is the best or most versatile method to scavenge ambient power, but to study its potential as a viable power source for applications where vibrations are present. Different conversion mechanisms are investigated and evaluated leading to specific optimized designs for both capacitive MicroElectroMechancial Systems (MEMS) and piezoelectric converters. Simulations show that the potential power density from piezoelectric conversion is significantly higher. Experiments using an off-the-shelf PZT piezoelectric bimorph verify the accuracy of the models for piezoelectric converters. A power density of 70 mW/cm has been demonstrated with the PZT bimorph. Simulations show that an optimized design would be capable of 250 mW/cm from a vibration source with an acceleration amplitude of 2.5 m/s at 120 Hz. q 2002 Elsevier Science B.V.. All rights reserved.",
"title": ""
},
{
"docid": "4acfb49be406de472af9080d3cdc6fa4",
"text": "Evolution provides a creative fount of complex and subtle adaptations that often surprise the scientists who discover them. However, the creativity of evolution is not limited to the natural world: artificial organisms evolving in computational environments have also elicited surprise and wonder from the researchers studying them. The process of evolution is an algorithmic process that transcends the substrate in which it occurs. Indeed, many researchers in the field of digital evolution can provide examples of how their evolving algorithms and organisms have creatively subverted their expectations or intentions, exposed unrecognized bugs in their code, produced unexpectedly adaptations, or engaged in behaviors and outcomes uncannily convergent with ones found in nature. Such stories routinely reveal surprise and creativity by evolution in these digital worlds, but they rarely fit into the standard scientific narrative. Instead they are often treated as mere obstacles to be overcome, rather than results that warrant study in their own right. Bugs are fixed, experiments are refocused, and one-off surprises are collapsed into a single data point. The stories themselves are traded among researchers through oral tradition, but that mode of information transmission is inefficient and prone to error and outright loss. Moreover, the fact that these stories tend to be shared only among practitioners means that many natural scientists do not realize how interesting and lifelike digital organisms are and how natural their evolution can be. To our knowledge, no collection of such anecdotes has been published before. This paper is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. In doing so we also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may indeed be a universal property of all complex evolving systems.",
"title": ""
},
{
"docid": "059b8861a00bb0246a07fa339b565079",
"text": "Recognizing facial action units (AUs) from spontaneous facial expressions is still a challenging problem. Most recently, CNNs have shown promise on facial AU recognition. However, the learned CNNs are often overfitted and do not generalize well to unseen subjects due to limited AU-coded training images. We proposed a novel Incremental Boosting CNN (IB-CNN) to integrate boosting into the CNN via an incremental boosting layer that selects discriminative neurons from the lower layer and is incrementally updated on successive mini-batches. In addition, a novel loss function that accounts for errors from both the incremental boosted classifier and individual weak classifiers was proposed to fine-tune the IB-CNN. Experimental results on four benchmark AU databases have demonstrated that the IB-CNN yields significant improvement over the traditional CNN and the boosting CNN without incremental learning, as well as outperforming the state-of-the-art CNN-based methods in AU recognition. The improvement is more impressive for the AUs that have the lowest frequencies in the databases.",
"title": ""
},
{
"docid": "17321e451d7441c8a434c637237370a2",
"text": "In recent years, there are increasing interests in using path identifiers (<inline-formula> <tex-math notation=\"LaTeX\">$\\it PIDs$ </tex-math></inline-formula>) as inter-domain routing objects. However, the <inline-formula> <tex-math notation=\"LaTeX\">$\\it PIDs$ </tex-math></inline-formula> used in existing approaches are static, which makes it easy for attackers to launch the distributed denial-of-service (DDoS) flooding attacks. To address this issue, in this paper, we present the design, implementation, and evaluation of dynamic PID (D-PID), a framework that uses <inline-formula> <tex-math notation=\"LaTeX\">$\\it PIDs$ </tex-math></inline-formula> negotiated between the neighboring domains as inter-domain routing objects. In D-PID, the <inline-formula> <tex-math notation=\"LaTeX\">$\\it PID$ </tex-math></inline-formula> of an inter-domain path connecting the two domains is kept secret and changes dynamically. We describe in detail how neighboring domains negotiate <inline-formula> <tex-math notation=\"LaTeX\">$\\it PIDs$ </tex-math></inline-formula> and how to maintain ongoing communications when <inline-formula> <tex-math notation=\"LaTeX\">$\\it PIDs$ </tex-math></inline-formula> change. We build a 42-node prototype comprised of six domains to verify D-PID’s feasibility and conduct extensive simulations to evaluate its effectiveness and cost. The results from both simulations and experiments show that D-PID can effectively prevent DDoS attacks.",
"title": ""
},
{
"docid": "0ba15705fcd12cb3efa17a6878c43606",
"text": "Voice has become an increasingly popular User Interaction (UI) channel, mainly contributing to the current trend of wearables, smart vehicles, and home automation systems. Voice assistants such as Alexa, Siri, and Google Now, have become our everyday fixtures, especially when/where touch interfaces are inconvenient or even dangerous to use, such as driving or exercising. The open nature of the voice channel makes voice assistants difficult to secure, and hence exposed to various threats as demonstrated by security researchers. To defend against these threats, we present VAuth, the first system that provides continuous authentication for voice assistants. VAuth is designed to fit in widely-adopted wearable devices, such as eyeglasses, earphones/buds and necklaces, where it collects the body-surface vibrations of the user and matches it with the speech signal received by the voice assistant's microphone. VAuth guarantees the voice assistant to execute only the commands that originate from the voice of the owner. We have evaluated VAuth with 18 users and 30 voice commands and find it to achieve 97% detection accuracy and less than 0.1% false positive rate, regardless of VAuth's position on the body and the user's language, accent or mobility. VAuth successfully thwarts various practical attacks, such as replay attacks, mangled voice attacks, or impersonation attacks. It also incurs low energy and latency overheads and is compatible with most voice assistants.",
"title": ""
},
{
"docid": "38715a7ba5efc87b47491d9ced8c8a31",
"text": "We propose a new method for fusing a LIDAR point cloud and camera-captured images in the deep convolutional neural network (CNN). The proposed method constructs a new layer called non-homogeneous pooling layer to transform features between bird view map and front view map. The sparse LIDAR point cloud is used to construct the mapping between the two maps. The pooling layer allows efficient fusion of the bird view and front view features at any stage of the network. This is favorable for the 3D-object detection using camera-LIDAR fusion in autonomous driving scenarios. A corresponding deep CNN is designed and tested on the KITTI[1] bird view object detection dataset, which produces 3D bounding boxes from the bird view map. The fusion method shows particular benefit for detection of pedestrians in the bird view compared to other fusion-based object detection networks.",
"title": ""
},
{
"docid": "2caf8a90640a98f3690785b6dd641e08",
"text": "This paper presents a simple, novel, yet very powerful approach for robust rotation-invariant texture classification based on random projection. The proposed sorted random projection maintains the strengths of random projection, in being computationally efficient and low-dimensional, with the addition of a straightforward sorting step to introduce rotation invariance. At the feature extraction stage, a small set of random measurements is extracted from sorted pixels or sorted pixel differences in local image patches. The rotation invariant random features are embedded into a bag-of-words model to perform texture classification, allowing us to achieve global rotation invariance. The proposed unconventional and novel random features are very robust, yet by leveraging the sparse nature of texture images, our approach outperforms traditional feature extraction methods which involve careful design and complex steps. We report extensive experiments comparing the proposed method to six state-of-the-art methods, RP, Patch, LBP, WMFS and the methods of Lazebnik et al. and Zhang et al., in texture classification on five databases: CUReT, Brodatz, UIUC, UMD and KTH-TIPS. Our approach leads to significant improvements in classification accuracy, producing consistently good results on each database, including what we believe to be the best reported results for Brodatz, UMD and KTH-TIPS. & 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "07153810148e93a0bc0b62a6de77594c",
"text": "Six healthy young male volunteers at a contract research organization were enrolled in the first phase 1 clinical trial of TGN1412, a novel superagonist anti-CD28 monoclonal antibody that directly stimulates T cells. Within 90 minutes after receiving a single intravenous dose of the drug, all six volunteers had a systemic inflammatory response characterized by a rapid induction of proinflammatory cytokines and accompanied by headache, myalgias, nausea, diarrhea, erythema, vasodilatation, and hypotension. Within 12 to 16 hours after infusion, they became critically ill, with pulmonary infiltrates and lung injury, renal failure, and disseminated intravascular coagulation. Severe and unexpected depletion of lymphocytes and monocytes occurred within 24 hours after infusion. All six patients were transferred to the care of the authors at an intensive care unit at a public hospital, where they received intensive cardiopulmonary support (including dialysis), high-dose methylprednisolone, and an anti-interleukin-2 receptor antagonist antibody. Prolonged cardiovascular shock and acute respiratory distress syndrome developed in two patients, who required intensive organ support for 8 and 16 days. Despite evidence of the multiple cytokine-release syndrome, all six patients survived. Documentation of the clinical course occurring over the 30 days after infusion offers insight into the systemic inflammatory response syndrome in the absence of contaminating pathogens, endotoxin, or underlying disease.",
"title": ""
},
{
"docid": "af691c2ca5d9fd1ca5109c8b2e7e7b6d",
"text": "As social robots become more widely used as educational tutoring agents, it is important to study how children interact with these systems, and how effective they are as assessed by learning gains, sustained engagement, and perceptions of the robot tutoring system as a whole. In this paper, we summarize our prior work involving a long-term child-robot interaction study and outline important lessons learned regarding individual differences in children. We then discuss how these lessons inform future research in child-robot interaction.",
"title": ""
},
{
"docid": "41c5dbb3e903c007ba4b8f37d40b06ef",
"text": "BACKGROUND\nMyocardial infarction (MI) can directly cause ischemic mitral regurgitation (IMR), which has been touted as an indicator of poor prognosis in acute and early phases after MI. However, in the chronic post-MI phase, prognostic implications of IMR presence and degree are poorly defined.\n\n\nMETHODS AND RESULTS\nWe analyzed 303 patients with previous (>16 days) Q-wave MI by ECG who underwent transthoracic echocardiography: 194 with IMR quantitatively assessed in routine practice and 109 without IMR matched for baseline age (71+/-11 versus 70+/-9 years, P=0.20), sex, and ejection fraction (EF, 33+/-14% versus 34+/-11%, P=0.14). In IMR patients, regurgitant volume (RVol) and effective regurgitant orifice (ERO) area were 36+/-24 mL/beat and 21+/-12 mm(2), respectively. After 5 years, total mortality and cardiac mortality for patients with IMR (62+/-5% and 50+/-6%, respectively) were higher than for those without IMR (39+/-6% and 30+/-5%, respectively) (both P<0.001). In multivariate analysis, independently of all baseline characteristics, particularly age and EF, the adjusted relative risks of total and cardiac mortality associated with the presence of IMR (1.88, P=0.003 and 1.83, P=0.014, respectively) and quantified degree of IMR defined by RVol >/=30 mL (2.05, P=0.002 and 2.01, P=0.009) and by ERO >/=20 mm(2) (2.23, P=0.003 and 2.38, P=0.004) were high.\n\n\nCONCLUSIONS\nIn the chronic phase after MI, IMR presence is associated with excess mortality independently of baseline characteristics and degree of ventricular dysfunction. The mortality risk is related directly to the degree of IMR as defined by ERO and RVol. Therefore, IMR detection and quantification provide major information for risk stratification and clinical decision making in the chronic post-MI phase.",
"title": ""
},
{
"docid": "4e5d46d9bb7b9edbc4fc6a42b6314703",
"text": "Positive body image among adults is related to numerous indicators of well-being. However, no research has explored body appreciation among children. To facilitate our understanding of children’s positive body image, the current study adapts and validates the Body Appreciation Scale-2 (BAS-2; Tylka & WoodBarcalow, 2015a) for use with children. Three hundred and forty-four children (54.4% girls) aged 9–11 completed the adapted Body Appreciation Scale-2 for Children (BAS-2C) alongside measures of body esteem, media influence, body surveillance, mood, and dieting. A sub-sample of 154 participants (62.3% girls) completed the questionnaire 6-weeks later to examine stability (test-retest) reliability. The BAS-2C",
"title": ""
},
{
"docid": "35f8b54ee1fbf153cb483fc4639102a5",
"text": "This research studies the risk prediction of hospital readmissions using metaheuristic and data mining approaches. This is a critical issue in the U.S. healthcare system because a large percentage of preventable hospital readmissions derive from a low quality of care during patients’ stays in the hospital as well as poor arrangement of the discharge process. To reduce the number of hospital readmissions, the Centers for Medicare and Medicaid Services has launched a readmission penalty program in which hospitals receive reduced reimbursement for high readmission rates for Medicare beneficiaries. In the current practice, patient readmission risk is widely assessed by evaluating a LACE score including length of stay (L), acuity level of admission (A), comorbidity condition (C), and use of emergency rooms (E). However, the LACE threshold classifying highand low-risk readmitted patients is set up by clinic practitioners based on specific circumstances and experiences. This research proposed various data mining approaches to identify the risk group of a particular patient, including neural network model, random forest (RF) algorithm, and the hybrid model of swarm intelligence heuristic and support vector machine (SVM). The proposed neural network algorithm, the RF and the SVM classifiers are used to model patients’ characteristics, such as their ages, insurance payers, medication risks, etc. Experiments are conducted to compare the performance of the proposed models with previous research. Experimental results indicate that the proposed prediction SVM model with particle swarm parameter tuning outperforms other algorithms and achieves 78.4% on overall prediction accuracy, 97.3% on sensitivity. The high sensitivity shows its strength in correctly identifying readmitted patients. The outcome of this research will help reduce overall hospital readmission rates and allow hospitals to utilize their resources more efficiently to enhance interventions for high-risk patients. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "86e0c7b70de40fcd5179bf3ab67bc3a4",
"text": "The development of a scale to assess drug and other treatment effects on severely mentally retarded individuals was described. In the first stage of the project, an initial scale encompassing a large number of behavior problems was used to rate 418 residents. The scale was then reduced to an intermediate version, and in the second stage, 509 moderately to profoundly retarded individuals were rated. Separate factor analyses of the data from the two samples resulted in a five-factor scale comprising 58 items. The factors of the Aberrant Behavior Checklist have been labeled as follows: (I) Irritability, Agitation, Crying; (II) Lethargy, Social Withdrawal; (III) Stereotypic Behavior; (IV) Hyperactivity, Noncompliance; and (V) Inappropriate Speech. Average subscale scores were presented for the instrument, and the results were compared with empirically derived rating scales of childhood psychopathology and with factor analytic work in the field of mental retardation.",
"title": ""
}
] |
scidocsrr
|
a16f0041754899e1f6101f7b8a5d82a6
|
Agile Software Development Methodologies and Practices
|
[
{
"docid": "2e9b2eccefe56b9cbf8d5793cc3f1cbb",
"text": "This paper summarizes several classes of software cost estimation models and techniques: parametric models, expertise-based techniques, learning-oriented techniques, dynamics-based models, regression-based models, and composite-Bayesian techniques for integrating expertisebased and regression-based models. Experience to date indicates that neural-net and dynamics-based techniques are less mature than the other classes of techniques, but that all classes of techniques are challenged by the rapid pace of change in software technology. The primary conclusion is that no single technique is best for all situations, and that a careful comparison of the results of several approaches is most likely to produce realistic estimates.",
"title": ""
}
] |
[
{
"docid": "19f4100f2e1d5655edca03a269adf79a",
"text": "OBJECTIVES\nTo assess the influence of conventional glass ionomer cement (GIC) vs resin-modified GIC (RMGIC) as a base material for novel, super-closed sandwich restorations (SCSR) and its effect on shrinkage-induced crack propensity and in vitro accelerated fatigue resistance.\n\n\nMETHODS\nA standardized MOD slottype tooth preparation was applied to 30 extracted maxillary molars (5 mm depth/5 mm buccolingual width). A modified sandwich restoration was used, in which the enamel/dentin bonding agent was applied first (Optibond FL, Kerr), followed by a Ketac Molar (3M ESPE)(group KM, n = 15) or Fuji II LC (GC) (group FJ, n = 15) base, leaving 2 mm for composite resin material (Miris 2, Coltène-Whaledent). Shrinkageinduced enamel cracks were tracked with photography and transillumination. Samples were loaded until fracture or to a maximum of 185,000 cycles under isometric chewing (5 H z), starting with a load of 200 N (5,000 X), followed by stages of 400, 600, 800, 1,000, 1,200, and 1,400 N at a maximum of 30,000 X each. Groups were compared using the life table survival analysis (α = .008, Bonferroni method).\n\n\nRESULTS\nGroup FJ showed the highest survival rate (40% intact specimens) but did not differ from group KM (20%) or traditional direct restorations (13%, previous data). SCSR generated less shrinkage-induced cracks. Most failures were re-restorable (above the cementoenamel junction [CEJ]).\n\n\nCONCLUSIONS\nInclusion of GIC/RMGIC bases under large direct SCSRs does not affect their fatigue strength but tends to decrease the shrinkage-induced crack propensity.\n\n\nCLINICAL SIGNIFICANCE\nThe use of GIC/ RMGIC bases and the SCSR is an easy way to minimize polymerization shrinkage stress in large MOD defects without weakening the restoration.",
"title": ""
},
{
"docid": "4cb25adf48328e1e9d871940a97fdff2",
"text": "This article is concerned with parameters identification problems and computer modeling of thrust generation subsystem for small unmanned aerial vehicles (UAV) quadrotor type. In this paper approach for computer model generation of dynamic process of thrust generation subsystem that consists of fixed pitch propeller, EC motor and power amplifier, is considered. Due to the fact that obtainment of aerodynamic characteristics of propeller via analytical approach is quite time-consuming, and taking into account that subsystem consists of as well as propeller, motor and power converter with microcontroller control system, which operating algorithm is not always available from manufacturer, receiving trusted computer model of thrust generation subsystem via analytical approach is impossible. Identification of the system under investigation is performed from the perspective of “black box” with the known qualitative description of proceeded there dynamic processes. For parameters identification of subsystem special laboratory rig that described in this paper was designed.",
"title": ""
},
{
"docid": "88804c0fb16e507007983108811950dc",
"text": "We propose a neural probabilistic structured-prediction method for transition-based natural language processing, which integrates beam search and contrastive learning. The method uses a global optimization model, which can leverage arbitrary features over nonlocal context. Beam search is used for efficient heuristic decoding, and contrastive learning is performed for adjusting the model according to search errors. When evaluated on both chunking and dependency parsing tasks, the proposed method achieves significant accuracy improvements over the locally normalized greedy baseline on the two tasks, respectively.",
"title": ""
},
{
"docid": "0513ce3971cb0e438598ea6766be19ff",
"text": "This paper proposes two interference mitigation strategies that adjust the maximum transmit power of femtocell users to suppress the cross-tier interference at a macrocell base station (BS). The open-loop and the closed-loop control suppress the cross-tier interference less than a fixed threshold and an adaptive threshold based on the noise and interference (NI) level at the macrocell BS, respectively. Simulation results show that both schemes effectively compensate the uplink throughput degradation of the macrocell BS due to the cross-tier interference and that the closed-loop control provides better femtocell throughput than the open-loop control at a minimal cost of macrocell throughput.",
"title": ""
},
{
"docid": "5e5e2d038ae29b4c79c79abe3d20ae40",
"text": "Article history: Received 28 February 2013 Accepted 26 July 2013 Available online 11 October 2013 Fault diagnosis of Discrete Event Systems has become an active research area in recent years. The research activity in this area is driven by the needs of many different application domains such as manufacturing, process control, control systems, transportation, communication networks, software engineering, and others. The aim of this paper is to review the state-of the art of methods and techniques for fault diagnosis of Discrete Event Systems based on models that include faulty behaviour. Theoretical and practical issues related to model description tools, diagnosis processing structure, sensor selection, fault representation and inference are discussed. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d3f43eef5e36eb7b078b010482bdb115",
"text": "This study is aimed at constructing a correlative model between Internet addiction and mobile phone addiction; the aim is to analyse the correlation (if any) between the two traits and to discuss the influence confirming that the gender has difference on this fascinating topic; taking gender into account opens a new world of scientific study to us. The study collected 448 college students on an island as study subjects, with 61.2% males and 38.8% females. Moreover, this study issued Mobile Phone Addiction Scale and Internet Addiction Scale to conduct surveys on the participants and adopts the structural equation model (SEM) to process the collected data. According to the study result, (1) mobile phone addiction and Internet addiction are positively related; (2) female college students score higher than male ones in the aspect of mobile addiction. Lastly, this study proposes relevant suggestions to serve as a reference for schools, college students, and future studies based on the study results.",
"title": ""
},
{
"docid": "a66b5b6dea68e5460b227af4caa14ef3",
"text": "This paper will discuss and compare event representations across a variety of types of event annotation: Rich Entities, Relations, and Events (Rich ERE), Light Entities, Relations, and Events (Light ERE), Event Nugget (EN), Event Argument Extraction (EAE), Richer Event Descriptions (RED), and Event-Event Relations (EER). Comparisons of event representations are presented, along with a comparison of data annotated according to each event representation. An event annotation experiment is also discussed, including annotation for all of these representations on the same set of sample data, with the purpose of being able to compare actual annotation across all of these approaches as directly as possible. We walk through a brief example to illustrate the various annotation approaches, and to show the intersections among the various annotated data sets.",
"title": ""
},
{
"docid": "37d3bf208ee4e513a809fa94f93a2654",
"text": "Unplanned use of fertilizers leads to inferior quality of crops. Excess of one nutrient can make it difficult for the plant to absorb the other nutrients. To deal with this problem, the quality of soil is tested using a PH sensor that indicates the percentage of macronutrients present in the soil. Conventional methods used to test soil quality, involve the use of Ion Selective Field Effect Transistors (ISFET), Ion Selective Electrode (ISE) and Optical Sensors as the sensing units which were found to be very expensive. The prototype design will allow sprinkling of fertilizers to take place in zones which are deficient in these macronutrients (Nitrogen, Phosphorous and Potassium), proving it to be a cost efficient and farmer-friendly automated fertilization unit. Cost of the proposed unit is found to be one-seventh of that of the present methods, making it affordable for farmers and also saves the manual labor. Initial analysis and intensive case studies conducted in farmland situated near Ambedkar Nagar, Sarjapur also revealed the use of above mechanism to be more prominent and verified through practical implementation and experimentation as it takes lesser time to analyze the nutrient content than the other methods which require soil testing. Sprinklers cover discrete zones in the field that automate fertilization and reduce the effort of farmers in the rural areas. This novel technique also has a fast response time as it enables real time, in-situ soil nutrient analysis, thereby maintaining proper soil pH level required for a particular crop, reducing potentially negative environmental impacts.",
"title": ""
},
{
"docid": "20cbfe9c1d20bfd67bbcbf39641aa69a",
"text": "The CIPS-SIGHAN CLP 2010 Chinese Word Segmentation Bakeoff was held in the summer of 2010 to evaluate the current state of the art in word segmentation. It focused on the crossdomain performance of Chinese word segmentation algorithms. Eighteen groups submitted 128 results over two tracks (open training and closed training), four domains (literature, computer science, medicine and finance) and two subtasks (simplified Chinese and traditional Chinese). We found that compared with the previous Chinese word segmentation bakeoffs, the performance of cross-domain Chinese word segmentation is not much lower, and the out-of-vocabulary recall is improved.",
"title": ""
},
{
"docid": "080032ded41edee2a26320e3b2afb123",
"text": "The aim of this study was to evaluate the effects of calisthenic exercises on psychological status in patients with ankylosing spondylitis (AS) and multiple sclerosis (MS). This study comprised 40 patients diagnosed with AS randomized into two exercise groups (group 1 = hospital-based, group 2 = home-based) and 40 patients diagnosed with MS randomized into two exercise groups (group 1 = hospital-based, group 2 = home-based). The exercise programme was completed by 73 participants (hospital-based = 34, home-based = 39). Mean age was 33.75 ± 5.77 years. After the 8-week exercise programme in the AS group, the home-based exercise group showed significant improvements in erythrocyte sedimentation rates (ESR). The hospital-based exercise group showed significant improvements in terms of the Bath AS Metrology Index (BASMI) and Hospital Anxiety and Depression Scale-Anxiety (HADS-A) scores. After the 8-week exercise programme in the MS group, the home-based and hospital-based exercise groups showed significant improvements in terms of the 10-m walking test, Berg Balance Scale (BBS), HADS-A, and MS international Quality of Life (MusiQoL) scores. There was a significant improvement in the hospital-based and a significant deterioration in the home-based MS patients according to HADS-Depression (HADS-D) score. The positive effects of exercises on neurologic and rheumatic chronic inflammatory processes associated with disability should not be underestimated. Ziel der vorliegenden Studie war die Untersuchung der Wirkungen von gymnastischen Übungen auf die psychische Verfassung von Patienten mit Spondylitis ankylosans (AS) und multipler Sklerose (MS). Die Studie umfasste 40 Patienten mit der Diagnose AS, die randomisiert in 2 Übungsgruppen aufgeteilt wurden (Gruppe 1: stationär, Gruppe 2: ambulant), und 40 Patienten mit der Diagnose MS, die ebenfalls randomisiert in 2 Übungsgruppen aufgeteilt wurden (Gruppe 1: stationär, Gruppe 2: ambulant). Vollständig absolviert wurde das Übungsprogramm von 73 Patienten (stationär: 34, ambulant: 39). Das Durchschnittsalter betrug 33,75 ± 5,77 Jahre. Nach dem 8-wöchigen Übungsprogramm in der AS-Gruppe zeigten sich bei der ambulanten Übungsgruppe signifikante Verbesserungen bei der Blutsenkungsgeschwindigkeit (BSG). Die stationäre Übungsgruppe wies signifikante Verbesserungen in Bezug auf den BASMI-Score (Bath AS Metrology Index) und den HADS-A-Score (Hospital Anxiety and Depression Scale-Anxiety) auf. Nach dem 8-wöchigen Übungsprogramm in der MS-Gruppe zeigten sich sowohl in der ambulanten als auch in der stationären Übungsgruppe signifikante Verbesserungen hinsichtlich des 10-m-Gehtests, des BBS-Ergebnisses (Berg Balance Scale), des HADS-A- sowie des MusiQoL-Scores (MS international Quality of Life). Beim HADS-D-Score (HADS-Depression) bestand eine signifikante Verbesserung bei den stationären und eine signifikante Verschlechterung bei den ambulanten MS-Patienten. Die positiven Wirkungen von gymnastischen Übungen auf neurologische und rheumatische chronisch entzündliche Prozesse mit Behinderung sollten nicht unterschätzt werden.",
"title": ""
},
{
"docid": "af11d259a031d22f7ee595ee2a250136",
"text": "Cellular networks today are designed for and operate in dedicated licensed spectrum. At the same time there are other spectrum usage authorization models for wireless communication, such as unlicensed spectrum or, as widely discussed currently but not yet implemented in practice, various forms of licensed shared spectrum. Hence, cellular technology as of today can only operate in a subset of the spectrum that is in principle available. Hence, a future wireless system may benefit from the ability to access also spectrum opportunities other than dedicated licensed spectrum. It is therefore important to identify which additional ways of authorizing spectrum usage are deemed to become relevant in the future and to analyze the resulting technical requirements. The implications of sharing spectrum between different technologies are analyzed in this paper, both from efficiency and technology neutrality perspective. Different known sharing techniques are outlined and their applicability to the relevant range of future spectrum regulatory regimes is discussed. Based on an assumed range of relevant (according to the views of the authors) future spectrum sharing scenarios, a toolbox of certain spectrum sharing techniques is proposed as the basis for the design of spectrum sharing related functionality in future mobile broadband systems.",
"title": ""
},
{
"docid": "10d41334c88039e9d85ce6eb93cb9abf",
"text": "nonlinear functional analysis and its applications iii variational methods and optimization PDF remote sensing second edition models and methods for image processing PDF remote sensing third edition models and methods for image processing PDF guide to signals and patterns in image processing foundations methods and applications PDF introduction to image processing and analysis PDF principles of digital image processing advanced methods undergraduate topics in computer science PDF image processing analysis and machine vision PDF image acquisition and processing with labview image processing series PDF wavelet transform techniques for image resolution PDF sparse image and signal processing wavelets and related geometric multiscale analysis PDF nonstandard methods in stochastic analysis and mathematical physics dover books on mathematics PDF solution manual wavelet tour of signal processing PDF remote sensing image fusion signal and image processing of earth observations PDF image understanding using sparse representations synthesis lectures on image video and multimedia processing PDF",
"title": ""
},
{
"docid": "d763947e969ade3c54c18f0b792a0f7b",
"text": "Recent results in compressive sampling have shown that sparse signals can be recovered from a small number of random measurements. This property raises the question of whether random measurements can provide an efficient representation of sparse signals in an information-theoretic sense. Through both theoretical and experimental results, we show that encoding a sparse signal through simple scalar quantization of random measurements incurs a significant penalty relative to direct or adaptive encoding of the sparse signal. Information theory provides alternative quantization strategies, but they come at the cost of much greater estimation complexity.",
"title": ""
},
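The setup in the preceding record (random measurements of a sparse signal followed by simple scalar quantization) can be reproduced numerically. The sketch below is a rough, assumed illustration: it uses a uniform quantizer and off-the-shelf Orthogonal Matching Pursuit for recovery, which are not necessarily the exact choices analysed in the paper.

```python
# Rough numerical sketch: Gaussian random measurements of a sparse signal, uniform scalar
# quantization, and a standard sparse recovery step (OMP). Parameters are illustrative.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                      # signal length, number of measurements, sparsity
x = np.zeros(n); x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)

y = A @ x
step = 0.05                               # scalar quantizer step size (assumed)
y_q = step * np.round(y / step)           # simple uniform quantization of the measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(A, y_q)
err = np.linalg.norm(omp.coef_ - x) / np.linalg.norm(x)
print(f"relative reconstruction error with quantized measurements: {err:.3f}")
```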
{
"docid": "bc6cbf7da118c01d74914d58a71157ac",
"text": "Currently, there are increasing interests in text-to-speech (TTS) synthesis to use sequence-to-sequence models with attention. These models are end-to-end meaning that they learn both co-articulation and duration properties directly from text and speech. Since these models are entirely data-driven, they need large amounts of data to generate synthetic speech with good quality. However, in challenging speaking styles, such as Lombard speech, it is difficult to record sufficiently large speech corpora. Therefore, in this study we propose a transfer learning method to adapt a sequence-to-sequence based TTS system of normal speaking style to Lombard style. Moreover, we experiment with a WaveNet vocoder in synthesis of Lombard speech. We conducted subjective evaluations to assess the performance of the adapted TTS systems. The subjective evaluation results indicated that an adaptation system with the WaveNet vocoder clearly outperformed the conventional deep neural network based TTS system in synthesis of Lombard speech.",
"title": ""
},
{
"docid": "3a2729b235884bddc05dbdcb6a1c8fc9",
"text": "The people of Tumaco-La Tolita culture inhabited the borders of present-day Colombia and Ecuador. Already extinct by the time of the Spaniards arrival, they left a huge collection of pottery artifacts depicting everyday life; among these, disease representations were frequently crafted. In this article, we present the results of the personal examination of the largest collections of Tumaco-La Tolita pottery in Colombia and Ecuador; cases of Down syndrome, achondroplasia, mucopolysaccharidosis I H, mucopolysaccharidosis IV, a tumor of the face and a benign tumor in an old woman were found. We believe these to be among the earliest artistic representations of disease.",
"title": ""
},
{
"docid": "950a6a611f1ceceeec49534c939b4e0f",
"text": "Often signals and system parameters are most conveniently represented as complex-valued vectors. This occurs, for example, in array processing [1], as well as in communication systems [7] when processing narrowband signals using the equivalent complex baseband representation [2]. Furthermore, in many important applications one attempts to optimize a scalar real-valued measure of performance over the complex parameters defining the signal or system of interest. This is the case, for example, in LMS adaptive filtering where complex filter coefficients are adapted on line. To effect this adaption one attempts to optimize the performance measure by adjustments of the coefficients along its gradient direction [16, 23].",
"title": ""
},
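A concrete instance of the setting described in the preceding record is the complex-valued LMS adaptive filter, where a scalar real cost is minimized over complex coefficients. The sketch below is a minimal, assumed illustration (one common update convention, with y = w^H x), not code from the cited chapter.

```python
# Minimal complex-valued LMS sketch: adapt complex filter weights along the gradient of
# the real-valued squared error, as in the adaptive-filtering setting mentioned above.
import numpy as np

rng = np.random.default_rng(1)
n_taps, n_samples, mu = 4, 2000, 0.01
w_true = rng.standard_normal(n_taps) + 1j * rng.standard_normal(n_taps)

w = np.zeros(n_taps, dtype=complex)
x_buf = np.zeros(n_taps, dtype=complex)
for _ in range(n_samples):
    x_new = rng.standard_normal() + 1j * rng.standard_normal()
    x_buf = np.roll(x_buf, 1); x_buf[0] = x_new
    d = np.vdot(w_true, x_buf)            # desired output of the unknown system (w_true^H x)
    e = d - np.vdot(w, x_buf)             # a-priori error
    w = w + mu * np.conj(e) * x_buf       # complex LMS update for the y = w^H x convention

print("weight error norm:", np.linalg.norm(w - w_true))
```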
{
"docid": "a3ac978e59bdedc18c45d460dd8fc154",
"text": "Searching for information in distributed ledgers is currently not an easy task, as information relating to an entity may be scattered throughout the ledger with no index. As distributed ledger technologies become more established, they will increasingly be used to represent real world transactions involving many parties and the search requirements will grow. An index providing the ability to search using domain specific terms across multiple ledgers will greatly enhance to power, usability and scope of these systems. We have implemented a semantic index to the Ethereum blockchain platform, to expose distributed ledger data as Linked Data. As well as indexing blockand transactionlevel data according to the BLONDiE ontology, we have mapped smart contracts to the Minimal Service Model ontology, to take the first steps towards connecting smart contracts with Semantic Web Services.",
"title": ""
},
{
"docid": "0feae39f7e557a65699f686d14f4cf0f",
"text": "This paper describes the design of a multi-gigabit fiber-optic receiver with integrated large-area photo detectors for plastic optical fiber applications. An integrated 250 μm diameter non-SML NW/P-sub photo detector is adopted to allow efficient light coupling. The theory of applying a fully-differential pre-amplifier with a single-ended photo current is also examined and a super-Gm transimpedance amplifier has been proposed to drive a C PD of 14 pF to multi-gigahertz frequency. Both differential and common-mode operations of the proposed super-Gm transimpedance amplifier have been analyzed and a differential noise analysis is performed. A digitally-controlled linear equalizer is proposed to produce a slow-rising-slope frequency response to compensate for the photo detector up to 3 GHz. The proposed POF receiver consists of an illuminated signal photo detector, a shielded dummy photo detector, a super-Gm transimpedance amplifier, a variable-gain amplifier, a linear equalizer, a post amplifier, and an output driver. A test chip is fabricated in TSMC's 65 nm low-power CMOS process, and it consumes 50 mW of DC power (excluding the output driver) from a single 1.2 V supply. A bit-error rate of less than 10-12 has been measured at a data rate of 3.125 Gbps with a 670 nm VCSEL-based electro-optical transmitter.",
"title": ""
},
{
"docid": "5b6d68984b4f9a6e0f94e0a68768dc8c",
"text": "In this paper, we focus on a major internet problem which is a huge amount of uncategorized text. We review existing techniques used for feature selection and categorization. After reviewing the existing literature, it was found that there exist some gaps in existing algorithms, one of which is a requirement of the labeled dataset for the training of the classifier. Keywords— Bayesian; KNN; PCA; SVM; TF-IDF",
"title": ""
},
{
"docid": "6459493643eb7ff011fa0d8873382911",
"text": "This paper is about the effectiveness of qualitative easing; a government policy that is designed to mitigate risk through central bank purchases of privately held risky assets and their replacement by government debt, with a return that is guaranteed by the taxpayer. Policies of this kind have recently been carried out by national central banks, backed by implicit guarantees from national treasuries. I construct a general equilibrium model where agents have rational expectations and there is a complete set of financial securities, but where agents are unable to participate in financial markets that open before they are born. I show that a change in the asset composition of the central bank’s balance sheet will change equilibrium asset prices. Further, I prove that a policy in which the central bank stabilizes fluctuations in the stock market is Pareto improving and is costless to implement.",
"title": ""
}
] |
scidocsrr
|
5696d4593a6c514e4916dab560dc94f5
|
Chapter LVIII The Design, Play, and Experience Framework
|
[
{
"docid": "ecddd4f80f417dcec49021065394c89a",
"text": "Research in the area of educational technology has often been critiqued for a lack of theoretical grounding. In this article we propose a conceptual framework for educational technology by building on Shulman’s formulation of ‘‘pedagogical content knowledge’’ and extend it to the phenomenon of teachers integrating technology into their pedagogy. This framework is the result of 5 years of work on a program of research focused on teacher professional development and faculty development in higher education. It attempts to capture some of the essential qualities of teacher knowledge required for technology integration in teaching, while addressing the complex, multifaceted, and situated nature of this knowledge. We argue, briefly, that thoughtful pedagogical uses of technology require the development of a complex, situated form of knowledge that we call Technological Pedagogical Content Knowledge (TPCK). In doing so, we posit the complex roles of, and interplay among, three main components of learning environments: content, pedagogy, and technology. We argue that this model has much to offer to discussions of technology integration at multiple levels: theoretical, pedagogical, and methodological. In this article, we describe the theory behind our framework, provide examples of our teaching approach based upon the framework, and illustrate the methodological contributions that have resulted from this work.",
"title": ""
},
{
"docid": "e5a3119470420024b99df2d6eb14b966",
"text": "Why should wait for some days to get or receive the rules of play game design fundamentals book that you order? Why should you take it if you can get the faster one? You can find the same book that you order right here. This is it the book that you can receive directly after purchasing. This rules of play game design fundamentals is well known book in the world, of course many people will try to own it. Why don't you become the first? Still confused with the way?",
"title": ""
}
] |
[
{
"docid": "e737c117cd6e7083cd50069b70d236cb",
"text": "In this article we discuss a data structure, which combines advantages of two different ways for representing graphs: adjacency matrix and collection of adjacency lists. This data structure can fast add and search edges (advantages of adjacency matrix), use linear amount of memory, let to obtain adjacency list for certain vertex (advantages of collection of adjacency lists). Basic knowledge of linked lists and hash tables is required to understand this article. The article contains examples of implementation on Java.",
"title": ""
},
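The article in the preceding record gives its examples in Java; the sketch below shows the same hybrid idea in Python. Per-vertex hash sets give O(1) expected edge insertion and lookup (the advantage of an adjacency matrix) while still allowing direct iteration over a vertex's neighbours in linear memory (the advantage of adjacency lists).

```python
# Python sketch of the hybrid structure: a hash map from vertex to a hash set of neighbours.
class Graph:
    def __init__(self):
        self._adj = {}                       # vertex -> set of neighbours

    def add_vertex(self, v):
        self._adj.setdefault(v, set())

    def add_edge(self, u, v):                # undirected edge
        self.add_vertex(u); self.add_vertex(v)
        self._adj[u].add(v); self._adj[v].add(u)

    def has_edge(self, u, v):                # O(1) expected, like a matrix lookup
        return v in self._adj.get(u, ())

    def neighbours(self, v):                 # adjacency list of a given vertex
        return iter(self._adj.get(v, ()))

g = Graph()
g.add_edge("a", "b"); g.add_edge("a", "c")
print(g.has_edge("a", "b"), sorted(g.neighbours("a")))   # True ['b', 'c']
```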
{
"docid": "9dcee1244dd71174b15df9cfaba2ebdf",
"text": "In this paper, we investigate the dynamical behaviors of a Morris–Lecar neuron model. By using bifurcation methods and numerical simulations, we examine the global structure of bifurcations of the model. Results are summarized in various two-parameter bifurcation diagrams with the stimulating current as the abscissa and the other parameter as the ordinate. We also give the one-parameter bifurcation diagrams and pay much attention to the emergence of periodic solutions and bistability. Different membrane excitability is obtained by bifurcation analysis and frequency-current curves. The alteration of the membrane properties of the Morris–Lecar neurons is discussed.",
"title": ""
},
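For readers who want to reproduce the kind of behaviour analysed in the preceding record, the Morris–Lecar equations can be integrated directly. The sketch below uses one common parameter set from the literature (the Hopf regime), which is an assumption and not necessarily the parameterization used in the paper.

```python
# Sketch: integrate the Morris–Lecar model for a given stimulating current I and inspect
# the membrane voltage; parameters are a standard textbook choice, not the paper's.
import numpy as np
from scipy.integrate import solve_ivp

C, gL, gCa, gK = 20.0, 2.0, 4.4, 8.0
VL, VCa, VK = -60.0, 120.0, -84.0
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04

def morris_lecar(t, y, I):
    V, w = y
    m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))
    w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))
    tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))
    dV = (I - gL * (V - VL) - gCa * m_inf * (V - VCa) - gK * w * (V - VK)) / C
    dw = phi * (w_inf - w) / tau_w
    return [dV, dw]

sol = solve_ivp(morris_lecar, (0, 500), [-60.0, 0.0], args=(90.0,), max_step=0.5)
print("voltage range for I = 90:", sol.y[0].min(), sol.y[0].max())
```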
{
"docid": "39861e2759b709883f3d37a65d13834b",
"text": "BACKGROUND\nDeveloping countries account for 99 percent of maternal deaths annually. While increasing service availability and maintaining acceptable quality standards, it is important to assess maternal satisfaction with care in order to make it more responsive and culturally acceptable, ultimately leading to enhanced utilization and improved outcomes. At a time when global efforts to reduce maternal mortality have been stepped up, maternal satisfaction and its determinants also need to be addressed by developing country governments. This review seeks to identify determinants of women's satisfaction with maternity care in developing countries.\n\n\nMETHODS\nThe review followed the methodology of systematic reviews. Public health and social science databases were searched. English articles covering antenatal, intrapartum or postpartum care, for either home or institutional deliveries, reporting maternal satisfaction from developing countries (World Bank list) were included, with no year limit. Out of 154 shortlisted abstracts, 54 were included and 100 excluded. Studies were extracted onto structured formats and analyzed using the narrative synthesis approach.\n\n\nRESULTS\nDeterminants of maternal satisfaction covered all dimensions of care across structure, process and outcome. Structural elements included good physical environment, cleanliness, and availability of adequate human resources, medicines and supplies. Process determinants included interpersonal behavior, privacy, promptness, cognitive care, perceived provider competency and emotional support. Outcome related determinants were health status of the mother and newborn. Access, cost, socio-economic status and reproductive history also influenced perceived maternal satisfaction. Process of care dominated the determinants of maternal satisfaction in developing countries. Interpersonal behavior was the most widely reported determinant, with the largest body of evidence generated around provider behavior in terms of courtesy and non-abuse. Other aspects of interpersonal behavior included therapeutic communication, staff confidence and competence and encouragement to laboring women.\n\n\nCONCLUSIONS\nQuality improvement efforts in developing countries could focus on strengthening the process of care. Special attention is needed to improve interpersonal behavior, as evidence from the review points to the importance women attach to being treated respectfully, irrespective of socio-cultural or economic context. Further research on maternal satisfaction is required on home deliveries and relative strength of various determinants in influencing maternal satisfaction.",
"title": ""
},
{
"docid": "1fe0a9895bca5646908efc86e019f5d3",
"text": "The purpose of this study was to examine how violence from patients and visitors is related to emergency department (ED) nurses' work productivity and symptoms of post-traumatic stress disorder (PTSD). Researchers have found ED nurses experience a high prevalence of physical assaults from patients and visitors. Yet, there is little research which examines the effect violent events have on nurses' productivity, particularly their ability to provide safe and compassionate patient care. A cross-sectional design was used to gather data from ED nurses who are members of the Emergency Nurses Association in the United States. Participants were asked to complete the Impact of Events Scale-Revised and Healthcare Productivity Survey in relation to a stressful violent event. Ninety-four percent of nurses experienced at least one posttraumatic stress disorder symptom after a violent event, with 17% having scores high enough to be considered probable for PTSD. In addition, there were significant indirect relationships between stress symptoms and work productivity. Workplace violence is a significant stressor for ED nurses. Results also indicate violence has an impact on the care ED nurses provide. Interventions are needed to prevent the violence and to provide care to the ED nurse after an event.",
"title": ""
},
{
"docid": "3e6e72747036ca7255b449f4c93e15f7",
"text": "In this paper a planar antenna is studied for ultrawide-band (UWB) applications. This antenna consists of a wide-band tapered-slot feeding structure, curved radiators and a parasitic element. It is a modification of the conventional dual exponential tapered slot antenna and can be viewed as a printed dipole antenna with tapered slot feed. The design guideline is introduced, and the antenna parameters including return loss, radiation patterns and gain are investigated. To demonstrate the applicability of the proposed antenna to UWB applications, the transfer functions of a transmitting-receiving system with a pair of identical antennas are measured. Transient waveforms as the transmitting-receiving system being excited by a simulated pulse are discussed at the end of this paper.",
"title": ""
},
{
"docid": "7cb6582bf81aea75818eef2637c95c79",
"text": "Although multi-frame super resolution has been extensively studied in past decades, super resolving real-world video sequences still remains challenging. In existing systems, either the motion models are oversimplified, or important factors such as blur kernel and noise level are assumed to be known. Such models cannot deal with the scene and imaging conditions that vary from one sequence to another. In this paper, we propose a Bayesian approach to adaptive video super resolution via simultaneously estimating underlying motion, blur kernel and noise level while reconstructing the original high-res frames. As a result, our system not only produces very promising super resolution results that outperform the state of the art, but also adapts to a variety of noise levels and blur kernels. Theoretical analysis of the relationship between blur kernel, noise level and frequency-wise reconstruction rate is also provided, consistent with our experimental results.",
"title": ""
},
{
"docid": "e4183c85a9f6771fa06316b002e13188",
"text": "This paper provides an analysis of some argumentation in a biomedical genetics research article as a step towards developing a corpus of articles annotated to support research on argumentation. We present a specification of several argumentation schemes and inter-argument relationships to be annotated.",
"title": ""
},
{
"docid": "b515eb759984047f46f9a0c27b106f47",
"text": "Visual motion estimation is challenging, due to high data rates, fast camera motions, featureless or repetitive environments, uneven lighting, and many other issues. In this work, we propose a twolayer approach for visual odometry with stereo cameras, which runs in real-time and combines feature-based matching with semi-dense direct image alignment. Our method initializes semi-dense depth estimation, which is computationally expensive, from motion that is tracked by a fast but robust feature point-based method. By that, we are not only able to efficiently estimate the pose of the camera with a high frame rate, but also to reconstruct the 3D structure of the environment at image gradients, which is useful, e.g., for mapping and obstacle avoidance. Experiments on datasets captured by a micro aerial vehicle (MAV) show that our approach is faster than state-of-the-art methods without losing accuracy. Moreover, our combined approach achieves promising results on the KITTI dataset, which is very challenging for direct methods, because of the low frame rate in conjunction with fast motion.",
"title": ""
},
{
"docid": "a743ac1f5b37c35bb78cf7efc3d3a3c8",
"text": "Concepts concerning mediation in the causal inference literature are reviewed. Notions of direct and indirect effects from a counterfactual approach to mediation are compared with those arising from the standard regression approach to mediation of Baron and Kenny (1986), commonly utilized in the social science literature. It is shown that concepts of direct and indirect effect from causal inference generalize those described by Baron and Kenny and that under appropriate identification assumptions these more general direct and indirect effects from causal inference can be estimated using regression even when there are interactions between the primary exposure of interest and the mediator. A number of conceptual issues are discussed concerning the interpretation of identification conditions for mediation, the notion of counterfactuals based on hypothetical interventions and the so called consistency and composition assumptions.",
"title": ""
},
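The point in the preceding record that regression can still yield generalized direct and indirect effects when there is an exposure-mediator interaction can be shown numerically. The sketch below follows the standard two-regression formulas for a continuous mediator and outcome; the simulated data, coefficient names, and reference covariate value are assumptions for illustration.

```python
# Hedged illustration: natural direct/indirect effects from a mediator regression and an
# outcome regression with an exposure-mediator interaction (exposure a* = 0 vs. a = 1).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 5000
C = rng.standard_normal(n)
A = rng.binomial(1, 0.5, n)
M = 0.5 + 0.8 * A + 0.3 * C + rng.standard_normal(n)
Y = 1.0 + 0.4 * A + 0.6 * M + 0.2 * A * M + 0.3 * C + rng.standard_normal(n)
df = pd.DataFrame(dict(A=A, M=M, C=C, Y=Y))

med = smf.ols("M ~ A + C", df).fit()            # mediator model
out = smf.ols("Y ~ A + M + A:M + C", df).fit()  # outcome model with interaction

b0, b1, b2 = med.params["Intercept"], med.params["A"], med.params["C"]
t1, t2, t3 = out.params["A"], out.params["M"], out.params["A:M"]
c_ref = 0.0                                     # covariate value at which effects are reported
nde = t1 + t3 * (b0 + b1 * 0 + b2 * c_ref)      # natural direct effect
nie = (t2 + t3 * 1) * b1                        # natural indirect effect
print(f"NDE ~ {nde:.2f}, NIE ~ {nie:.2f}, total ~ {nde + nie:.2f}")
```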
{
"docid": "55610ac91c3abb52e3bbd95c289b9b95",
"text": "A robot finger is developed for five-fingered robot hand having equal number of DOF to human hand. The robot hand is driven by a new method proposed by authors using ultrasonic motors and elastic elements. The method utilizes restoring force of elastic element as driving power for grasping an object, so that the hand can perform the soft and stable grasping motion with no power supply. In addition, all the components are placed inside the hand thanks to the ultrasonic motors with compact size and high torque at low speed. Applying the driving method to multi-DOF mechanism, a robot index finger is designed and implemented. It has equal number of joints and DOF to human index finger, and it is also equal in size to the finger of average adult male. The performance of the robot finger is confirmed by fundamental driving test.",
"title": ""
},
{
"docid": "413c4d1115e8042cce44308583649279",
"text": "With the growing popularity of microblogging services such as Twitter in recent years, an increasing number of users are using these services in their daily lives. The huge volume of information generated by users raises new opportunities in various applications and areas. Inferring user interests plays a significant role in providing personalized recommendations on microblogging services, and also on third-party applications providing social logins via these services, especially in cold-start situations. In this survey, we review user modeling strategies with respect to inferring user interests from previous studies. To this end, we focus on four dimensions of inferring user interest profiles: (1) data collection, (2) representation of user interest profiles, (3) construction and enhancement of user interest profiles, and (4) the evaluation of the constructed profiles. Through this survey, we aim to provide an overview of state-of-the-art user modeling strategies for inferring user interest profiles on microblogging social networks with respect to the four dimensions. For each dimension, we review and summarize previous studies based on specified criteria. Finally, we discuss some challenges and opportunities for future work in this research domain.",
"title": ""
},
{
"docid": "9ffb34f554e9d31938b77a33be187014",
"text": "Job recommendation systems mainly use different sources of data in order to give the better content for the end user. Developing the well-performing system requires complex hybrid approaches of representing similarity based on the content of job postings and resumes as well as interactions between them. We develop an efficient hybrid networkbased job recommendation system which uses Personalized PageRank algorithm in order to rank vacancies for the users based on the similarity between resumes and job posts as textual documents, along with previous interactions of users with vacancies. Our approach achieved the recall of 50% and generated more applies for the jobs during the online A/B test than previous algorithms.",
"title": ""
},
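The ranking step in the preceding record can be pictured as Personalized PageRank on a user-vacancy graph. The sketch below is an assumed-shape illustration (node names, edge weights, and the restart node are invented) using networkx for brevity.

```python
# Sketch: a small bipartite user/vacancy graph with similarity- and interaction-weighted
# edges, scored with Personalized PageRank restarted at the target user.
import networkx as nx

G = nx.Graph()
G.add_edge("user:alice", "job:data_engineer", weight=0.9)   # resume-vacancy similarity
G.add_edge("user:alice", "job:ml_engineer", weight=0.4)
G.add_edge("user:bob", "job:ml_engineer", weight=0.8)       # e.g. a past apply
G.add_edge("user:bob", "job:backend_dev", weight=0.7)
G.add_edge("user:alice", "user:bob", weight=0.3)            # similar resumes

scores = nx.pagerank(G, alpha=0.85, personalization={"user:alice": 1.0}, weight="weight")
ranked_jobs = sorted((n for n in scores if n.startswith("job:")),
                     key=scores.get, reverse=True)
print(ranked_jobs)   # vacancies ordered for user:alice
```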
{
"docid": "a9b620269c6448facfe0ae8e034f41fa",
"text": "The aim of this project is to make progress towards building a machine learning agent that understands natural language and can perform basic reasoning. Towards this nebulous goal, we focus on question answering: Can an agent answer a query based on a given set of natural language facts? We combine LSTM sentence embedding models with an attention mechanism and obtain good results on the Facebook bAbI dataset [1], outperforming [2] on 1 task and achieving similar performance on several others.",
"title": ""
},
{
"docid": "507a60e62e9d2086481e7a306d012e52",
"text": "Health monitoring systems have rapidly evolved recently, and smart systems have been proposed to monitor patient current health conditions, in our proposed and implemented system, we focus on monitoring the patient's blood pressure, and his body temperature. Based on last decade statistics of medical records, death rates due to hypertensive heart disease, shows that the blood pressure is a crucial risk factor for atherosclerosis and ischemic heart diseases; thus, preventive measures should be taken against high blood pressure which provide the ability to track, trace and save patient's life at appropriate time is an essential need for mankind. Nowadays, Globalization demands Smart cities, which involves many attributes and services, such as government services, Intelligent Transportation Systems (ITS), energy, health care, water and waste. This paper proposes a system architecture for smart healthcare based on GSM and GPS technologies. The objective of this work is providing an effective application for Real Time Health Monitoring and Tracking. The system will track, trace, monitor patients and facilitate taking care of their health; so efficient medical services could be provided at appropriate time. By Using specific sensors, the data will be captured and compared with a configurable threshold via microcontroller which is defined by a specialized doctor who follows the patient; in any case of emergency a short message service (SMS) will be sent to the Doctor's mobile number along with the measured values through GSM module. furthermore, the GPS provides the position information of the monitored person who is under surveillance all the time. Moreover, the paper demonstrates the feasibility of realizing a complete end-to-end smart health system responding to the real health system design requirements by taking in consideration wider vital human health parameters such as respiration rate, nerves signs ... etc. The system will be able to bridge the gap between patients - in dramatic health change occasions- and health entities who response and take actions in real time fashion.",
"title": ""
},
{
"docid": "e1d9ff28da38fcf8ea3a428e7990af25",
"text": "The Autonomous car is a complex topic, different technical fields like: Automotive engineering, Control engineering, Informatics, Artificial Intelligence etc. are involved in solving the human driver replacement with an artificial (agent) driver. The problem is even more complicated because usually, nowadays, having and driving a car defines our lifestyle. This means that the mentioned (major) transformation is also a cultural issue. The paper will start with the mentioned cultural aspects related to a self-driving car and will continue with the big picture of the system.",
"title": ""
},
{
"docid": "7ae332505306f94f8f2b4e3903188126",
"text": "Clustering Web services would greatly boost the ability of Web service search engine to retrieve relevant services. The performance of traditional Web service description language (WSDL)-based Web service clustering is not satisfied, due to the singleness of data source. Recently, Web service search engines such as Seekda! allow users to manually annotate Web services using tags, which describe functions of Web services or provide additional contextual and semantical information. In this paper, we cluster Web services by utilizing both WSDL documents and tags. To handle the clustering performance limitation caused by uneven tag distribution and noisy tags, we propose a hybrid Web service tag recommendation strategy, named WSTRec, which employs tag co-occurrence, tag mining, and semantic relevance measurement for tag recommendation. Extensive experiments are conducted based on our real-world dataset, which consists of 15,968 Web services. The experimental results demonstrate the effectiveness of our proposed service clustering and tag recommendation strategies. Specifically, compared with traditional WSDL-based Web service clustering approaches, the proposed approach produces gains in both precision and recall for up to 14 % in most cases.",
"title": ""
},
{
"docid": "acb0f1e123cb686b4aeab418f380bd79",
"text": "Surface parameterization is necessary for many graphics tasks: texture-preserving simplification, remeshing, surface painting, and precomputation of solid textures. The stretch caused by a given parameterization determines the sampling rate on the surface. In this article, we present an automatic parameterization method for segmenting a surface into patches that are then flattened with little stretch.\n Many objects consist of regions of relatively simple shapes, each of which has a natural parameterization. Based on this observation, we describe a three-stage feature-based patch creation method for manifold surfaces. The first two stages, genus reduction and feature identification, are performed with the help of distance-based surface functions. In the last stage, we create one or two patches for each feature region based on a covariance matrix of the feature's surface points.\n To reduce stretch during patch unfolding, we notice that stretch is a 2 × 2 tensor, which in ideal situations is the identity. Therefore, we use the <i>Green-Lagrange tensor</i> to measure and to guide the optimization process. Furthermore, we allow the boundary vertices of a patch to be optimized by adding <i>scaffold triangles</i>. We demonstrate our feature-based patch creation and patch unfolding methods for several textured models.\n Finally, to evaluate the quality of a given parameterization, we describe an image-based error measure that takes into account stretch, seams, smoothness, packing efficiency, and surface visibility.",
"title": ""
},
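The stretch measure named in the preceding record can be computed per triangle from the Jacobian of the parameter-to-surface map. The sketch below is a minimal assumed illustration: the Green-Lagrange tensor E = 0.5 (JᵀJ − I) vanishes exactly when the triangle is mapped without stretch.

```python
# Sketch: Green-Lagrange stretch tensor of one triangle of a parameterized surface.
import numpy as np

def green_lagrange(tri_uv, tri_3d):
    """tri_uv: 3x2 parameter-domain coords, tri_3d: 3x3 surface coords of one triangle."""
    E2d = np.column_stack([tri_uv[1] - tri_uv[0], tri_uv[2] - tri_uv[0]])   # 2x2 uv edges
    E3d = np.column_stack([tri_3d[1] - tri_3d[0], tri_3d[2] - tri_3d[0]])   # 3x2 3D edges
    J = E3d @ np.linalg.inv(E2d)                                            # 3x2 Jacobian
    return 0.5 * (J.T @ J - np.eye(2))                                      # E == 0 means no stretch

uv = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
xyz = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 1.0, 0.0]])         # stretched along u
print(green_lagrange(uv, xyz))   # nonzero E[0,0] reveals stretch in the u direction
```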
{
"docid": "9eabe9a867edbceee72bd20d483ad886",
"text": "Inspired by recent advances of deep learning in instance segmentation and object tracking, we introduce the concept of convnet-based guidance applied to video object segmentation. Our model proceeds on a per-frame basis, guided by the output of the previous frame towards the object of interest in the next frame. We demonstrate that highly accurate object segmentation in videos can be enabled by using a convolutional neural network (convnet) trained with static images only. The key component of our approach is a combination of offline and online learning strategies, where the former produces a refined mask from the previous frame estimate and the latter allows to capture the appearance of the specific object instance. Our method can handle different types of input annotations such as bounding boxes and segments while leveraging an arbitrary amount of annotated frames. Therefore our system is suitable for diverse applications with different requirements in terms of accuracy and efficiency. In our extensive evaluation, we obtain competitive results on three different datasets, independently from the type of input annotation.",
"title": ""
},
{
"docid": "a0a13e7e5ce06e5cc28a2b23ea64c8f5",
"text": "The efficacy study was performed to prove the equivalent efficacy of dexibuprofen compared to the double dose of racemic ibuprofen and to show a clinical dose-response relationship of dexibuprofen. The 1-year tolerability study was carried out to investigate the tolerability of dexibuprofen. In the efficacy study 178 inpatients with osteoarthritis of the hip were assigned to 600 or 1200 mg of dexibuprofen or 2400 mg of racemic ibuprofen daily. The primary end-point was the improvement of the WOMAC OA index. A 1-year open tolerability study included 223 outpatients pooled from six studies. The main parameter was the incidence of clinical adverse events. In the efficacy study the evaluation of the improvement of the WOMAC OA index showed equivalence of dexibuprofen 400 mg t.i.d. compared to racemic ibuprofen 800 mg t.i.d., with dexibuprofen being borderline superior (P = 0.055). The comparison between the 400 mg t.i.d. and 200 mg t.i.d. doses confirmed a significant superior efficacy of dexibuprofen 400 mg (P = 0.023). In the tolerability study the overall incidence of clinical adverse events was 15.2% (GI tract 11.7%, CNS 1.3%, skin 1.3%, others 0.9%). The active enantiomer dexibuprofen proved to be an effective NSAID with a significant dose-response relationship. Compared to the double dose of racemic ibuprofen, dexibuprofen was at least equally efficient, with borderline superiority over dexibuprofen (P = 0.055). The tolerability study in 223 patients on dexibuprofen showed an incidence of clinical adverse events of 15.2% after 12 months. The results of the studies suggest that dexibuprofen is an effective NSAID with good tolerability.",
"title": ""
},
{
"docid": "ab662b1dd07a7ae868f70784408e1ce1",
"text": "We use autoencoders to create low-dimensional embeddings of underlying patient phenotypes that we hypothesize are a governing factor in determining how different patients will react to different interventions. We compare the performance of autoencoders that take fixed length sequences of concatenated timesteps as input with a recurrent sequence-to-sequence autoencoder. We evaluate our methods on around 35,500 patients from the latest MIMIC III dataset from Beth Israel Deaconess Hospital.",
"title": ""
}
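The fixed-length-window variant described in the preceding record (concatenated timesteps fed to an autoencoder) can be sketched in a few lines of PyTorch. Feature dimension, window length, and embedding size below are made-up values, and random tensors stand in for the MIMIC III windows.

```python
# Hypothetical minimal autoencoder over concatenated timesteps, producing a low-dimensional
# patient embedding; all sizes are illustrative assumptions.
import torch
import torch.nn as nn

n_feats, n_steps, emb_dim = 20, 12, 16
flat_dim = n_feats * n_steps

class PhenotypeAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(flat_dim, 64), nn.ReLU(),
                                     nn.Linear(64, emb_dim))
        self.decoder = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU(),
                                     nn.Linear(64, flat_dim))
    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = PhenotypeAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, flat_dim)                 # stand-in for concatenated vitals/labs windows
for _ in range(50):
    recon, _ = model(x)
    loss = nn.functional.mse_loss(recon, x)
    opt.zero_grad(); loss.backward(); opt.step()
print("final reconstruction MSE:", float(loss))
```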
] |
scidocsrr
|
08783703748f4805351206e24d216c29
|
Development of extensible open information extraction
|
[
{
"docid": "5f2818d3a560aa34cc6b3dbfd6b8f2cc",
"text": "Open Information Extraction (IE) systems extract relational tuples from text, without requiring a pre-specified vocabulary, by identifying relation phrases and associated arguments in arbitrary sentences. However, stateof-the-art Open IE systems such as REVERB and WOE share two important weaknesses – (1) they extract only relations that are mediated by verbs, and (2) they ignore context, thus extracting tuples that are not asserted as factual. This paper presents OLLIE, a substantially improved Open IE system that addresses both these limitations. First, OLLIE achieves high yield by extracting relations mediated by nouns, adjectives, and more. Second, a context-analysis step increases precision by including contextual information from the sentence in the extractions. OLLIE obtains 2.7 times the area under precision-yield curve (AUC) compared to REVERB and 1.9 times the AUC of WOE.",
"title": ""
},
{
"docid": "ebaeacf1c0eeb4a4818b4ac050e60b0c",
"text": "Open information extraction (Open IE) systems aim to obtain relation tuples with highly scalable extraction in portable across domain by identifying a variety of relation phrases and their arguments in arbitrary sentences. The first generation of Open IE learns linear chain models based on unlexicalized features such as Part-of-Speech (POS) or shallow tags to label the intermediate words between pair of potential arguments for identifying extractable relations. Open IE currently is developed in the second generation that is able to extract instances of the most frequently observed relation types such as Verb, Noun and Prep, Verb and Prep, and Infinitive with deep linguistic analysis. They expose simple yet principled ways in which verbs express relationships in linguistics such as verb phrase-based extraction or clause-based extraction. They obtain a significantly higher performance over previous systems in the first generation. In this paper, we describe an overview of two Open IE generations including strengths, weaknesses and application areas.",
"title": ""
}
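As a rough feel for the verb-mediated style of extraction discussed in the records above, a dependency parse can be scanned for verbs with subject and object children. The sketch below is a toy illustration using spaCy; it is not any of the cited systems (OLLIE, REVERB, WOE) and ignores context and non-verbal relations.

```python
# Toy first-generation-style extractor: verb-mediated (subject, verb, object) triples
# from spaCy dependency parses. Requires: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def verb_triples(text):
    triples = []
    for sent in nlp(text).sents:
        for tok in sent:
            if tok.pos_ == "VERB":
                subj = [c for c in tok.children if c.dep_ in ("nsubj", "nsubjpass")]
                obj = [c for c in tok.children if c.dep_ in ("dobj", "attr")]
                if subj and obj:
                    triples.append((subj[0].text, tok.lemma_, obj[0].text))
    return triples

print(verb_triples("OLLIE extracts relation tuples. The parser labels intermediate words."))
# e.g. [('OLLIE', 'extract', 'tuples'), ('parser', 'label', 'words')]
```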
] |
[
{
"docid": "f271596a45a3104554bfe975ac8b4d6c",
"text": "In many regions of the visual system, the activity of a neuron is normalized by the activity of other neurons in the same region. Here we show that a similar normalization occurs during olfactory processing in the Drosophila antennal lobe. We exploit the orderly anatomy of this circuit to independently manipulate feedforward and lateral input to second-order projection neurons (PNs). Lateral inhibition increases the level of feedforward input needed to drive PNs to saturation, and this normalization scales with the total activity of the olfactory receptor neuron (ORN) population. Increasing total ORN activity also makes PN responses more transient. Strikingly, a model with just two variables (feedforward and total ORN activity) accurately predicts PN odor responses. Finally, we show that discrimination by a linear decoder is facilitated by two complementary transformations: the saturating transformation intrinsic to each processing channel boosts weak signals, while normalization helps equalize responses to different stimuli.",
"title": ""
},
{
"docid": "4538c5874872a0081593407d09e4c6fa",
"text": "PatternAttribution is a recent method, introduced in the vision domain, that explains classifications of deep neural networks. We demonstrate that it also generates meaningful interpretations in the language domain.",
"title": ""
},
{
"docid": "d4793c300bca8137d0da7ffdde75a72b",
"text": "The expectation-maximization (EM) method can facilitate maximizing likelihood functions that arise in statistical estimation problems. In the classical EM paradigm, one iteratively maximizes the conditional log-likelihood of a single unobservable complete data space, rather than maximizing the intractable likelihood function for the measured or incomplete data. EM algorithms update all parameters simultaneously, which has two drawbacks: 1) slow convergence, and 2) difficult maximization steps due to coupling when smoothness penalties are used. This paper describes the space-alternating generalized EM (SAGE) method, which updates the parameters sequentially by alternating between several small hidden-data spaces defined by the algorithm designer. We prove that the sequence of estimates monotonically increases the penalized-likelihood objective, we derive asymptotic convergence rates, and we provide sufficient conditions for monotone convergence in norm. Two signal processing applications illustrate the method: estimation of superimposed signals in Gaussian noise, and image reconstruction from Poisson measurements. In both applications, our SAGE algorithms easily accommodate smoothness penalties and converge faster than the EM algorithms.",
"title": ""
},
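For readers less familiar with the classical EM paradigm that the record above contrasts SAGE against, the sketch below shows the standard simultaneous-update EM iteration on a toy two-component Gaussian mixture. This is a generic illustration, not the superimposed-signal or Poisson imaging applications from the paper; SAGE would instead cycle through smaller hidden-data spaces, updating one parameter group at a time.

```python
# Classical EM for a two-component Gaussian mixture with shared variance:
# E-step computes responsibilities, M-step updates all parameters simultaneously.
import numpy as np

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2, 1, 400), rng.normal(3, 1, 600)])

pi, mu1, mu2, var = 0.5, -1.0, 1.0, 1.0
for _ in range(100):
    # E-step: responsibility of component 1 for each sample
    p1 = pi * np.exp(-(x - mu1) ** 2 / (2 * var))
    p2 = (1 - pi) * np.exp(-(x - mu2) ** 2 / (2 * var))
    r = p1 / (p1 + p2)
    # M-step: simultaneous update of all parameters (the coupling SAGE avoids)
    pi = r.mean()
    mu1 = (r * x).sum() / r.sum()
    mu2 = ((1 - r) * x).sum() / (1 - r).sum()
    var = (r * (x - mu1) ** 2 + (1 - r) * (x - mu2) ** 2).sum() / len(x)

print(f"pi={pi:.2f}, mu1={mu1:.2f}, mu2={mu2:.2f}, var={var:.2f}")
```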
{
"docid": "3b54f22dd95670f618650f2d71e58068",
"text": "This paper proposes a novel multi-view human action recognition method by discovering and sharing common knowledge among different video sets captured in multiple viewpoints. To our knowledge, we are the first to treat a specific view as target domain and the others as source domains and consequently formulate the multi-view action recognition into the cross-domain learning framework. First, the classic bag-of-visual word framework is implemented for visual feature extraction in individual viewpoints. Then, we propose a cross-domain learning method with block-wise weighted kernel function matrix to highlight the saliency components and consequently augment the discriminative ability of the model. Extensive experiments are implemented on IXMAS, the popular multi-view action dataset. The experimental results demonstrate that the proposed method can consistently outperform the state of the arts.",
"title": ""
},
{
"docid": "8ad20ab4523e4cc617142a2de299dd4a",
"text": "OBJECTIVE\nTo determine the reliability and internal validity of the Hypospadias Objective Penile Evaluation (HOPE)-score, a newly developed scoring system assessing the cosmetic outcome in hypospadias.\n\n\nPATIENTS AND METHODS\nThe HOPE scoring system incorporates all surgically-correctable items: position of meatus, shape of meatus, shape of glans, shape of penile skin and penile axis. Objectivity was established with standardized photographs, anonymously coded patients, independent assessment by a panel, standards for a \"normal\" penile appearance, reference pictures and assessment of the degree of abnormality. A panel of 13 pediatric urologists completed 2 questionnaires, each consisting of 45 series of photographs, at an interval of at least 1 week. The inter-observer reliability, intra-observer reliability and internal validity were analyzed.\n\n\nRESULTS\nThe correlation coefficients for the HOPE-score were as follows: intra-observer reliability 0.817, inter-observer reliability 0.790, \"non-parametric\" internal validity 0.849 and \"parametric\" internal validity 0.842. These values reflect good reproducibility, sufficient agreement among observers and a valid measurement of differences and similarities in cosmetic appearance.\n\n\nCONCLUSIONS\nThe HOPE-score is the first scoring system that fulfills the criteria of a valid measurement tool: objectivity, reliability and validity. These favorable properties support its use as an objective outcome measure of the cosmetic result after hypospadias surgery.",
"title": ""
},
{
"docid": "5fa860515f72bca0667134bb61d2f695",
"text": "In the broad field of evaluation, the importance of stakeholders is often acknowledged and different categories of stakeholders are identified. Far less frequent is careful attention to analysis of stakeholders' interests, needs, concerns, power, priorities, and perspectives and subsequent application of that knowledge to the design of evaluations. This article is meant to help readers understand and apply stakeholder identification and analysis techniques in the design of credible evaluations that enhance primary intended use by primary intended users. While presented using a utilization-focused-evaluation (UFE) lens, the techniques are not UFE-dependent. The article presents a range of the most relevant techniques to identify and analyze evaluation stakeholders. The techniques are arranged according to their ability to inform the process of developing and implementing an evaluation design and of making use of the evaluation's findings.",
"title": ""
},
{
"docid": "f19f6c8caec01e3ca9c14981c0ea05fa",
"text": "Non-invasive cuff-less Blood Pressure (BP) estimation from Photoplethysmogram (PPG) is a well known challenge in the field of affordable healthcare. This paper presents a set of improvements over an existing method that estimates BP using 2-element Windkessel model from PPG signal. A noisy PPG corpus is collected using fingertip pulse oximeter, from two different locations in India. Exhaustive pre-processing techniques, such as filtering, baseline and topline correction are performed on the noisy PPG signals, followed by the selection of consistent cycles. Subsequently, the most relevant PPG features and demographic features are selected through Maximal Information Coefficient (MIC) score for learning the latent parameters controlling BP. Experimental results reveal that overall error in estimating BP lies within 10% of a commercially available digital BP monitoring device. Also, use of alternative latent parameters that incorporate the variation in cardiac output, shows a better trend following for abnormally low and high BP.",
"title": ""
},
{
"docid": "bd42bffcbb76d4aadde3df502326655a",
"text": "We present a novel class of actor-critic algorithms for actors consisting of sets of interacting modules. We present, analyze theoretically, and empirically evaluate an update rule for each module, which requires only local information: the module’s input, output, and the TD error broadcast by a critic. Such updates are necessary when computation of compatible features becomes prohibitively difficult and are also desirable to increase the biological plausibility of reinforcement learning methods.",
"title": ""
},
{
"docid": "eee5ffff364575afad1dcebbf169777b",
"text": "In this paper, we proposed the multiclass support vector machine (SVM) with the error-correcting output codes for the multiclass electroencephalogram (EEG) signals classification problem. The probabilistic neural network (PNN) and multilayer perceptron neural network were also tested and benchmarked for their performance on the classification of the EEG signals. Decision making was performed in two stages: feature extraction by computing the wavelet coefficients and the Lyapunov exponents and classification using the classifiers trained on the extracted features. The purpose was to determine an optimum classification scheme for this problem and also to infer clues about the extracted features. Our research demonstrated that the wavelet coefficients and the Lyapunov exponents are the features which well represent the EEG signals and the multiclass SVM and PNN trained on these features achieved high classification accuracies",
"title": ""
},
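The classification stage described in the preceding record (a multiclass SVM with error-correcting output codes) maps directly onto standard tooling. The sketch below is a hedged stand-in: random features take the place of the wavelet coefficients and Lyapunov exponents, and the three class labels are only an assumed example.

```python
# Sketch: SVM wrapped in error-correcting output codes on stand-in EEG-style features.
import numpy as np
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.standard_normal((300, 12))            # stand-in for wavelet + Lyapunov features
y = rng.integers(0, 3, 300)                   # e.g. three EEG classes (assumed labels)
X[y == 1] += 1.5; X[y == 2] -= 1.5            # make the toy classes separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = OutputCodeClassifier(SVC(kernel="rbf", gamma="scale"), code_size=2.0, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```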
{
"docid": "7456ceee02f50c9e92a665d362a9a419",
"text": "Visualization of dynamically changing networks (graphs) is a significant challenge for researchers. Previous work has experimentally compared animation, small multiples, and other techniques, and found trade-offs between these. One potential way to avoid such trade-offs is to combine previous techniques in a hybrid visualization. We present two taxonomies of visualizations of dynamic graphs: one of non-hybrid techniques, and one of hybrid techniques. We also describe a prototype, called DiffAni, that allows a graph to be visualized as a sequence of three kinds of tiles: diff tiles that show difference maps over some time interval, animation tiles that show the evolution of the graph over some time interval, and small multiple tiles that show the graph state at an individual time slice. This sequence of tiles is ordered by time and covers all time slices in the data. An experimental evaluation of DiffAni shows that our hybrid approach has advantages over non-hybrid techniques in certain cases.",
"title": ""
},
{
"docid": "e680f8b83e7a2137321cc644724827de",
"text": "A dual-band antenna is developed on a flexible Liquid Crystal Polymer (LCP) substrate for simultaneous operation at 2.45 and 5.8 GHz in high frequency Radio Frequency IDentification (RFID) systems. The response of the low profile double T-shaped slot antenna is preserved when the antenna is placed on platforms such as wood and cardboard, and when bent to conform to a cylindrical plastic box. Furthermore, experiments show that the antenna is still operational when placed at a distance of around 5cm from a metallic surface.",
"title": ""
},
{
"docid": "fd0cfef7be75a9aa98229c25ffaea864",
"text": "A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discrimininatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.",
"title": ""
},
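The two mechanisms named in the preceding record, the length-as-probability squashing non-linearity and routing-by-agreement, can be rendered in a few lines of numpy. The sketch below is a toy illustration only; capsule counts, dimensions, and the number of routing iterations are assumed values.

```python
# Toy routing-by-agreement between 6 lower-level and 3 higher-level capsules.
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    norm2 = (v ** 2).sum(axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * v / np.sqrt(norm2 + eps)

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(5)
n_lower, n_upper, dim = 6, 3, 4
u_hat = rng.standard_normal((n_lower, n_upper, dim))   # predictions from lower capsules

b = np.zeros((n_lower, n_upper))                       # routing logits
for _ in range(3):                                     # routing iterations (assumed: 3)
    c = softmax(b, axis=1)                             # coupling coefficients per lower capsule
    s = (c[..., None] * u_hat).sum(axis=0)             # weighted sum into each upper capsule
    v = squash(s)                                      # upper capsule output vectors
    b += (u_hat * v[None, :, :]).sum(axis=-1)          # agreement (scalar product) updates logits

print("upper-capsule lengths (existence probabilities):", np.linalg.norm(v, axis=-1))
```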
{
"docid": "5f78f4f492b45eb5efd50d2cda340413",
"text": "This study examined the anatomy of the infrapatellar fat pad (IFP) in relation to knee pathology and surgical approaches. Eight embalmed knees were dissected via semicircular parapatellar incisions and each IFP was examined. Their volume, shape and constituent features were recorded. They were found in all knees and were constant in shape, consisting of a central body with medial and lateral extensions. The ligamentum mucosum was found inferior to the central body in all eight knees, while a fat tag was located superior to the central body in seven cases. Two clefts were consistently found on the posterior aspect of the IFP, a horizontal cleft below the ligamentum mucosum in six knees and a vertical cleft above, in seven cases. Our study found that the IFP is a constant structure in the knee joint, which may play a number of roles in knee joint function and pathology. Its significance in knee surgery is discussed.",
"title": ""
},
{
"docid": "fed23432144a6929c4f3442b10157771",
"text": "Knowledge has widely been acknowledged as one of the most important factors for corporate competitiveness, and we have witnessed an explosion of IS/IT solutions claiming to provide support for knowledge management (KM). A relevant question to ask, though, is how systems and technology intended for information such as the intranet can be able to assist in the managing of knowledge. To understand this, we must examine the relationship between information and knowledge. Building on Polanyi’s theories, I argue that all knowledge is tacit, and what can be articulated and made tangible outside the human mind is merely information. However, information and knowledge affect one another. By adopting a multi-perspective of the intranet where information, awareness, and communication are all considered, this interaction can best be supported and the intranet can become a useful and people-inclusive KM environment. 1. From philosophy to IT Ever since the ancient Greek period, philosophers have discussed what knowledge is. Early thinkers such as Plato and Aristotle where followed by Hobbes and Locke, Kant and Hegel, and into the 20th century by the likes of Wittgenstein, Popper, and Kuhn, to name but a few of the more prominent western philosophers. In recent years, we have witnessed a booming interest in knowledge also from other disciplines; organisation theorists, information system developers, and economists have all been swept away by the knowledge management avalanche. It seems, though, that the interest is particularly strong within the IS/IT community, where new opportunities to develop computer systems are welcomed. A plausible question to ask then is how knowledge relates to information technology (IT). Can IT at all be used to handle 0-7695-1435-9/02 $ knowledge, and if so, what sort of knowledge? What sorts of knowledge are there? What is knowledge? It seems we have little choice but to return to these eternal questions, but belonging to the IS/IT community, we should not approach knowledge from a philosophical perspective. As observed by Alavi and Leidner, the knowledge-based theory of the firm was never built on a universal truth of what knowledge really is but on a pragmatic interest in being able to manage organisational knowledge [2]. The discussion in this paper shall therefore be aimed at addressing knowledge from an IS/IT perspective, trying to answer two overarching questions: “What does the relationship between information and knowledge look like?” and “What role does an intranet have in this relationship?” The purpose is to critically review the contemporary KM literature in order to clarify the relationships between information and knowledge that commonly and implicitly are assumed within the IS/IT community. Epistemologically, this paper shall address the difference between tacit and explicit knowledge by accounting for some of the views more commonly found in the KM literature. Some of these views shall also be questioned, and the prevailing assump tion that tacit and explicit are two forms of knowledge shall be criticised by returning to Polanyi’s original work. My interest in the tacit side of knowledge, i.e. the aspects of knowledge that is omnipresent, taken for granted, and affecting our understanding without us being aware of it, has strongly influenced the content of this paper. Ontologywise, knowledge may be seen to exist on different levels, i.e. individual, group, organisation and inter-organisational [23]. 
Here, my primary interest is on the group and organisational levels. However, these two levels are obviously made up of individuals and we are thus bound to examine the personal aspects of knowledge as well, though be it from a macro perspective. 17.00 (c) 2002 IEEE 1 Proceedings of the 35th Hawaii International Conference on System Sciences 2002 2. Opposite traditions – and a middle way? When examining the knowledge literature, two separate tracks can be identified: the commodity view and the community view [35]. The commodity view of or the objective approach to knowledge as some absolute and universal truth has since long been the dominating view within science. Rooted in the positivism of the mid-19th century, the commodity view is still especially strong in the natural sciences. Disciples of this tradition understand knowledge as an artefact that can be handled in discrete units and that people may possess. Knowledge is a thing for which we can gain evidence, and knowledge as such is separated from the knower [33]. Metaphors such as drilling, mining, and harvesting are used to describe how knowledge is being managed. There is also another tradition that can be labelled the community view or the constructivist approach. This tradition can be traced back to Locke and Hume but is in its modern form rooted in the critique of the established quantitative approach to science that emerged primarily amongst social scientists during the 1960’s, and resulted in the publication of books by Garfinkel, Bourdieu, Habermas, Berger and Luckmann, and Glaser and Strauss. These authors argued that reality (and hence also knowledge) should be understood as socially constructed. According to this tradition, it is impossible to define knowledge universally; it can only be defined in practice, in the activities of and interactions between individuals. Thus, some understand knowledge to be universal and context-independent while others conceive it as situated and based on individual experiences. Maybe it is a little bit Author(s) Data Informa",
"title": ""
},
{
"docid": "e76afdc4a867789e6bcc92876a6b52af",
"text": "An Optimal fuzzy logic guidance (OFLG) law for a surface to air homing missile is introduced. The introduced approach is based on the well-known proportional navigation guidance (PNG) law. Particle Swarm Optimization (PSO) is used to optimize the of the membership functions' (MFs) parameters of the proposed design. The distribution of the MFs is obtained by minimizing a nonlinear constrained multi-objective optimization problem where; control effort and miss distance are treated as competing objectives. The performance of the introduced guidance law is compared with classical fuzzy logic guidance (FLG) law as well as PNG one. The simulation results show that OFLG performs better than other guidance laws. Moreover, the introduced design is shown to perform well with the existence of noisy measurements.",
"title": ""
},
{
"docid": "15fd626d5a6eb1258b8846137c62f97d",
"text": "Since leadership plays a vital role in democratic movements, understanding the nature of democratic leadership is essential. However, the definition of democratic leadership is unclear (Gastil, 1994). Also, little research has defined democratic leadership in the context of democratic movements. The leadership literature has paid no attention to democratic leadership in such movements, focusing on democratic leadership within small groups and organizations. This study proposes a framework of democratic leadership in democratic movements. The framework includes contexts, motivations, characteristics, and outcomes of democratic leadership. The study considers sacrifice, courage, symbolism, citizen participation, and vision as major characteristics in the display of democratic leadership in various political, social, and cultural contexts. Applying the framework to Nelson Mandela, Lech Walesa, and Dae Jung Kim; the study considers them as exemplary models of democratic leadership in democratic movements for achieving democracy. They have showed crucial characteristics of democratic leadership, offering lessons for democratic governance.",
"title": ""
},
{
"docid": "74ecfe68112ba6309ac355ba1f7b9818",
"text": "We present a novel approach to probabilistic time series forecasting that combines state space models with deep learning. By parametrizing a per-time-series linear state space model with a jointly-learned recurrent neural network, our method retains desired properties of state space models such as data efficiency and interpretability, while making use of the ability to learn complex patterns from raw data offered by deep learning approaches. Our method scales gracefully from regimes where little training data is available to regimes where data from large collection of time series can be leveraged to learn accurate models. We provide qualitative as well as quantitative results with the proposed method, showing that it compares favorably to the state-of-the-art.",
"title": ""
},
{
"docid": "7100b0adb93419a50bbaeb1b7e32edf5",
"text": "Fractals have been very successful in quantifying the visual complexity exhibited by many natural patterns, and have captured the imagination of scientists and artists alike. Our research has shown that the poured patterns of the American abstract painter Jackson Pollock are also fractal. This discovery raises an intriguing possibility - are the visual characteristics of fractals responsible for the long-term appeal of Pollock's work? To address this question, we have conducted 10 years of scientific investigation of human response to fractals and here we present, for the first time, a review of this research that examines the inter-relationship between the various results. The investigations include eye tracking, visual preference, skin conductance, and EEG measurement techniques. We discuss the artistic implications of the positive perceptual and physiological responses to fractal patterns.",
"title": ""
},
{
"docid": "2cfc7eeae3259a43a24ef56932d8b27f",
"text": "This paper presents Platener, a system that allows quickly fabricating intermediate design iterations of 3D models, a process also known as low-fidelity fabrication. Platener achieves its speed-up by extracting straight and curved plates from the 3D model and substituting them with laser cut parts of the same size and thickness. Only the regions that are of relevance to the current design iteration are executed as full-detail 3D prints. Platener connects the parts it has created by automatically inserting joints. To help fast assembly it engraves instructions. Platener allows users to customize substitution results by (1) specifying fidelity-speed tradeoffs, (2) choosing whether or not to convert curved surfaces to plates bent using heat, and (3) specifying the conversion of individual plates and joints interactively. Platener is designed to best preserve the fidelity of func-tional objects, such as casings and mechanical tools, all of which contain a large percentage of straight/rectilinear elements. Compared to other low-fab systems, such as faBrickator and WirePrint, Platener better preserves the stability and functionality of such objects: the resulting assemblies have fewer parts and the parts have the same size and thickness as in the 3D model. To validate our system, we converted 2.250 3D models downloaded from a 3D model site (Thingiverse). Platener achieves a speed-up of 10 or more for 39.5% of all objects.",
"title": ""
}
] |
scidocsrr
|
811485a5cf46d72e029480ba51b2cbbe
|
Determining the Chemical Compositions of Garlic Plant and its Existing Active Element
|
[
{
"docid": "85e63b1689e6fd77cdfc1db191ba78ee",
"text": "Singh VK, Singh DK. Pharmacological Effects of Garlic (Allium sativum L.). ARBS Annu Rev Biomed Sci 2008;10:6-26. Garlic (Allium sativum L.) is a bulbous herb used as a food item, spice and medicine in different parts of the world. Its medicinal use is based on traditional experience passed from generation to generation. Researchers from various disciplines are now directing their efforts towards discovering the effects of garlic on human health. Interest in garlic among researchers, particularly those in medical profession, has stemmed from the search for a drug that has a broad-spectrum therapeutic effect with minimal toxicity. Recent studies indicate that garlic extract has antimicrobial activity against many genera of bacteria, fungi and viruses. The role of garlic in preventing cardiovascular disease has been acclaimed by several authors. Chemical constituents of garlic have been investigated for treatment of hyperlipidemia, hypertension, platelet aggregation and blood fibrinolytic activity. Experimental data indicate that garlic may have anticarcinogenic effect. Recent researches in the area of pest control show that garlic has strong insecticidal, nematicidal, rodenticidal and molluscicidal activity. Despite field trials and laboratory experiments on the pesticidal activity of garlic have been conducted, more studies on the way of delivery in environment and mode of action are still recommended for effective control of pest. Adverse effects of oral ingestion and topical exposure of garlic include body odor, allergic reactions, acceleration in the effects of anticoagulants and reduction in the efficacy of anti-AIDS drug Saquinavir. ©by São Paulo State University ISSN 1806-8774",
"title": ""
}
] |
[
{
"docid": "2b3335d6fb1469c4848a201115a78e2c",
"text": "Laser grooving is used for the singulation of advanced CMOS wafers since it is believed that it exerts lower mechanical stress than traditional blade dicing. The very local heating of wafers, however, might result in high thermal stress around the heat affected zone. In this work we present a model to predict the temperature distribution, material removal, and the resulting stress, in a sandwiched structure of metals and dielectric materials that are commonly found in the back-end of line of semiconductor wafers. Simulation results on realistic three dimensional back-end structures reveal that the presence of metals clearly affects both the ablation depth, and the stress in the material. Experiments showed a similar observation for the ablation depth. The shape of the crater, however, was found to be more uniform than predicted by simulations, which is probably due to the redistribution of molten metal.",
"title": ""
},
{
"docid": "e273298153872073e463662b5d6d8931",
"text": "The lack of readily-available large corpora of aligned monolingual sentence pairs is a major obstacle to the development of Statistical Machine Translation-based paraphrase models. In this paper, we describe the use of annotated datasets and Support Vector Machines to induce larger monolingual paraphrase corpora from a comparable corpus of news clusters found on the World Wide Web. Features include: morphological variants; WordNet synonyms and hypernyms; loglikelihood-based word pairings dynamically obtained from baseline sentence alignments; and formal string features such as word-based edit distance. Use of this technique dramatically reduces the Alignment Error Rate of the extracted corpora over heuristic methods based on position of the sentences in the text.",
"title": ""
},
{
"docid": "52c9ee7e057ff9ade5daf44ea713e889",
"text": "In this work, we present a novel peak-piloted deep network (PPDN) that uses a sample with peak expression (easy sample) to supervise the intermediate feature responses for a sample of non-peak expression (hard sample) of the same type and from the same subject. The expression evolving process from nonpeak expression to peak expression can thus be implicitly embedded in the network to achieve the invariance to expression intensities.",
"title": ""
},
{
"docid": "2a827e858bf93cd5edba7feb3c0448f9",
"text": "Kinetic analyses (joint moments, powers and work) of the lower limbs were performed during normal walking to determine what further information can be gained from a three-dimensional model over planar models. It was to be determined whether characteristic moment and power profiles exist in the frontal and transverse planes across subjects and how much work was performed in these planes. Kinetic profiles from nine subjects were derived using a three-dimensional inverse dynamics model of the lower limbs and power profiles were then calculated by a dot product of the angular velocities and joint moments resolved in a global reference system. Characteristic joint moment profiles across subjects were found for the hip, knee and ankle joints in all planes except for the ankle frontal moment. As expected, the major portion of work was performed in the plane of progression since the goal of locomotion is to support the body against gravity while generating movements which propel the body forward. However, the results also showed that substantial work was done in the frontal plane by the hip during walking (23% of the total work at that joint). The characteristic joint profiles suggest defined motor patterns and functional roles in the frontal and transverse planes. Kinetic analysis in three dimensions is necessary particularly if the hip joint is being examined as a substantial amount of work was done in the frontal plane of the hip to control the pelvis and trunk against gravitational forces.",
"title": ""
},
{
"docid": "34bd41f7384d6ee4d882a39aec167b3e",
"text": "This paper presents a robust feedback controller for ball and beam system (BBS). The BBS is a nonlinear system in which a ball has to be balanced on a particular beam position. The proposed nonlinear controller designed for the BBS is based upon Backstepping control technique which guarantees the boundedness of tracking error. To tackle the unknown disturbances, an external disturbance estimator (EDE) has been employed. The stability analysis of the overall closed loop robust control system has been worked out in the sense of Lyapunov theory. Finally, the simulation studies have been done to demonstrate the suitability of proposed scheme.",
"title": ""
},
{
"docid": "4a837ccd9e392f8c7682446d9a3a3743",
"text": "This paper investigates the applicability of Genetic Programming type systems to dynamic game environments. Grammatical Evolution was used to evolve Behaviour Trees, in order to create controllers for the Mario AI Benchmark. The results obtained reinforce the applicability of evolutionary programming systems to the development of artificial intelligence in games, and in dynamic systems in general, illustrating their viability as an alternative to more standard AI techniques.",
"title": ""
},
{
"docid": "d563b025b084b53c30afba4211870f2d",
"text": "Collaborative filtering (CF) techniques recommend items to users based on their historical ratings. In real-world scenarios, user interests may drift over time since they are affected by moods, contexts, and pop culture trends. This leads to the fact that a user’s historical ratings comprise many aspects of user interests spanning a long time period. However, at a certain time slice, one user’s interest may only focus on one or a couple of aspects. Thus, CF techniques based on the entire historical ratings may recommend inappropriate items. In this paper, we consider modeling user-interest drift over time based on the assumption that each user has multiple counterparts over temporal domains and successive counterparts are closely related. We adopt the cross-domain CF framework to share the static group-level rating matrix across temporal domains, and let user-interest distribution over item groups drift slightly between successive temporal domains. The derived method is based on a Bayesian latent factor model which can be inferred using Gibbs sampling. Our experimental results show that our method can achieve state-of-the-art recommendation performance as well as explicitly track and visualize user-interest drift over time.",
"title": ""
},
{
"docid": "5399b924cdf1d034a76811360b6c018d",
"text": "Psychological construction models of emotion state that emotions are variable concepts constructed by fundamental psychological processes, whereas according to basic emotion theory, emotions cannot be divided into more fundamental units and each basic emotion is represented by a unique and innate neural circuitry. In a previous study, we found evidence for the psychological construction account by showing that several brain regions were commonly activated when perceiving different emotions (i.e. a general emotion network). Moreover, this set of brain regions included areas associated with core affect, conceptualization and executive control, as predicted by psychological construction models. Here we investigate directed functional brain connectivity in the same dataset to address two questions: 1) is there a common pathway within the general emotion network for the perception of different emotions and 2) if so, does this common pathway contain information to distinguish between different emotions? We used generalized psychophysiological interactions and information flow indices to examine the connectivity within the general emotion network. The results revealed a general emotion pathway that connects neural nodes involved in core affect, conceptualization, language and executive control. Perception of different emotions could not be accurately classified based on the connectivity patterns from the nodes of the general emotion pathway. Successful classification was achieved when connections outside the general emotion pathway were included. We propose that the general emotion pathway functions as a common pathway within the general emotion network and is involved in shared basic psychological processes across emotions. However, additional connections within the general emotion network are required to classify different emotions, consistent with a constructionist account.",
"title": ""
},
{
"docid": "485b48bb7b489d2be73de84994a16e42",
"text": "This paper presents Conflux, a fast, scalable and decentralized blockchain system that optimistically process concurrent blocks without discarding any as forks. The Conflux consensus protocol represents relationships between blocks as a direct acyclic graph and achieves consensus on a total order of the blocks. Conflux then, from the block order, deterministically derives a transaction total order as the blockchain ledger. We evaluated Conflux on Amazon EC2 clusters with up to 20k full nodes. Conflux achieves a transaction throughput of 5.76GB/h while confirming transactions in 4.5-7.4 minutes. The throughput is equivalent to 6400 transactions per second for typical Bitcoin transactions. Our results also indicate that when running Conflux, the consensus protocol is no longer the throughput bottleneck. The bottleneck is instead at the processing capability of individual nodes.",
"title": ""
},
{
"docid": "73e398a5ae434dbd2a10ddccd2cfb813",
"text": "Face alignment aims to estimate the locations of a set of landmarks for a given image. This problem has received much attention as evidenced by the recent advancement in both the methodology and performance. However, most of the existing works neither explicitly handle face images with arbitrary poses, nor perform large-scale experiments on non-frontal and profile face images. In order to address these limitations, this paper proposes a novel face alignment algorithm that estimates both 2D and 3D landmarks and their 2D visibilities for a face image with an arbitrary pose. By integrating a 3D point distribution model, a cascaded coupled-regressor approach is designed to estimate both the camera projection matrix and the 3D landmarks. Furthermore, the 3D model also allows us to automatically estimate the 2D landmark visibilities via surface normal. We use a substantially larger collection of all-pose face images to evaluate our algorithm and demonstrate superior performances than the state-of-the-art methods.",
"title": ""
},
{
"docid": "e7b7c37a340b4a22dddff59fc6651218",
"text": "Different types of printing methods have recently attracted interest as emerging technologies for fabrication of drug delivery systems. If printing is combined with different oral film manufacturing technologies such as solvent casting and other techniques, multifunctional structures can be created to enable further complexity and high level of sophistication. This review paper intends to provide profound understanding and future perspectives for the potential use of printing technologies in the preparation of oral film formulations as novel drug delivery systems. The described concepts include advanced multi-layer coatings, stacked systems, and integrated bioactive multi-compartments, which comprise of integrated combinations of diverse materials to form sophisticated bio-functional constructs. The advanced systems enable tailored dosing for individual drug therapy, easy and safe manufacturing of high-potent drugs, development and manufacturing of fixed-dose combinations and product tracking for anti-counterfeiting strategies.",
"title": ""
},
{
"docid": "6082c0252dffe7903512e36f13da94eb",
"text": "Thousands of storage tanks in oil refineries have to be inspected manually to prevent leakage and/or any other potential catastrophe. A wall climbing robot with permanent magnet adhesion mechanism equipped with nondestructive sensor has been designed. The robot can be operated autonomously or manually. In autonomous mode the robot uses an ingenious coverage algorithm based on distance transform function to navigate itself over the tank surface in a back and forth motion to scan the external wall for the possible faults using sensors without any human intervention. In manual mode the robot can be navigated wirelessly from the ground station to any location of interest. Preliminary experiment has been carried out to test the prototype.",
"title": ""
},
{
"docid": "45a15455945fdd03ee726b285b8dd75a",
"text": "The nonequispaced Fourier transform arises in a variety of application areas, from medical imaging to radio astronomy to the numerical solution of partial differential equations. In a typical problem, one is given an irregular sampling of N data in the frequency domain and one is interested in reconstructing the corresponding function in the physical domain. When the sampling is uniform, the fast Fourier transform (FFT) allows this calculation to be computed in O(N logN) operations rather than O(N2) operations. Unfortunately, when the sampling is nonuniform, the FFT does not apply. Over the last few years, a number of algorithms have been developed to overcome this limitation and are often referred to as nonuniform FFTs (NUFFTs). These rely on a mixture of interpolation and the judicious use of the FFT on an oversampled grid [A. Dutt and V. Rokhlin, SIAM J. Sci. Comput., 14 (1993), pp. 1368–1383]. In this paper, we observe that one of the standard interpolation or “gridding” schemes, based on Gaussians, can be accelerated by a significant factor without precomputation and storage of the interpolation weights. This is of particular value in twoand threedimensional settings, saving either 10dN in storage in d dimensions or a factor of about 5–10 in CPU time (independent of dimension).",
"title": ""
},
{
"docid": "2f23d51ffd54a6502eea07883709d016",
"text": "Named entity recognition (NER) is a popular domain of natural language processing. For this reason, many tools exist to perform this task. Amongst other points, they differ in the processing method they rely upon, the entity types they can detect, the nature of the text they can handle, and their input/output formats. This makes it difficult for a user to select an appropriate NER tool for a specific situation. In this article, we try to answer this question in the context of biographic texts. For this matter, we first constitute a new corpus by annotating 247 Wikipedia articles. We then select 4 publicly available, well known and free for research NER tools for comparison: Stanford NER, Illinois NET, OpenCalais NER WS and Alias-i LingPipe. We apply them to our corpus, assess their performances and compare them. When considering overall performances, a clear hierarchy emerges: Stanford has the best results, followed by LingPipe, Illionois and OpenCalais. However, a more detailed evaluation performed relatively to entity types and article categories highlights the fact their performances are diversely influenced by those factors. This complementarity opens an interesting perspective regarding the combination of these individual tools in order to improve performance.",
"title": ""
},
{
"docid": "ce72785681a085be7f947ab6fa787b79",
"text": "A computationally implemented model of the transmission of linguistic behavior over time is presented. In this model [the iterated learning model (ILM)], there is no biological evolution, natural selection, nor any measurement of the success of the agents at communicating (except for results-gathering purposes). Nevertheless, counter to intuition, significant evolution of linguistic behavior is observed. From an initially unstructured communication system (a protolanguage), a fully compositional syntactic meaning-string mapping emerges. Furthermore, given a nonuniform frequency distribution over a meaning space and a production mechanism that prefers short strings, a realistic distribution of string lengths and patterns of stable irregularity emerges, suggesting that the ILM is a good model for the evolution of some of the fundamental features of human language.",
"title": ""
},
{
"docid": "7ba37f2dcf95f36727e1cd0f06e31cc0",
"text": "The neonate receiving parenteral nutrition (PN) therapy requires a physiologically appropriate solution in quantity and quality given according to a timely, cost-effective strategy. Maintaining tissue integrity, metabolism, and growth in a neonate is challenging. To support infant growth and influence subsequent development requires critical timing for nutrition assessment and intervention. Providing amino acids to neonates has been shown to improve nitrogen balance, glucose metabolism, and amino acid profiles. In contrast, supplying the lipid emulsions (currently available in the United States) to provide essential fatty acids is not the optimal composition to help attenuate inflammation. Recent investigations with an omega-3 fish oil IV emulsion are promising, but there is need for further research and development. Complications from PN, however, remain problematic and include infection, hepatic dysfunction, and cholestasis. These complications in the neonate can affect morbidity and mortality, thus emphasizing the preference to provide early enteral feedings, as well as medication therapy to improve liver health and outcome. Potential strategies aimed at enhancing PN therapy in the neonate are highlighted in this review, and a summary of guidelines for practical management is included.",
"title": ""
},
{
"docid": "343115505ad21c973475c12c3657d82c",
"text": "New transportation fuels are badly needed to reduce our heavy dependence on imported oil and to reduce the release of greenhouse gases that cause global climate change; cellulosic biomass is the only inexpensive resource that can be used for sustainable production of the large volumes of liquid fuels that our transportation sector has historically favored. Furthermore, biological conversion of cellulosic biomass can take advantage of the power of biotechnology to take huge strides toward making biofuels cost competitive. Ethanol production is particularly well suited to marrying this combination of need, resource, and technology. In fact, major advances have already been realized to competitively position cellulosic ethanol with corn ethanol. However, although biotechno logy presents important opportunities to achieve very low costs, pretreatment of naturally resistant cellulosic mate rials is essential if we are to achieve high yields from biological operations; this operation is projected to be the single, most expensive processing step, representing about 20% of the total cost. In addition, pretreatment has pervasive impacts on all other major operations in the overall conversion scheme from choice of feedstock through to size reduction, hydrolysis, and fermentation, and on to product recovery, residue processing, and co-product potential. A number of different pretreatments involving biological, chemical, physical, and thermal approaches have been investigated over the years, but only those that employ chemicals currently offer the high yields and low costs vital to economic success. Among the most promising are pretreatments using dilute acid, sulfur dioxide, near-neutral pH control, ammonia expansion, aqueous ammonia, and lime, with signifi cant differences among the sugar-release patterns. Although projected costs for these options are similar when applied to corn stover, a key need now is to dramatically improve our knowledge of these systems with the goal of advancing pretreatment to substantially reduce costs and to accelerate commercial applications. © 2007 Society of Chemical Industry and John Wiley & Sons, Ltd",
"title": ""
},
{
"docid": "0cccb226bb72be281ead8c614bd46293",
"text": "We introduce a model for incorporating contextual information (such as geography) in learning vector-space representations of situated language. In contrast to approaches to multimodal representation learning that have used properties of the object being described (such as its color), our model includes information about the subject (i.e., the speaker), allowing us to learn the contours of a word’s meaning that are shaped by the context in which it is uttered. In a quantitative evaluation on the task of judging geographically informed semantic similarity between representations learned from 1.1 billion words of geo-located tweets, our joint model outperforms comparable independent models that learn meaning in isolation.",
"title": ""
},
{
"docid": "33c5ddb4633cc09c87b8ee26d7c54e51",
"text": "INTRODUCTION\nAdvances in technology have revolutionized the medical field and changed the way healthcare is delivered. Unmanned aerial vehicles (UAVs) are the next wave of technological advancements that have the potential to make a huge splash in clinical medicine. UAVs, originally developed for military use, are making their way into the public and private sector. Because they can be flown autonomously and can reach almost any geographical location, the significance of UAVs are becoming increasingly apparent in the medical field.\n\n\nMATERIALS AND METHODS\nWe conducted a comprehensive review of the English language literature via the PubMed and Google Scholar databases using search terms \"unmanned aerial vehicles,\" \"UAVs,\" and \"drone.\" Preference was given to clinical trials and review articles that addressed the keywords and clinical medicine.\n\n\nRESULTS\nPotential applications of UAVs in medicine are broad. Based on articles identified, we grouped UAV application in medicine into three categories: (1) Prehospital Emergency Care; (2) Expediting Laboratory Diagnostic Testing; and (3) Surveillance. Currently, UAVs have been shown to deliver vaccines, automated external defibrillators, and hematological products. In addition, they are also being studied in the identification of mosquito habitats as well as drowning victims at beaches as a public health surveillance modality.\n\n\nCONCLUSIONS\nThese preliminary studies shine light on the possibility that UAVs may help to increase access to healthcare for patients who may be otherwise restricted from proper care due to cost, distance, or infrastructure. As with any emerging technology and due to the highly regulated healthcare environment, the safety and effectiveness of this technology need to be thoroughly discussed. Despite the many questions that need to be answered, the application of drones in medicine appears to be promising and can both increase the quality and accessibility of healthcare.",
"title": ""
}
] |
scidocsrr
|
37daee87cefd6eabae129bc0df7338dd
|
Blockchain distributed ledger technologies for biomedical and health care applications
|
[
{
"docid": "9e65315d4e241dc8d4ea777247f7c733",
"text": "A long-standing focus on compliance has traditionally constrained development of fundamental design changes for Electronic Health Records (EHRs). We now face a critical need for such innovation, as personalization and data science prompt patients to engage in the details of their healthcare and restore agency over their medical data. In this paper, we propose MedRec: a novel, decentralized record management system to handle EHRs, using blockchain technology. Our system gives patients a comprehensive, immutable log and easy access to their medical information across providers and treatment sites. Leveraging unique blockchain properties, MedRec manages authentication, confidentiality, accountability and data sharing—crucial considerations when handling sensitive information. A modular design integrates with providers' existing, local data storage solutions, facilitating interoperability and making our system convenient and adaptable. We incentivize medical stakeholders (researchers, public health authorities, etc.) to participate in the network as blockchain “miners”. This provides them with access to aggregate, anonymized data as mining rewards, in return for sustaining and securing the network via Proof of Work. MedRec thus enables the emergence of data economics, supplying big data to empower researchers while engaging patients and providers in the choice to release metadata. The purpose of this paper is to expose, in preparation for field tests, a working prototype through which we analyze and discuss our approach and the potential for blockchain in health IT and research.",
"title": ""
},
{
"docid": "8780b620d228498447c4f1a939fa5486",
"text": "A new mechanism is proposed for securing a blockchain applied to contracts management such as digital rights management. This mechanism includes a new consensus method using a credibility score and creates a hybrid blockchain by alternately using this new method and proof-of-stake. This makes it possible to prevent an attacker from monopolizing resources and to keep securing blockchains.",
"title": ""
}
] |
[
{
"docid": "91c0bd1c3faabc260277c407b7c6af59",
"text": "In this paper, we consider the Direct Perception approach for autonomous driving. Previous efforts in this field focused more on feature extraction of the road markings and other vehicles in the scene rather than on the autonomous driving algorithm and its performance under realistic assumptions. Our main contribution in this paper is introducing a new, more robust, and more realistic Direct Perception framework and corresponding algorithm for autonomous driving. First, we compare the top 3 Convolutional Neural Networks (CNN) models in the feature extraction competitions and test their performance for autonomous driving. The experimental results showed that GoogLeNet performs the best in this application. Subsequently, we propose a deep learning based algorithm for autonomous driving, and we refer to our algorithm as GoogLenet for Autonomous Driving (GLAD). Unlike previous efforts, GLAD makes no unrealistic assumptions about the autonomous vehicle or its surroundings, and it uses only five affordance parameters to control the vehicle as compared to the 14 parameters used by prior efforts. Our simulation results show that the proposed GLAD algorithm outperforms previous Direct Perception algorithms both on empty roads and while driving with other surrounding vehicles.",
"title": ""
},
{
"docid": "45a098c09a3803271f218fafd4d951cd",
"text": "Recent years have seen a tremendous increase in the demand for wireless bandwidth. To support this demand by innovative and resourceful use of technology, future communication systems will have to shift towards higher carrier frequencies. Due to the tight regulatory situation, frequencies in the atmospheric attenuation window around 300 GHz appear very attractive to facilitate an indoor, short range, ultra high speed THz communication system. In this paper, we investigate the influence of diffuse scattering at such high frequencies on the characteristics of the communication channel and its implications on the non-line-of-sight propagation path. The Kirchhoff approach is verified by an experimental study of diffuse scattering from randomly rough surfaces commonly encountered in indoor environments using a fiber-coupled terahertz time-domain spectroscopy system to perform angle- and frequency-dependent measurements. Furthermore, we integrate the Kirchhoff approach into a self-developed ray tracing algorithm to model the signal coverage of a typical office scenario.",
"title": ""
},
{
"docid": "a9595ea31ebfe07ac9d3f7fccf0d1c05",
"text": "The growing movement of biologically inspired design is driven in part by the need for sustainable development and in part by the recognition that nature could be a source of innovation. Biologically inspired design by definition entails cross-domain analogies from biological systems to problems in engineering and other design domains. However, the practice of biologically inspired design at present typically is ad hoc, with little systemization of either biological knowledge for the purposes of engineering design or the processes of transferring knowledge of biological designs to engineering problems. In this paper we present an intricate episode of biologically inspired engineering design that unfolded over an extended period of time. We then analyze our observations in terms of why, what, how, and when questions of analogy. This analysis contributes toward a content theory of creative analogies in the context of biologically inspired design.",
"title": ""
},
{
"docid": "96363ec5134359b5bf7c8b67f67971db",
"text": "Self adaptive video games are important for rehabilitation at home. Recent works have explored different techniques with satisfactory results but these have a poor use of game design concepts like Challenge and Conservative Handling of Failure. Dynamic Difficult Adjustment with Help (DDA-Help) approach is presented as a new point of view for self adaptive video games for rehabilitation. Procedural Content Generation (PCG) and automatic helpers are used to a different work on Conservative Handling of Failure and Challenge. An experience with amblyopic children showed the proposal effectiveness, increasing the visual acuity 2-3 level following the Snellen Vision Test and improving the performance curve during the game time.",
"title": ""
},
{
"docid": "6b19d08c9aa6ecfec27452a298353e1f",
"text": "This paper presents the recent development in automatic vision based technology. Use of this technology is increasing in agriculture and fruit industry. An automatic fruit quality inspection system for sorting and grading of tomato fruit and defected tomato detection discussed here. The main aim of this system is to replace the manual inspection system. This helps in speed up the process improve accuracy and efficiency and reduce time. This system collect image from camera which is placed on conveyor belt. Then image processing is done to get required features of fruits such as texture, color and size. Defected fruit is detected based on blob detection, color detection is done based on thresholding and size detection is based on binary image of tomato. Sorting is done based on color and grading is done based on size.",
"title": ""
},
{
"docid": "1d11060907f0a2c856fdda9152b107e5",
"text": "NOTICE This report was prepared by Columbia University in the course of performing work contracted for and sponsored by the New York State Energy Research and Development Authority (hereafter \" NYSERDA \"). The opinions expressed in this report do not necessarily reflect those of NYSERDA or the State of New York, and reference to any specific product, service, process, or method does not constitute an implied or expressed recommendation or endorsement of it. Further, NYSERDA, the State of New York, and the contractor make no warranties or representations, expressed or implied, as to the fitness for particular purpose or merchantability of any product, apparatus, or service, or the usefulness, completeness, or accuracy of any processes, methods, or other information contained, described, disclosed, or referred to in this report. NYSERDA, the State of New York, and the contractor make no representation that the use of any product, apparatus, process, method, or other information will not infringe privately owned rights and will assume no liability for any loss, injury, or damage resulting from, or occurring in connection with, the use of information contained, described, disclosed, or referred to in this report. iii ABSTRACT A research project was conducted to develop a concrete material that contains recycled waste glass and reprocessed carpet fibers and would be suitable for precast concrete wall panels. Post-consumer glass and used carpets constitute major solid waste components. Therefore their beneficial use will reduce the pressure on scarce landfills and the associated costs to taxpayers. By identifying and utilizing the special properties of these recycled materials, it is also possible to produce concrete elements with improved esthetic and thermal insulation properties. Using recycled waste glass as substitute for natural aggregate in commodity products such as precast basement wall panels brings only modest economic benefits at best, because sand, gravel, and crushed stone are fairly inexpensive. However, if the esthetic properties of the glass are properly exploited, such as in building façade elements with architectural finishes, the resulting concrete panels can compete very effectively with other building materials such as natural stone. As for recycled carpet fibers, the intent of this project was to exploit their thermal properties in order to increase the thermal insulation of concrete wall panels. In this regard, only partial success was achieved, because commercially reprocessed carpet fibers improve the thermal properties of concrete only marginally, as compared with other methods, such as the use of …",
"title": ""
},
{
"docid": "ba29af46fd410829c450eed631aa9280",
"text": "We address the problem of dense visual-semantic embedding that maps not only full sentences and whole images but also phrases within sentences and salient regions within images into a multimodal embedding space. Such dense embeddings, when applied to the task of image captioning, enable us to produce several region-oriented and detailed phrases rather than just an overview sentence to describe an image. Specifically, we present a hierarchical structured recurrent neural network (RNN), namely Hierarchical Multimodal LSTM (HM-LSTM). Compared with chain structured RNN, our proposed model exploits the hierarchical relations between sentences and phrases, and between whole images and image regions, to jointly establish their representations. Without the need of any supervised labels, our proposed model automatically learns the fine-grained correspondences between phrases and image regions towards the dense embedding. Extensive experiments on several datasets validate the efficacy of our method, which compares favorably with the state-of-the-art methods.",
"title": ""
},
{
"docid": "2c39f8c440a89f72db8814e633cb5c04",
"text": "There is increasing evidence that gardening provides substantial human health benefits. However, no formal statistical assessment has been conducted to test this assertion. Here, we present the results of a meta-analysis of research examining the effects of gardening, including horticultural therapy, on health. We performed a literature search to collect studies that compared health outcomes in control (before participating in gardening or non-gardeners) and treatment groups (after participating in gardening or gardeners) in January 2016. The mean difference in health outcomes between the two groups was calculated for each study, and then the weighted effect size determined both across all and sets of subgroup studies. Twenty-two case studies (published after 2001) were included in the meta-analysis, which comprised 76 comparisons between control and treatment groups. Most studies came from the United States, followed by Europe, Asia, and the Middle East. Studies reported a wide range of health outcomes, such as reductions in depression, anxiety, and body mass index, as well as increases in life satisfaction, quality of life, and sense of community. Meta-analytic estimates showed a significant positive effect of gardening on the health outcomes both for all and sets of subgroup studies, whilst effect sizes differed among eight subgroups. Although Egger's test indicated the presence of publication bias, significant positive effects of gardening remained after adjusting for this using trim and fill analysis. This study has provided robust evidence for the positive effects of gardening on health. A regular dose of gardening can improve public health.",
"title": ""
},
{
"docid": "b2f1ec4d8ac0a8447831df4287271c35",
"text": "We present a new, robust and computationally efficient Hierarchical Bayesian model for effective topic correlation modeling. We model the prior distribution of topics by a Generalized Dirichlet distribution (GD) rather than a Dirichlet distribution as in Latent Dirichlet Allocation (LDA). We define this model as GD-LDA. This framework captures correlations between topics, as in the Correlated Topic Model (CTM) and Pachinko Allocation Model (PAM), and is faster to infer than CTM and PAM. GD-LDA is effective to avoid over-fitting as the number of topics is increased. As a tree model, it accommodates the most important set of topics in the upper part of the tree based on their probability mass. Thus, GD-LDA provides the ability to choose significant topics effectively. To discover topic relationships, we perform hyper-parameter estimation based on Monte Carlo EM Estimation. We provide results using Empirical Likelihood(EL) in 4 public datasets from TREC and NIPS. Then, we present the performance of GD-LDA in ad hoc information retrieval (IR) based on MAP, P@10, and Discounted Gain. We discuss an empirical comparison of the fitting time. We demonstrate significant improvement over CTM, LDA, and PAM for EL estimation. For all the IR measures, GD-LDA shows higher performance than LDA, the dominant topic model in IR. All these improvements with a small increase in fitting time than LDA, as opposed to CTM and PAM.",
"title": ""
},
{
"docid": "5c05ad44ac2bf3fb26cea62d563435f8",
"text": "We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.",
"title": ""
},
{
"docid": "c4387f3c791acc54d0a0655221947c8b",
"text": "An emerging Internet application, IPTV, has the potential to flood Internet access and backbone ISPs with massive amounts of new traffic. Although many architectures are possible for IPTV video distribution, several mesh-pull P2P architectures have been successfully deployed on the Internet. In order to gain insights into mesh-pull P2P IPTV systems and the traffic loads they place on ISPs, we have undertaken an in-depth measurement study of one of the most popular IPTV systems, namely, PPLive. We have developed a dedicated PPLive crawler, which enables us to study the global characteristics of the mesh-pull PPLive system. We have also collected extensive packet traces for various different measurement scenarios, including both campus access networks and residential access networks. The measurement results obtained through these platforms bring important insights into P2P IPTV systems. Specifically, our results show the following. 1) P2P IPTV users have the similar viewing behaviors as regular TV users. 2) During its session, a peer exchanges video data dynamically with a large number of peers. 3) A small set of super peers act as video proxy and contribute significantly to video data uploading. 4) Users in the measured P2P IPTV system still suffer from long start-up delays and playback lags, ranging from several seconds to a couple of minutes. Insights obtained in this study will be valuable for the development and deployment of future P2P IPTV systems.",
"title": ""
},
{
"docid": "31c0dc8f0a839da9260bb9876f635702",
"text": "The application of a recently developed broadband beamformer to distinguish audio signals received from different directions is experimentally tested. The beamformer combines spatial and temporal subsampling using a nested array and multirate techniques which leads to the same region of support in the frequency domain for all subbands. This allows using the same beamformer for all subbands. The experimental set-up is presented and the recorded signals are analyzed. Results indicate that the proposed approach can be used to distinguish plane waves propagating with different direction of arrivals.",
"title": ""
},
{
"docid": "7f6b4a74f88d5ae1a4d21948aac2e260",
"text": "The PEP-R (psychoeducational profile revised) is an instrument that has been used in many countries to assess abilities and formulate treatment programs for children with autism and related developmental disorders. To the end to provide further information on the PEP-R's psychometric properties, a large sample (N = 137) of children presenting Autistic Disorder symptoms under the age of 12 years, including low-functioning individuals, was examined. Results yielded data of interest especially in terms of: Cronbach's alpha, interrater reliability, and validation with the Vineland Adaptive Behavior Scales. These findings help complete the instrument's statistical description and augment its usefulness, not only in designing treatment programs for these individuals, but also as an instrument for verifying the efficacy of intervention.",
"title": ""
},
{
"docid": "a81e4507632505b64f4839a1a23fa440",
"text": "Unity am e Deelopm nt w ith C# Alan Thorn In Pro Unity Game Development with C#, Alan Thorn, author of Learn Unity for 2D` Game Development and experienced game developer, takes you through the complete C# workflow for developing a cross-platform first person shooter in Unity. C# is the most popular programming language for experienced Unity developers, helping them get the most out of what Unity offers. If you’re already using C# with Unity and you want to take the next step in becoming an experienced, professional-level game developer, this is the book you need. Whether you are a student, an indie developer, or a seasoned game dev professional, you’ll find helpful C# examples of how to build intelligent enemies, create event systems and GUIs, develop save-game states, and lots more. You’ll understand and apply powerful programming concepts such as singleton classes, component based design, resolution independence, delegates, and event driven programming.",
"title": ""
},
{
"docid": "45f1964932b06f23b7b0556bfb4d2d24",
"text": "We present a real-time deep learning framework for video-based facial performance capture---the dense 3D tracking of an actor's face given a monocular video. Our pipeline begins with accurately capturing a subject using a high-end production facial capture pipeline based on multi-view stereo tracking and artist-enhanced animations. With 5--10 minutes of captured footage, we train a convolutional neural network to produce high-quality output, including self-occluded regions, from a monocular video sequence of that subject. Since this 3D facial performance capture is fully automated, our system can drastically reduce the amount of labor involved in the development of modern narrative-driven video games or films involving realistic digital doubles of actors and potentially hours of animated dialogue per character. We compare our results with several state-of-the-art monocular real-time facial capture techniques and demonstrate compelling animation inference in challenging areas such as eyes and lips.",
"title": ""
},
{
"docid": "66cde02bdf134923ca7ef3ec5c4f0fb8",
"text": "In this paper a method for holographic localization of passive UHF-RFID transponders is presented. It is shown how persons or devices that are equipped with a RFID reader and that are moving along a trajectory can be enabled to locate tagged objects reliably. The localization method is based on phase values sampled from a synthetic aperture by a RFID reader. The calculated holographic image is a spatial probability density function that reveals the actual RFID tag position. Experimental results are presented which show that the holographically measured positions are in good agreement with the real position of the tag. Additional simulations have been carried out to investigate the positioning accuracy of the proposed method depending on different distortion parameters and measuring conditions. The effect of antenna phase center displacement is briefly discussed and measurements are shown that quantify the influence on the phase measurement.",
"title": ""
},
{
"docid": "7eea90d85df0245eac0de51702efdbfd",
"text": "Mobile wellness application is widely used for assisting self-monitoring practice to monitor user's daily food intake and physical activities. Although these mostly free downloadable mobile application is easy to use and covers many aspects of wellness routines, there is no proof of prolonged use. Previous research reported that user will stop using the application and turned back into their old attitude of food consumptions. The purpose of this study is to examine the factors that influence the continuance intention to adopt a mobile phone wellness application. Review of Information System Continuance Model in the areas such as mobile health, mobile phone wellness application, social network and web 2.0, were done to examine the existing factors. From the critical review, two external factors namely Social Norm and Perceive Interactivity is believed to have the ability to explain the social perspective behavior and also the effect of perceiving interactivity towards prolong usage of wellness mobile application. These findings contribute to the development of the Mobile Phones Wellness Application Continuance Use theoretical model.",
"title": ""
},
{
"docid": "3cdca28361b7c2b9525b476e9073fc10",
"text": "The proliferation of MP3 players and the exploding amount of digital music content call for novel ways of music organization and retrieval to meet the ever-increasing demand for easy and effective information access. As almost every music piece is created to convey emotion, music organization and retrieval by emotion is a reasonable way of accessing music information. A good deal of effort has been made in the music information retrieval community to train a machine to automatically recognize the emotion of a music signal. A central issue of machine recognition of music emotion is the conceptualization of emotion and the associated emotion taxonomy. Different viewpoints on this issue have led to the proposal of different ways of emotion annotation, model training, and result visualization. This article provides a comprehensive review of the methods that have been proposed for music emotion recognition. Moreover, as music emotion recognition is still in its infancy, there are many open issues. We review the solutions that have been proposed to address these issues and conclude with suggestions for further research.",
"title": ""
},
{
"docid": "89e88b92adc44176f0112a66ec92515a",
"text": "Computer programming is being introduced in schools worldwide as part of a movement that promotes Computational Thinking (CT) skills among young learners. In general, learners use visual, block-based programming languages to acquire these skills, with Scratch being one of the most popular ones. Similar to professional developers, learners also copy and paste their code, resulting in duplication. In this paper we present the findings of correlating the assessment of the CT skills of learners with the presence of software clones in over 230,000 projects obtained from the Scratch platform. Specifically, we investigate i) if software cloning is an extended practice in Scratch projects, ii) if the presence of code cloning is independent of the programming mastery of learners, iii) if code cloning can be found more frequently in Scratch projects that require specific skills (as parallelism or logical thinking), and iv) if learners who have the skills to avoid software cloning really do so. The results show that i) software cloning can be commonly found in Scratch projects, that ii) it becomes more frequent as learners work on projects that require advanced skills, that iii) no CT dimension is to be found more related to the absence of software clones than others, and iv) that learners -even if they potentially know how to avoid cloning- still copy and paste frequently. The insights from this paper could be used by educators and learners to determine when it is pedagogically more effective to address software cloning, by educational programming platform developers to adapt their systems, and by learning assessment tools to provide better evaluations.",
"title": ""
},
{
"docid": "e8215231e8eb26241d5ac8ac5be4b782",
"text": "This research is on the use of a decision tree approach for predicting students‟ academic performance. Education is the platform on which a society improves the quality of its citizens. To improve on the quality of education, there is a need to be able to predict academic performance of the students. The IBM Statistical Package for Social Studies (SPSS) is used to apply the Chi-Square Automatic Interaction Detection (CHAID) in producing the decision tree structure. Factors such as the financial status of the students, motivation to learn, gender were discovered to affect the performance of the students. 66.8% of the students were predicted to have passed while 33.2% were predicted to fail. It is observed that much larger percentage of the students were likely to pass and there is also a higher likely of male students passing than female students.",
"title": ""
}
] |
scidocsrr
|
bc66ec751e7ce368347c821c4b761d56
|
Smart Cars on Smart Roads: Problems of Control
|
[
{
"docid": "436900539406faa9ff34c1af12b6348d",
"text": "The accomplishments to date on the development of automatic vehicle control (AVC) technology in the Program on Advanced Technology for the Highway (PATH) at the University of California, Berkeley, are summarized. The basic prqfiiples and assumptions underlying the PATH work are identified, ‘followed by explanations of the work on automating vehicle lateral (steering) and longitudinal (spacing and speed) control. For both lateral and longitudinal control, the modeling of plant dynamics is described first, followed by development of the additional subsystems needed (communications, reference/sensor systems) and the derivation of the control laws. Plans for testing on vehicles in both near and long term are then discussed.",
"title": ""
}
] |
[
{
"docid": "dfbe5a92d45d4081910b868d78a904d0",
"text": "Actuation is essential for artificial machines to interact with their surrounding environment and to accomplish the functions for which they are designed. Over the past few decades, there has been considerable progress in developing new actuation technologies. However, controlled motion still represents a considerable bottleneck for many applications and hampers the development of advanced robots, especially at small length scales. Nature has solved this problem using molecular motors that, through living cells, are assembled into multiscale ensembles with integrated control systems. These systems can scale force production from piconewtons up to kilonewtons. By leveraging the performance of living cells and tissues and directly interfacing them with artificial components, it should be possible to exploit the intricacy and metabolic efficiency of biological actuation within artificial machines. We provide a survey of important advances in this biohybrid actuation paradigm.",
"title": ""
},
{
"docid": "5f7aa812dc718de9508b083320c67e8a",
"text": "High power multi-level converters are deemed as the mainstay power conversion technology for renewable energy systems including the PV farm, energy storage system and electrical vehicle charge station. This paper is focused on the modeling and design of coupled and integrated magnetics in three-level DC/DC converter with multi-phase interleaved structure. The interleaved phase legs offer the benefit of output current ripple reduction, while inversed coupled inductors can suppress the circulating current between phase legs. To further reduce the magnetic volume, the four inductors in two-phase three-level DC/DC converter are integrated into one common structure, incorporating the negative coupling effects. Because of the nonlinearity of the inductor coupling, the equivalent circuit model is developed for the proposed interleaving structure to facilitate the design optimization of the integrated system. The model identifies the existence of multiple equivalent inductances during one switching cycle. A combination of them determines the inductor current ripple and dynamics of the system. By virtue of inverse coupling and means of controlling the coupling coefficients, one can minimize the current ripple and the unwanted circulating current. The fabricated prototype of the integrated coupled inductors is tested with a two-phase three-level DC/DC converter hardware, showing its good current ripple reduction performance as designed.",
"title": ""
},
{
"docid": "7b89e1ac1dcdcc1f3897e672fd934a40",
"text": "A 61-year-old female with long-standing constipation presented with increasing abdominal distention, pain, nausea and weight loss. She had been previously treated with intermittent fiber supplements and osmotic laxatives for chronic constipation. She did not use medications known to cause delayed bowel transit. Examination revealed a distended abdomen, hard stool in the rectum, and audible heart sounds throughout the abdomen. A CT scan showed severe colonic distention from stool (Fig. 1). She had no mechanical, infectious, metabolic, or endocrine-related etiology for constipation. After failing conservative management including laxative suppositories, enemas, manual disimpaction, methylnaltrexone and neostigmine, the patient underwent a colectomy with Hartmann pouch and terminal ileostomy. The removed colon measured 25.5 cm in largest diameter and weighed over 15 kg (Fig. 2). The histopathological examination demonstrated no neuronal degeneration, apoptosis or agangliosis to suggest Hirschprung’s disease or another intrinsic neuro-muscular disorder. Idiopathic megacolon is a relatively uncommon condition usually associated with slow-transit constipation. Although medical therapy is frequently ineffective, rectal laxatives, gentle enemas, and manual disimpaction can be attempted. Oral osmotic or secretory laxatives as well as unprepped lower endoscopy are relative contraindications as they may precipitate a perforation. Surgical therapy is often required as most cases are refractory to medical therapy.",
"title": ""
},
{
"docid": "8f7a27b88a29fd915e198962d8cd17ad",
"text": "For embedded high resolution successive approximation ADCs, it is necessary to determine the performance limitation of the CMOS process used for the design. This paper presents a modelling technique for major limitations, i.e. capacitor mismatch and non-linearity effects. The model is besed on Monte Carlo simulations applied to an analytical description of the ADC. Additional effects like charge injection and parasitic capacitance are included. The analytical basis covers different architectures with a fully binary weighted or series-split capacitor array. when comparing our analysis and measurement results to several conventional approaches, a significantly more realistic estimation of the attainable resolution is achieved. The presented results provide guidance in choosing process and circuit structure for the design of SAR ADCs. The model also enbles reliable capacitor sizing early in the design process, i.e. well before actual layout implementation.",
"title": ""
},
{
"docid": "c1d436c01088c2295b35a1a37e922bee",
"text": "Tourism is an important part of national economy. On the other hand it can also be a source of some negative externalities. These are mainly environmental externalities, resulting in increased pollution, aesthetic or architectural damages. High concentration of visitors may also lead to increased crime, or aggressiveness. These may have negative effects on quality of life of residents and negative experience of visitors. The paper deals with the influence of tourism on destination environment. It highlights the necessity of sustainable forms of tourism and activities to prevent negative implication of tourism, such as education activities and tourism monitoring. Key-words: Tourism, Mass Tourism, Development, Sustainability, Tourism Impact, Monitoring.",
"title": ""
},
{
"docid": "afe44962393bf0d250571f7cd7e82677",
"text": "Analytics is a field of research and practice that aims to reveal new patterns of information through the collection of large sets of data held in previously distinct sources. Growing interest in data and analytics in education, teaching, and learning raises the priority for increased, high-quality research into the models, methods, technologies, and impact of analytics. The challenges of applying analytics on academic and ethical reliability to control over data. The other challenge is that the educational landscape is extremely turbulent at present, and key challenge is the appropriate collection, protection and use of large data sets. This paper brings out challenges of multi various pertaining to the domain by offering a big data model for higher education system.",
"title": ""
},
{
"docid": "004da753abb6cb84f1ba34cfb4dacc67",
"text": "The aim of this study was to present a method for endodontic management of a maxillary first molar with unusual C-shaped morphology of the buccal root verified by cone-beam computed tomography (CBCT) images. This rare anatomical variation was confirmed using CBCT, and nonsurgical endodontic treatment was performed by meticulous evaluation of the pulpal floor. Posttreatment image revealed 3 independent canals in the buccal root obturated efficiently to the accepted lengths in all 3 canals. Our study describes a unique C-shaped variation of the root canal system in a maxillary first molar, involving the 3 buccal canals. In addition, our study highlights the usefulness of CBCT imaging for accurate diagnosis and management of this unusual canal morphology.",
"title": ""
},
{
"docid": "89432b112f153319d3a2a816c59782e3",
"text": "The Eyelink Toolbox software supports the measurement of eye movements. The toolbox provides an interface between a high-level interpreted language (MATLAB), a visual display programming toolbox (Psychophysics Toolbox), and a video-based eyetracker (Eyelink). The Eyelink Toolbox enables experimenters to measure eye movements while simultaneously executing the stimulus presentation routines provided by the Psychophysics Toolbox. Example programs are included with the toolbox distribution. Information on the Eyelink Toolbox can be found at http://psychtoolbox.org/.",
"title": ""
},
{
"docid": "2d6225b20cf13d2974ce78877642a2f7",
"text": "Low rank and sparse representation based methods, which make few specific assumptions about the background, have recently attracted wide attention in background modeling. With these methods, moving objects in the scene are modeled as pixel-wised sparse outliers. However, in many practical scenarios, the distributions of these moving parts are not truly pixel-wised sparse but structurally sparse. Meanwhile a robust analysis mechanism is required to handle background regions or foreground movements with varying scales. Based on these two observations, we first introduce a class of structured sparsity-inducing norms to model moving objects in videos. In our approach, we regard the observed sequence as being constituted of two terms, a low-rank matrix (background) and a structured sparse outlier matrix (foreground). Next, in virtue of adaptive parameters for dynamic videos, we propose a saliency measurement to dynamically estimate the support of the foreground. Experiments on challenging well known data sets demonstrate that the proposed approach outperforms the state-of-the-art methods and works effectively on a wide range of complex videos.",
"title": ""
},
{
"docid": "f53d8be1ec89cb8a323388496d45dcd0",
"text": "While Processing-in-Memory has been investigated for decades, it has not been embraced commercially. A number of emerging technologies have renewed interest in this topic. In particular, the emergence of 3D stacking and the imminent release of Micron's Hybrid Memory Cube device have made it more practical to move computation near memory. However, the literature is missing a detailed analysis of a killer application that can leverage a Near Data Computing (NDC) architecture. This paper focuses on in-memory MapReduce workloads that are commercially important and are especially suitable for NDC because of their embarrassing parallelism and largely localized memory accesses. The NDC architecture incorporates several simple processing cores on a separate, non-memory die in a 3D-stacked memory package; these cores can perform Map operations with efficient memory access and without hitting the bandwidth wall. This paper describes and evaluates a number of key elements necessary in realizing efficient NDC operation: (i) low-EPI cores, (ii) long daisy chains of memory devices, (iii) the dynamic activation of cores and SerDes links. Compared to a baseline that is heavily optimized for MapReduce execution, the NDC design yields up to 15X reduction in execution time and 18X reduction in system energy.",
"title": ""
},
{
"docid": "08c26880862b09e81acc1cd99904fded",
"text": "Efficient use of high speed hardware requires operating system components be customized to the application workload. Our general purpose operating systems are ill-suited for this task. We present EbbRT, a framework for constructing per-application library operating systems for cloud applications. The primary objective of EbbRT is to enable highperformance in a tractable and maintainable fashion. This paper describes the design and implementation of EbbRT, and evaluates its ability to improve the performance of common cloud applications. The evaluation of the EbbRT prototype demonstrates memcached, run within a VM, can outperform memcached run on an unvirtualized Linux. The prototype evaluation also demonstrates an 14% performance improvement of a V8 JavaScript engine benchmark, and a node.js webserver that achieves a 50% reduction in 99th percentile latency compared to it run on Linux.",
"title": ""
},
{
"docid": "52dbfe369d1875c402220692ef985bec",
"text": "Geographically annotated social media is extremely valuable for modern information retrieval. However, when researchers can only access publicly-visible data, one quickly finds that social media users rarely publish location information. In this work, we provide a method which can geolocate the overwhelming majority of active Twitter users, independent of their location sharing preferences, using only publicly-visible Twitter data. Our method infers an unknown user's location by examining their friend's locations. We frame the geotagging problem as an optimization over a social network with a total variation-based objective and provide a scalable and distributed algorithm for its solution. Furthermore, we show how a robust estimate of the geographic dispersion of each user's ego network can be used as a per-user accuracy measure which is effective at removing outlying errors. Leave-many-out evaluation shows that our method is able to infer location for 101, 846, 236 Twitter users at a median error of 6.38 km, allowing us to geotag over 80% of public tweets.",
"title": ""
},
{
"docid": "1967de1be0b095b4a59a5bb0fdc403c0",
"text": "As the popularity of content sharing websites has increased, they have become targets for spam, phishing and the distribution of malware. On YouTube, the facility for users to post comments can be used by spam campaigns to direct unsuspecting users to malicious third-party websites. In this paper, we demonstrate how such campaigns can be tracked over time using network motif profiling, i.e. by tracking counts of indicative network motifs. By considering all motifs of up to five nodes, we identify discriminating motifs that reveal two distinctly different spam campaign strategies, and present an evaluation that tracks two corresponding active campaigns.",
"title": ""
},
{
"docid": "33c449dc56b7f844e1582bd61d87a8a4",
"text": "We can determine whether two texts are paraphrases of each other by finding out the extent to which the texts are similar. The typical lexical matching technique works by matching the sequence of tokens between the texts to recognize paraphrases, and fails when different words are used to convey the same meaning. We can improve this simple method by combining lexical with syntactic or semantic representations of the input texts. The present work makes use of syntactical information in the texts and computes the similarity between them using word similarity measures based on WordNet and lexical databases. The texts are converted into a unified semantic structural model through which the semantic similarity of the texts is obtained. An approach is presented to assess the semantic similarity and the results of applying this approach is evaluated using the Microsoft Research Paraphrase (MSRP) Corpus.",
"title": ""
},
{
"docid": "5621d7df640dbe3d757ebb600486def9",
"text": "Dynamic spectrum access is the key to solving worldwide spectrum shortage. The open wireless medium subjects DSA systems to unauthorized spectrum use by illegitimate users. This paper presents SpecGuard, the first crowdsourced spectrum misuse detection framework for DSA systems. In SpecGuard, a transmitter is required to embed a spectrum permit into its physical-layer signals, which can be decoded and verified by ubiquitous mobile users. We propose three novel schemes for embedding and detecting a spectrum permit at the physical layer. Detailed theoretical analyses, MATLAB simulations, and USRP experiments confirm that our schemes can achieve correct, low-intrusive, and fast spectrum misuse detection.",
"title": ""
},
{
"docid": "bab246f8b15931501049862066fde77f",
"text": "The upcoming Internet of Things will introduce large sensor networks including devices with very different propagation characteristics and power consumption demands. 5G aims to fulfill these requirements by demanding a battery lifetime of at least 10 years. To integrate smart devices that are located in challenging propagation conditions, IoT communication technologies furthermore have to support very deep coverage. NB-IoT and eMTC are designed to meet these requirements and thus paving the way to 5G. With the power saving options extended Discontinuous Reception and Power Saving Mode as well as the usage of large numbers of repetitions, NB-IoT and eMTC introduce new techniques to meet the 5G IoT requirements. In this paper, the performance of NB-IoT and eMTC is evaluated. Therefore, data rate, power consumption, latency and spectral efficiency are examined in different coverage conditions. Although both technologies use the same power saving techniques as well as repetitions to extend the communication range, the analysis reveals a different performance in the context of data size, rate and coupling loss. While eMTC comes with a 4% better battery lifetime than NB-IoT when considering 144 dB coupling loss, NB-IoT battery lifetime raises to 18% better performance in 164 dB coupling loss scenarios. The overall analysis shows that in coverage areas with a coupling loss of 155 dB or less, eMTC performs better, but requires much more bandwidth. Taking the spectral efficiency into account, NB-IoT is in all evaluated scenarios the better choice and more suitable for future networks with massive numbers of devices.",
"title": ""
},
{
"docid": "ac82ad870c787e759d08b1a80dc51bd2",
"text": "We consider supervised learning in the presence of very many irrelevant features, and study two different regularization methods for preventing overfitting. Focusing on logistic regression, we show that using L1 regularization of the parameters, the sample complexity (i.e., the number of training examples required to learn \"well,\") grows only logarithmically in the number of irrelevant features. This logarithmic rate matches the best known bounds for feature selection, and indicates that L1 regularized logistic regression can be effective even if there are exponentially many irrelevant features as there are training examples. We also give a lower-bound showing that any rotationally invariant algorithm---including logistic regression with L2 regularization, SVMs, and neural networks trained by backpropagation---has a worst case sample complexity that grows at least linearly in the number of irrelevant features.",
"title": ""
},
{
"docid": "87c09def017d5e32f06a887e5d06b0ff",
"text": "A blade element momentum theory propeller model is coupled with a commercial RANS solver. This allows the fully appended self propulsion of the autonomous underwater vehicle Autosub 3 to be considered. The quasi-steady propeller model has been developed to allow for circumferential and radial variations in axial and tangential inflow. The non-uniform inflow is due to control surface deflections and the bow-down pitch of the vehicle in cruise condition. The influence of propeller blade Reynolds number is included through the use of appropriate sectional lift and drag coefficients. Simulations have been performed over the vehicles operational speed range (Re = 6.8× 10 to 13.5× 10). A workstation is used for the calculations with mesh sizes up to 2x10 elements. Grid uncertainty is calculated to be 3.07% for the wake fraction. The initial comparisons with in service data show that the coupled RANS-BEMT simulation under predicts the drag of the vehicle and consequently the required propeller rpm. However, when an appropriate correction is made for the effect on resistance of various protruding sensors the predicted propulsor rpm matches well with that of in-service rpm measurements for vessel speeds (1m/s 2m/s). The developed analysis captures the important influence of the propeller blade and hull Reynolds number on overall system efficiency.",
"title": ""
},
{
"docid": "57bec1f2ee904f953463e4e41e2cb688",
"text": "Graph embedding is an important branch in Data Mining and Machine Learning, and most of recent studies are focused on preserving the hierarchical structure with less dimensions. One of such models, called Poincare Embedding, achieves the goal by using Poincare Ball model to embed hierarchical structure in hyperbolic space instead of traditionally used Euclidean space. However, Poincare Embedding suffers from two major problems: (1) performance drops as depth of nodes increases since nodes tend to lay at the boundary; (2) the embedding model is constrained with pre-constructed structures and cannot be easily extended. In this paper, we first raise several techniques to overcome the problem of low performance for deep nodes, such as using partial structure, adding regularization, and exploring sibling relations in the structure. Then we also extend the Poincare Embedding model by extracting information from text corpus and propose a joint embedding model with Poincare Embedding and Word2vec.",
"title": ""
},
{
"docid": "6228498fed5b26c0def578251aa1c749",
"text": "Observation-Level Interaction (OLI) is a sensemaking technique relying upon the interactive semantic exploration of data. By manipulating data items within a visualization, users provide feedback to an underlying mathematical model that projects multidimensional data into a meaningful two-dimensional representation. In this work, we propose, implement, and evaluate an OLI model which explicitly defines clusters within this data projection. These clusters provide targets against which data values can be manipulated. The result is a cooperative framework in which the layout of the data affects the clusters, while user-driven interactions with the clusters affect the layout of the data points. Additionally, this model addresses the OLI \"with respect to what\" problem by providing a clear set of clusters against which interaction targets are judged and computed.",
"title": ""
}
] |
scidocsrr
|
111e970b027530331ee4320b8ecbc49f
|
Selection of K in K-means clustering
|
[
{
"docid": "ed9e22167d3e9e695f67e208b891b698",
"text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.",
"title": ""
},
{
"docid": "3e44a5c966afbeabff11b54bafcefdce",
"text": "In this paper, we aim to compare empirically four initialization methods for the K-Means algorithm: random, Forgy, MacQueen and Kaufman. Although this algorithm is known for its robustness, it is widely reported in literature that its performance depends upon two key points: initial clustering and instance order. We conduct a series of experiments to draw up (in terms of mean, maximum, minimum and standard deviation) the probability distribution of the square-error values of the nal clusters returned by the K-Means algorithm independently on any initial clustering and on any instance order when each of the four initialization methods is used. The results of our experiments illustrate that the random and the Kauf-man initialization methods outperform the rest of the compared methods as they make the K-Means more eeective and more independent on initial clustering and on instance order. In addition, we compare the convergence speed of the K-Means algorithm when using each of the four initialization methods. Our results suggest that the Kaufman initialization method induces to the K-Means algorithm a more desirable behaviour with respect to the convergence speed than the random initial-ization method.",
"title": ""
},
{
"docid": "651d048aaae1ce1608d3d9f0f09d4b9b",
"text": "We investigate here the behavior of the standard k-means clustering algorithm and several alternatives to it: the k-harmonic means algorithm due to Zhang and colleagues, fuzzy k-means, Gaussian expectation-maximization, and two new variants of k-harmonic means. Our aim is to find which aspects of these algorithms contribute to finding good clusterings, as opposed to converging to a low-quality local optimum. We describe each algorithm in a unified framework that introduces separate cluster membership and data weight functions. We then show that the algorithms do behave very differently from each other on simple low-dimensional synthetic datasets and image segmentation tasks, and that the k-harmonic means method is superior. Having a soft membership function is essential for finding high-quality clusterings, but having a non-constant data weight function is useful also.",
"title": ""
}
] |
[
{
"docid": "1e042aca14a3412a4772761109cb6c10",
"text": "With increasing quality requirements for multimedia communications, audio codecs must maintain both high quality and low delay. Typically, audio codecs offer either low delay or high quality, but rarely both. We propose a codec that simultaneously addresses both these requirements, with a delay of only 8.7 ms at 44.1 kHz. It uses gain-shape algebraic vector quantization in the frequency domain with time-domain pitch prediction. We demonstrate that the proposed codec operating at 48 kb/s and 64 kb/s out-performs both G.722.1C and MP3 and has quality comparable to AAC-LD, despite having less than one fourth of the algorithmic delay of these codecs.",
"title": ""
},
{
"docid": "0dc3c4e628053e8f7c32c0074a2d1a59",
"text": "Understanding inter-character relationships is fundamental for understanding character intentions and goals in a narrative. This paper addresses unsupervised modeling of relationships between characters. We model relationships as dynamic phenomenon, represented as evolving sequences of latent states empirically learned from data. Unlike most previous work our approach is completely unsupervised. This enables data-driven inference of inter-character relationship types beyond simple sentiment polarities, by incorporating lexical and semantic representations, and leveraging large quantities of raw text. We present three models based on rich sets of linguistic features that capture various cues about relationships. We compare these models with existing techniques and also demonstrate that relationship categories learned by our model are semantically coherent.",
"title": ""
},
{
"docid": "9c7d3937b25c6be6480d52dec14bb4d5",
"text": "Worldwide the pros and cons of games and social behaviour are discussed. In Western countries the discussion is focussing on violent game and media content; in Japan on intensive game usage and the impact on the intellectual development of children. A lot is already discussed on the harmful and negative effects of entertainment technology on human behaviour, therefore we decided to focus primarily on the positive effects. Based on an online document search we could find and select 393 online available publications according the following categories: meta review (N=34), meta analysis (N=13), literature review (N=38), literature survey (N=36), empirical study (N=91), survey study (N=44), design study (N=91), any other document (N=46). In this paper a first preliminary overview over positive effects of entertainment technology on human behaviour is presented and discussed. The drawn recommendations can support developers and designers in entertainment industry.",
"title": ""
},
{
"docid": "9a86609ecefc5780a49ca638be4de64c",
"text": "In this paper, we propose an end-to-end capsule network for pixel level localization of actors and actions present in a video. The localization is performed based on a natural language query through which an actor and action are specified. We propose to encode both the video as well as textual input in the form of capsules, which provide more effective representation in comparison with standard convolution based features. We introduce a novel capsule based attention mechanism for fusion of video and text capsules for text selected video segmentation. The attention mechanism is performed via joint EM routing over video and text capsules for text selected actor and action localization. The existing works on actor-action localization are mainly focused on localization in a single frame instead of the full video. Different from existing works, we propose to perform the localization on all frames of the video. To validate the potential of the proposed network for actor and action localization on all the frames of a video, we extend an existing actor-action dataset (A2D) with annotations for all the frames. The experimental evaluation demonstrates the effectiveness of the proposed capsule network for text selective actor and action localization in videos, and it also improves upon the performance of the existing state-of-the art works on single frame-based localization. Figure 1: Overview of the proposed approach. For a given video, we want to localize the actor and action which are described by an input textual query. Capsules are extracted from both the video and the textual query, and a joint EM routing algorithm creates high level capsules, which are further used for localization of selected actors and actions.",
"title": ""
},
{
"docid": "5208762a8142de095c21824b0a395b52",
"text": "Battery storage (BS) systems are static energy conversion units that convert the chemical energy directly into electrical energy. They exist in our cars, laptops, electronic appliances, micro electricity generation systems and in many other mobile to stationary power supply systems. The economic advantages, partial sustainability and the portability of these units pose promising substitutes for backup power systems for hybrid vehicles and hybrid electricity generation systems. Dynamic behaviour of these systems can be analysed by using mathematical modeling and simulation software programs. Though, there have been many mathematical models presented in the literature and proved to be successful, dynamic simulation of these systems are still very exhaustive and time consuming as they do not behave according to specific mathematical models or functions. The charging and discharging of battery functions are a combination of exponential and non-linear nature. The aim of this research paper is to present a suitable convenient, dynamic battery model that can be used to model a general BS system. Proposed model is a new modified dynamic Lead-Acid battery model considering the effect of temperature and cyclic charging and discharging effects. Simulink has been used to study the characteristics of the system and the proposed system has proved to be very successful as the simulation results have been very good. Keywords—Simulink Matlab, Battery Model, Simulation, BS Lead-Acid, Dynamic modeling, Temperature effect, Hybrid Vehicles.",
"title": ""
},
{
"docid": "e1f531740891d47387a2fc2ef4f71c46",
"text": "Multi-dimensional arrays, or tensors, are increasingly found in fields such as signal processing and recommender systems. Real-world tensors can be enormous in size and often very sparse. There is a need for efficient, high-performance tools capable of processing the massive sparse tensors of today and the future. This paper introduces SPLATT, a C library with shared-memory parallelism for three-mode tensors. SPLATT contains algorithmic improvements over competing state of the art tools for sparse tensor factorization. SPLATT has a fast, parallel method of multiplying a matricide tensor by a Khatri-Rao product, which is a key kernel in tensor factorization methods. SPLATT uses a novel data structure that exploits the sparsity patterns of tensors. This data structure has a small memory footprint similar to competing methods and allows for the computational improvements featured in our work. We also present a method of finding cache-friendly reordering and utilizing them with a novel form of cache tiling. To our knowledge, this is the first work to investigate reordering and cache tiling in this context. SPLATT averages almost 30x speedup compared to our baseline when using 16 threads and reaches over 80x speedup on NELL-2.",
"title": ""
},
{
"docid": "6c4944ebd75404a0f3b2474e346677f1",
"text": "Wireless industry nowadays is facing two major challenges: 1) how to support the vertical industry applications so that to expand the wireless industry market and 2) how to further enhance device capability and user experience. In this paper, we propose a technology framework to address these challenges. The proposed technology framework is based on end-to-end vertical and horizontal slicing, where vertical slicing enables vertical industry and services and horizontal slicing improves system capacity and user experience. The technology development on vertical slicing has already started in late 4G and early 5G and is mostly focused on slicing the core network. We envision this trend to continue with the development of vertical slicing in the radio access network and the air interface. Moving beyond vertical slicing, we propose to horizontally slice the computation and communication resources to form virtual computation platforms for solving the network capacity scaling problem and enhancing device capability and user experience. In this paper, we explain the concept of vertical and horizontal slicing and illustrate the slicing techniques in the air interface, the radio access network, the core network and the computation platform. This paper aims to initiate the discussion on the long-range technology roadmap and spur development on the solutions for E2E network slicing in 5G and beyond.",
"title": ""
},
{
"docid": "bc6fc806fefc8298b8969f7a5f5b9e8b",
"text": "Short text is usually expressed in refined slightly, insufficient information, which makes text classification difficult. But we can try to introduce some information from the existing knowledge base to strengthen the performance of short text classification. Wikipedia [2,13,15] is now the largest human-edited knowledge base of high quality. It would benefit to short text classification if we can make full use of Wikipedia information in short text classification. This paper presents a new concept based [22] on Wikipedia short text representation method, by identifying the concept of Wikipedia mentioned in short text, and then expand the concept of wiki correlation and short text messages to the feature vector representation.",
"title": ""
},
{
"docid": "50e7ca7394db235909d657495bb11de2",
"text": "Radar is an attractive technology for long term monitoring of human movement as it operates remotely, can be placed behind walls and is able to monitor a large area depending on its operating parameters. A radar signal reflected off a moving person carries rich information on his or her activity pattern in the form of a set of Doppler frequency signatures produced by the specific combination of limbs and torso movements. To enable classification and efficient storage and transmission of movement data, unique parameters have to be extracted from the Doppler signatures. Two of the most important human movement parameters for activity identification and classification are the velocity profile and the fundamental cadence frequency of the movement pattern. However, the complicated pattern of limbs and torso movement worsened by multipath propagation in indoor environment poses a challenge for the extraction of these human movement parameters. In this paper, three new approaches for the estimation of human walking velocity profile in indoor environment are proposed and discussed. The first two methods are based on spectrogram estimates whereas the third method is based on phase difference computation. In addition, a method to estimate the fundamental cadence frequency of the gait is suggested and discussed. The accuracy of the methods are evaluated and compared in an indoor experiment using a flexible and low-cost software defined radar platform. The results obtained indicate that the velocity estimation methods are able to estimate the velocity profile of the person’s translational motion with an error of less than 10%. The results also showed that the fundamental cadence is estimated with an error of 7%.",
"title": ""
},
{
"docid": "90d5aca626d61806c2af3cc551b28c90",
"text": "This paper presents two novel approaches to increase performance bounds of image steganography under the criteria of minimizing distortion. First, in order to efficiently use the images’ capacities, we propose using parallel images in the embedding stage. The result is then used to prove sub-optimality of the message distribution technique used by all cost based algorithms including HUGO, S-UNIWARD, and HILL. Second, a new distribution approach is presented to further improve the security of these algorithms. Experiments show that this distribution method avoids embedding in smooth regions and thus achieves a better performance, measured by state-of-the-art steganalysis, when compared with the current used distribution.",
"title": ""
},
{
"docid": "a70475e2799b0a439e63382abcd90bd4",
"text": "Nonabelian group-based public key cryptography is a relatively new and exciting research field. Rapidly increasing computing power and the futurity quantum computers [52] that have since led to, the security of public key cryptosystems in use today, will be questioned. Research in new cryptographic methods is also imperative. Research on nonabelian group-based cryptosystems will become one of contemporary research priorities. Many innovative ideas for them have been presented for the past two decades, and many corresponding problems remain to be resolved. The purpose of this paper, is to present a survey of the nonabelian group-based public key cryptosystems with the corresponding problems of security. We hope that readers can grasp the trend that is examined in this study.",
"title": ""
},
{
"docid": "1a59bf4467e73a6cae050e5670dbf4fa",
"text": "BACKGROUND\nNivolumab combined with ipilimumab resulted in longer progression-free survival and a higher objective response rate than ipilimumab alone in a phase 3 trial involving patients with advanced melanoma. We now report 3-year overall survival outcomes in this trial.\n\n\nMETHODS\nWe randomly assigned, in a 1:1:1 ratio, patients with previously untreated advanced melanoma to receive nivolumab at a dose of 1 mg per kilogram of body weight plus ipilimumab at a dose of 3 mg per kilogram every 3 weeks for four doses, followed by nivolumab at a dose of 3 mg per kilogram every 2 weeks; nivolumab at a dose of 3 mg per kilogram every 2 weeks plus placebo; or ipilimumab at a dose of 3 mg per kilogram every 3 weeks for four doses plus placebo, until progression, the occurrence of unacceptable toxic effects, or withdrawal of consent. Randomization was stratified according to programmed death ligand 1 (PD-L1) status, BRAF mutation status, and metastasis stage. The two primary end points were progression-free survival and overall survival in the nivolumab-plus-ipilimumab group and in the nivolumab group versus the ipilimumab group.\n\n\nRESULTS\nAt a minimum follow-up of 36 months, the median overall survival had not been reached in the nivolumab-plus-ipilimumab group and was 37.6 months in the nivolumab group, as compared with 19.9 months in the ipilimumab group (hazard ratio for death with nivolumab plus ipilimumab vs. ipilimumab, 0.55 [P<0.001]; hazard ratio for death with nivolumab vs. ipilimumab, 0.65 [P<0.001]). The overall survival rate at 3 years was 58% in the nivolumab-plus-ipilimumab group and 52% in the nivolumab group, as compared with 34% in the ipilimumab group. The safety profile was unchanged from the initial report. Treatment-related adverse events of grade 3 or 4 occurred in 59% of the patients in the nivolumab-plus-ipilimumab group, in 21% of those in the nivolumab group, and in 28% of those in the ipilimumab group.\n\n\nCONCLUSIONS\nAmong patients with advanced melanoma, significantly longer overall survival occurred with combination therapy with nivolumab plus ipilimumab or with nivolumab alone than with ipilimumab alone. (Funded by Bristol-Myers Squibb and others; CheckMate 067 ClinicalTrials.gov number, NCT01844505 .).",
"title": ""
},
{
"docid": "3b2ddbef9ee3e5db60e2b315064a02c3",
"text": "It is indispensable to understand and analyze industry structure and company relations from documents, such as news articles, in order to make management decisions concerning supply chains, selection of business partners, etc. Analysis of company relations from news articles requires both a macro-viewpoint, e.g., overviewing competitor groups, and a micro-viewpoint, e.g., grasping the descriptions of the relationship between a specific pair of companies collaborating. Research has typically focused on only the macro-viewpoint, classifying each company pair into a specific relation type. In this paper, to support company relation analysis from both macro-and micro-viewpoints, we propose a method that extracts collaborative/competitive company pairs from individual sentences in Web news articles by applying a Markov logic network and gather extracted relations from each company pair. By this method, we are able not only to perform clustering of company pairs into competitor groups based on the dominant relations of each pair (macro-viewpoint) but also to know how each company pair is described in individual sentences (micro-viewpoint). We empirically confirmed that the proposed method is feasible through analysis of 4,661 Web news articles on the semiconductor and related industries.",
"title": ""
},
{
"docid": "d0bb31d79a7c93f67f7d11d6abee50cb",
"text": "The chapter introduces the book explaining its purposes and significance, framing it within the current literature related to Location-Based Mobile Games. It further clarifies the methodology of the study on the ground of this work and summarizes the content of each chapter.",
"title": ""
},
{
"docid": "b73c1b51f0f74c3b27b8d3d58c14e600",
"text": "Water balance of the terrestrial isopod, Armadillidium vulgare, was investigated during conglobation (rolling-up behavior). Water loss and metabolic rates were measured at 18 +/- 1 degrees C in dry air using flow-through respirometry. Water-loss rates decreased 34.8% when specimens were in their conglobated form, while CO2 release decreased by 37.1%. Water loss was also measured gravimetrically at humidities ranging from 6 to 75 %RH. Conglobation was associated with a decrease in water-loss rates up to 53 %RH, but no significant differences were observed at higher humidities. Our findings suggest that conglobation behavior may help to conserve water, in addition to its demonstrated role in protection from predation.",
"title": ""
},
{
"docid": "1d04def7d22e9f915d825551aa10b077",
"text": "Recent advances in wireless networking technologies and the growing success of mobile computing devices, such as laptop computers, third generation mobile phones, personal digital assistants, watches and the like, are enabling new classes of applications that present challenging problems to designers. Mobile devices face temporary loss of network connectivity when they move; they are likely to have scarce resources, such as low battery power, slow CPU speed and little memory; they are required to react to frequent and unannounced changes in the environment, such as high variability of network bandwidth, and in the remote resources availability, and so on. To support designers building mobile applications, research in the field of middleware systems has proliferated. Middleware aims at facilitating communication and coordination of distributed components, concealing difficulties raised by mobility from application engineers as much as possible. In this survey, we examine characteristics of mobile distributed systems and distinguish them from their fixed counterpart. We introduce a framework and a categorization of the various middleware systems designed to support mobility, and we present a detailed and comparative review of the major results reached in this field. An analysis of current trends inside the mobile middleware community and a discussion of further directions of research conclude the survey.",
"title": ""
},
{
"docid": "27ea4d25d672b04632c53c711afe0ceb",
"text": "Many advancements have been taking place in unmanned aerial vehicle (UAV) technology lately. This is leading towards the design and development of UAVs with various sizes that possess increased on-board processing, memory, storage, and communication capabilities. Consequently, UAVs are increasingly being used in a vast amount of commercial, military, civilian, agricultural, and environmental applications. However, to take full advantages of their services, these UAVs must be able to communicate efficiently with each other using UAV-to-UAV (U2U) communication and with existing networking infrastructures using UAV-to-Infrastructure (U2I) communication. In this paper, we identify the functions, services and requirements of UAV-based communication systems. We also present networking architectures, underlying frameworks, and data traffic requirements in these systems as well as outline the various protocols and technologies that can be used at different UAV communication links and networking layers. In addition, the paper discusses middleware layer services that can be provided in order to provide seamless communication and support heterogeneous network interfaces. Furthermore, we discuss a new important area of research, which involves the use of UAVs in collecting data from wireless sensor networks (WSNs). We discuss and evaluate several approaches that can be used to collect data from different types of WSNs including topologies such as linear sensor networks (LSNs), geometric and clustered WSNs. We outline the benefits of using UAVs for this function, which include significantly decreasing sensor node energy consumption, lower interference, and offers considerably increased flexibility in controlling the density of the deployed nodes since the need for the multihop approach for sensor-tosink communication is either eliminated or significantly reduced. Consequently, UAVs can provide good connectivity to WSN clusters.",
"title": ""
},
{
"docid": "c9398b3dad75ba85becbec379a65a219",
"text": "Passwords are still the predominant mode of authentication in contemporary information systems, despite a long list of problems associated with their insecurity. Their primary advantage is the ease of use and the price of implementation, compared to other systems of authentication (e.g. two-factor, biometry, …). In this paper we present an analysis of passwords used by students of one of universities and their resilience against brute force and dictionary attacks. The passwords were obtained from a university's computing center in plaintext format for a very long period - first passwords were created before 1980. The results show that early passwords are extremely easy to crack: the percentage of cracked passwords is above 95 % for those created before 2006. Surprisingly, more than 40 % of passwords created in 2014 were easily broken within a few hours. The results show that users - in our case students, despite positive trends, still choose easy to break passwords. This work contributes to loud warnings that a shift from traditional password schemes to more elaborate systems is needed.",
"title": ""
},
{
"docid": "ae8f26a5ab75e11f86d295c2beaa2189",
"text": "BACKGROUND\nThe neonatal and pediatric antimicrobial point prevalence survey (PPS) of the Antibiotic Resistance and Prescribing in European Children project (http://www.arpecproject.eu/) aims to standardize a method for surveillance of antimicrobial use in children and neonates admitted to the hospital within Europe. This article describes the audit criteria used and reports overall country-specific proportions of antimicrobial use. An analytical review presents methodologies on antimicrobial use.\n\n\nMETHODS\nA 1-day PPS on antimicrobial use in hospitalized children was organized in September 2011, using a previously validated and standardized method. The survey included all inpatient pediatric and neonatal beds and identified all children receiving an antimicrobial treatment on the day of survey. Mandatory data were age, gender, (birth) weight, underlying diagnosis, antimicrobial agent, dose and indication for treatment. Data were entered through a web-based system for data-entry and reporting, based on the WebPPS program developed for the European Surveillance of Antimicrobial Consumption project.\n\n\nRESULTS\nThere were 2760 and 1565 pediatric versus 1154 and 589 neonatal inpatients reported among 50 European (n = 14 countries) and 23 non-European hospitals (n = 9 countries), respectively. Overall, antibiotic pediatric and neonatal use was significantly higher in non-European (43.8%; 95% confidence interval [CI]: 41.3-46.3% and 39.4%; 95% CI: 35.5-43.4%) compared with that in European hospitals (35.4; 95% CI: 33.6-37.2% and 21.8%; 95% CI: 19.4-24.2%). Proportions of antibiotic use were highest in hematology/oncology wards (61.3%; 95% CI: 56.2-66.4%) and pediatric intensive care units (55.8%; 95% CI: 50.3-61.3%).\n\n\nCONCLUSIONS\nAn Antibiotic Resistance and Prescribing in European Children standardized web-based method for a 1-day PPS was successfully developed and conducted in 73 hospitals worldwide. It offers a simple, feasible and sustainable way of data collection that can be used globally.",
"title": ""
}
] |
scidocsrr
|
5d0c7a76bcf5ff7fb4c681a1bd5496d1
|
GPS Spoofing Detection Based on Decision Fusion with a K-out-of-N Rule
|
[
{
"docid": "a9bc9d9098fe852d13c3355ab6f81edb",
"text": "The area under the ROC curve, or the equivalent Gini index, is a widely used measure of performance of supervised classification rules. It has the attractive property that it side-steps the need to specify the costs of the different kinds of misclassification. However, the simple form is only applicable to the case of two classes. We extend the definition to the case of more than two classes by averaging pairwise comparisons. This measure reduces to the standard form in the two class case. We compare its properties with the standard measure of proportion correct and an alternative definition of proportion correct based on pairwise comparison of classes for a simple artificial case and illustrate its application on eight data sets. On the data sets we examined, the measures produced similar, but not identical results, reflecting the different aspects of performance that they were measuring. Like the area under the ROC curve, the measure we propose is useful in those many situations where it is impossible to give costs for the different kinds of misclassification.",
"title": ""
},
{
"docid": "531d387a14eefa6a8c45ad64039f29be",
"text": "This paper presents an S-Transform based probabilistic neural network (PNN) classifier for recognition of power quality (PQ) disturbances. The proposed method requires less number of features as compared to wavelet based approach for the identification of PQ events. The features extracted through the S-Transform are trained by a PNN for automatic classification of the PQ events. Since the proposed methodology can reduce the features of the disturbance signal to a great extent without losing its original property, less memory space and learning PNN time are required for classification. Eleven types of disturbances are considered for the classification problem. The simulation results reveal that the combination of S-Transform and PNN can effectively detect and classify different PQ events. The classification performance of PNN is compared with a feedforward multilayer (FFML) neural network (NN) and learning vector quantization (LVQ) NN. It is found that the classification performance of PNN is better than both FFML and LVQ.",
"title": ""
}
] |
[
{
"docid": "f905016b422d9c16ac11b85182f196c7",
"text": "The random forest (RF) classifier is an ensemble classifier derived from decision tree idea. However the parallel operations of several classifiers along with use of randomness in sample and feature selection has made the random forest a very strong classifier with accuracy rates comparable to most of currently used classifiers. Although, the use of random forest on handwritten digits has been considered before, in this paper RF is applied in recognizing Persian handwritten characters. Trying to improve the recognition rate, we suggest converting the structure of decision trees from a binary tree to a multi branch tree. The improvement gained this way proves the applicability of the idea.",
"title": ""
},
{
"docid": "fb5e9a15429c9361dbe577ca8db18e46",
"text": "Most experiments are done in laboratories. However, there is also a theory and practice of field experimentation. It has had its successes and failures over the past four decades but is now increasingly used for answering causal questions. This is true for both randomized and-perhaps more surprisingly-nonrandomized experiments. In this article, we review the history of the use of field experiments, discuss some of the reasons for their current renaissance, and focus the bulk of the article on the particular technical developments that have made this renaissance possible across four kinds of widely used experimental and quasi-experimental designs-randomized experiments, regression discontinuity designs in which those units above a cutoff get one treatment and those below get another, short interrupted time series, and nonrandomized experiments using a nonequivalent comparison group. We focus this review on some of the key technical developments addressing problems that previously stymied accurate effect estimation, the solution of which opens the way for accurate estimation of effects under the often difficult conditions of field implementation-the estimation of treatment effects under partial treatment implementation, the prevention and analysis of attrition, analysis of nested designs, new analytic developments for both regression discontinuity designs and short interrupted time series, and propensity score analysis. We also cover the key empirical evidence showing the conditions under which some nonrandomized experiments may be able to approximate results from randomized experiments.",
"title": ""
},
{
"docid": "9efa0ff0743edacc4e9421ed45441fde",
"text": "Perception of universal facial beauty has long been debated amongst psychologists and anthropologists. In this paper, we perform experiments to evaluate the extent of universal beauty by surveying a number of diverse human referees to grade a collection of female facial images. Results obtained show that there exists a strong central tendency in the human grades, thus exhibiting agreement on beauty assessment. We then trained an automated classifier using the average human grades as the ground truth and used it to classify an independent test set of facial images. The high accuracy achieved proves that this classifier can be used as a general, automated tool for objective classification of female facial beauty. Potential applications exist in the entertainment industry, cosmetic industry, virtual media, and plastic surgery.",
"title": ""
},
{
"docid": "361bc333d47d2e1d4b6a6e8654d2659d",
"text": "Both the industrial organization theory (IO) and the resource-based view of the firm (RBV) have advanced our understanding of the antecedents of competitive advantage but few have attempted to verify the outcome variables of competitive advantage and the persistence of such outcome variables. Here by integrating both IO and RBV perspectives in the analysis of competitive advantage at the firm level, our study clarifies a conceptual distinction between two types of competitive advantage: temporary competitive advantage and sustainable competitive advantage, and explores how firms transform temporary competitive advantage into sustainable competitive advantage. Testing of the developed hypotheses, based on a survey of 165 firms from Taiwan’s information and communication technology industry, suggests that firms with a stronger market position can only attain a better outcome of temporary competitive advantage whereas firms possessing a superior position in technological resources or capabilities can attain a better outcome of sustainable competitive advantage. More importantly, firms can leverage a temporary competitive advantage as an outcome of market position, to improving their technological resource and capability position, which in turn can enhance their sustainable competitive advantage.",
"title": ""
},
{
"docid": "0b0e935d88fb5eb6b964e7e0853a7f2f",
"text": "Skill prerequisite information is useful for tutoring systems that assess student knowledge or that provide remediation. These systems often encode prerequisites as graphs designed by subject matter experts in a costly and time-consuming process. In this paper, we introduce Combined student Modeling and prerequisite Discovery (COMMAND), a novel algorithm for jointly inferring a prerequisite graph and a student model from data. Learning a COMMAND model requires student performance data and a mapping of items to skills (Q-matrix). COMMAND learns the skill prerequisite relations as a Bayesian network (an encoding of the probabilistic dependence among the skills) via a two-stage learning process. In the first stage, it uses an algorithm called Structural Expectation Maximization to select a class of equivalent Bayesian networks; in the second stage, it uses curriculum information to select a single Bayesian network. Our experiments on simulations and real student data suggest that COMMAND is better than prior methods in the literature.",
"title": ""
},
{
"docid": "6ad344c7049abad62cd53dacc694c651",
"text": "Primary syphilis with oropharyngeal manifestations should be kept in mind, though. Lips and tongue ulcers are the most frequently reported lesions and tonsillar ulcers are much more rare. We report the case of a 24-year-old woman with a syphilitic ulcer localized in her left tonsil.",
"title": ""
},
{
"docid": "6325188ee21b6baf65dbce6855c19bc2",
"text": "A knowledgeable observer of a game of football (soccer) can make a subjective evaluation of the quality of passes made between players during the game, such as rating them as Good, OK, or Bad. In this article, we consider the problem of producing an automated system to make the same evaluation of passes and present a model to solve this problem.\n Recently, many professional football leagues have installed object tracking systems in their stadiums that generate high-resolution and high-frequency spatiotemporal trajectories of the players and the ball. Beginning with the thesis that much of the information required to make the pass ratings is available in the trajectory signal, we further postulated that using complex data structures derived from computational geometry would enable domain football knowledge to be included in the model by computing metric variables in a principled and efficient manner. We designed a model that computes a vector of predictor variables for each pass made and uses machine learning techniques to determine a classification function that can accurately rate passes based only on the predictor variable vector.\n Experimental results show that the learned classification functions can rate passes with 90.2% accuracy. The agreement between the classifier ratings and the ratings made by a human observer is comparable to the agreement between the ratings made by human observers, and suggests that significantly higher accuracy is unlikely to be achieved. Furthermore, we show that the predictor variables computed using methods from computational geometry are among the most important to the learned classifiers.",
"title": ""
},
{
"docid": "57f5b00d796489b7f5caee701ce3116b",
"text": "SR-IOV capable network devices offer the benefits of direct I/O throughput and reduced CPU utilization while greatly increasing the scalability and sharing capabilities of the device. SR-IOV allows the benefits of the paravirtualized driver’s throughput increase and additional CPU usage reductions in HVMs (Hardware Virtual Machines). SR-IOV uses direct I/O assignment of a network device to multiple VMs, maximizing the potential for using the full bandwidth capabilities of the network device, as well as enabling unmodified guest OS based device drivers which will work for different underlying VMMs. Drawing on our recent experience in developing an SR-IOV capable networking solution for the Xen hypervisor we discuss the system level requirements and techniques for SR-IOV enablement on the platform. We discuss PCI configuration considerations, direct MMIO, interrupt handling and DMA into an HVM using an IOMMU (I/O Memory Management Unit). We then explain the architectural, design and implementation considerations for SR-IOV networking in Xen in which the Physical Function has a driver running in the driver domain that serves as a “master” and each Virtual Function exposed to a guest VM has its own virtual driver.",
"title": ""
},
{
"docid": "ae151d8ed9b8f99cfe22e593f381dd3b",
"text": "A common assumption in studies of interruptions is that one is focused in an activity and then distracted by other stimuli. We take the reverse perspective and examine whether one might first be in an attentional state that makes one susceptible to communications typically associated with distraction. We explore the confluence of multitasking and workplace communications from three temporal perspectives -- prior to an interaction, when tasks and communications are interleaved, and at the end of the day. Using logging techniques and experience sampling, we observed 32 employees in situ for five days. We found that certain attentional states lead people to be more susceptible to particular types of interaction. Rote work is followed by more Facebook or face-to-face interaction. Focused and aroused states are followed by more email. The more time in email and face-fo-face interaction, and the more total screen switches, the less productive people feel at the day's end. We present the notion of emotional homeostasis along with new directions for multitasking research.",
"title": ""
},
{
"docid": "4621856b479672433f9f9dff86d4f4da",
"text": "Reproducibility of computational studies is a hallmark of scientific methodology. It enables researchers to build with confidence on the methods and findings of others, reuse and extend computational pipelines, and thereby drive scientific progress. Since many experimental studies rely on computational analyses, biologists need guidance on how to set up and document reproducible data analyses or simulations. In this paper, we address several questions about reproducibility. For example, what are the technical and non-technical barriers to reproducible computational studies? What opportunities and challenges do computational notebooks offer to overcome some of these barriers? What tools are available and how can they be used effectively? We have developed a set of rules to serve as a guide to scientists with a specific focus on computational notebook systems, such as Jupyter Notebooks, which have become a tool of choice for many applications. Notebooks combine detailed workflows with narrative text and visualization of results. Combined with software repositories and open source licensing, notebooks are powerful tools for transparent, collaborative, reproducible, and reusable data analyses.",
"title": ""
},
{
"docid": "6d262139067d030c3ebb1169e93c6422",
"text": "In this paper, we present a study on learning visual recognition models from large scale noisy web data. We build a new database called WebVision, which contains more than 2.4 million web images crawled from the Internet by using queries generated from the 1, 000 semantic concepts of the ILSVRC 2012 benchmark. Meta information along with those web images (e.g., title, description, tags, etc.) are also crawled. A validation set and test set containing human annotated images are also provided to facilitate algorithmic development. Based on our new database, we obtain a few interesting observations: 1) the noisy web images are sufficient for training a good deep CNN model for visual recognition; 2) the model learnt from our WebVision database exhibits comparable or even better generalization ability than the one trained from the ILSVRC 2012 dataset when being transferred to new datasets and tasks; 3) a domain adaptation issue (a.k.a., dataset bias) is observed, which means the dataset can be used as the largest benchmark dataset for visual domain adaptation. Our new WebVision database and relevant studies in this work would benefit the advance of learning state-of-the-art visual models with minimum supervision based on web data.",
"title": ""
},
{
"docid": "f825dbbc9ff17178a81be71c5b9312ae",
"text": "Skills like computational thinking, problem solving, handling complexity, team-work and project management are essential for future careers and needs to be taught to students at the elementary level itself. Computer programming knowledge and skills, experiencing technology and conducting science and engineering experiments are also important for students at elementary level. However, teaching such skills effectively through active learning can be challenging for educators. In this paper, we present our approach and experiences in teaching such skills to several elementary level children using Lego Mindstorms EV3 robotics education kit. We describe our learning environment consisting of lessons, worksheets, hands-on activities and assessment. We taught students how to design, construct and program robots using components such as motors, sensors, wheels, axles, beams, connectors and gears. Students also gained knowledge on basic programming constructs such as control flow, loops, branches and conditions using a visual programming environment. We carefully observed how students performed various tasks and solved problems. We present experimental results which demonstrates that our teaching methodology consisting of both the course content and pedagogy was effective in imparting the desired skills and knowledge to elementary level children. The students also participated in a competitive World Robot Olympiad India event and qualified during the regional round which is an evidence of the effectiveness of the approach.",
"title": ""
},
{
"docid": "1a834cb0c5d72c6bc58c4898d318cfc2",
"text": "This paper proposes a novel single-stage high-power-factor ac/dc converter with symmetrical topology. The circuit topology is derived from the integration of two buck-boost power-factor-correction (PFC) converters and a full-bridge series resonant dc/dc converter. The switch-utilization factor is improved by using two active switches to serve in the PFC circuits. A high power factor at the input line is assured by operating the buck-boost converters at discontinuous conduction mode. With symmetrical operation and elaborately designed circuit parameters, zero-voltage switching on all the active power switches of the converter can be retained to achieve high circuit efficiency. The operation modes, design equations, and design steps for the circuit parameters are proposed. A prototype circuit designed for a 200-W dc output was built and tested to verify the analytical predictions. Satisfactory performances are obtained from the experimental results.",
"title": ""
},
{
"docid": "9bf99d48bc201147a9a9ad5af547a002",
"text": "Consider a biped evolving in the sagittal plane. The unexpected rotation of the supporting foot can be avoided by controlling the zero moment point (ZMP). The objective of this study is to propose and analyze a control strategy for simultaneously regulating the position of the ZMP and the joints of the robot. If the tracking requirements were posed in the time domain, the problem would be underactuated in the sense that the number of inputs would be less than the number of outputs. To get around this issue, the proposed controller is based on a path-following control strategy, previously developed for dealing with the underactuation present in planar robots without actuated ankles. In particular, the control law is defined in such a way that only the kinematic evolution of the robot's state is regulated, but not its temporal evolution. The asymptotic temporal evolution of the robot is completely defined through a one degree-of-freedom subsystem of the closed-loop model. Since the ZMP is controlled, bipedal walking that includes a prescribed rotation of the foot about the toe can also be considered. Simple analytical conditions are deduced that guarantee the existence of a periodic motion and the convergence toward this motion.",
"title": ""
},
{
"docid": "fdb0c8d2a4c4bbe68b7cffe58adbd074",
"text": "Endowing a chatbot with personality is challenging but significant to deliver more realistic and natural conversations. In this paper, we address the issue of generating responses that are coherent to a pre-specified personality or profile. We present a method that uses generic conversation data from social media (without speaker identities) to generate profile-coherent responses. The central idea is to detect whether a profile should be used when responding to a user post (by a profile detector), and if necessary, select a key-value pair from the profile to generate a response forward and backward (by a bidirectional decoder) so that a personalitycoherent response can be generated. Furthermore, in order to train the bidirectional decoder with generic dialogue data, a position detector is designed to predict a word position from which decoding should start given a profile value. Manual and automatic evaluation shows that our model can deliver more coherent, natural, and diversified responses.",
"title": ""
},
{
"docid": "055c9fad6d2f246fc1b6cbb1bce26a92",
"text": "This work uses deep learning models for daily directional movements prediction of a stock price using financial news titles and technical indicators as input. A comparison is made between two different sets of technical indicators, set 1: Stochastic %K, Stochastic %D, Momentum, Rate of change, William’s %R, Accumulation/Distribution (A/D) oscillator and Disparity 5; set 2: Exponential Moving Average, Moving Average Convergence-Divergence, Relative Strength Index, On Balance Volume and Bollinger Bands. Deep learning methods can detect and analyze complex patterns and interactions in the data allowing a more precise trading process. Experiments has shown that Convolutional Neural Network (CNN) can be better than Recurrent Neural Networks (RNN) on catching semantic from texts and RNN is better on catching the context information and modeling complex temporal characteristics for stock market forecasting. So, there are two models compared in this paper: a hybrid model composed by a CNN for the financial news and a Long Short-Term Memory (LSTM) for technical indicators, named as SI-RCNN; and a LSTM network only for technical indicators, named as I-RNN. The output of each model is used as input for a trading agent that buys stocks on the current day and sells the next day when the model predicts that the price is going up, otherwise the agent sells stocks on the current day and buys the next day. The proposed method shows a major role of financial news in stabilizing the results and almost no improvement when comparing different sets of technical indicators.",
"title": ""
},
{
"docid": "43db7c431cac1afd33f48774ee0dbc61",
"text": "We present a diff algorithm for XML data. This work is motivated by the support for change control in the context of the Xyleme project that is investigating dynamic warehouses capable of storing massive volume of XML data. Because of the context, our algorithm has to be very efficient in terms of speed and memory space even at the cost of some loss of “quality”. Also, it considers, besides insertions, deletions and updates (standard in diffs), a move operation on subtrees that is essential in the context of XML. Intuitively, our diff algorithm uses signatures to match (large) subtrees that were left unchanged between the old and new versions. Such exact matchings are then possibly propagated to ancestors and descendants to obtain more matchings. It also uses XML specific information such as ID attributes. We provide a performance analysis of the algorithm. We show that it runs in average in linear time vs. quadratic time for previous algorithms. We present experiments on synthetic data that confirm the analysis. Since this problem is NPhard, the linear time is obtained by trading some quality. We present experiments (again on synthetic data) that show that the output of our algorithm is reasonably close to the “optimal” in terms of quality. Finally we present experiments on a small sample of XML pages found on the Web.",
"title": ""
},
{
"docid": "04ed876237214c1366f966b80ebb7fd4",
"text": "Load Balancing is essential for efficient operations indistributed environments. As Cloud Computing is growingrapidly and clients are demanding more services and betterresults, load balancing for the Cloud has become a veryinteresting and important research area. Many algorithms weresuggested to provide efficient mechanisms and algorithms forassigning the client's requests to available Cloud nodes. Theseapproaches aim to enhance the overall performance of the Cloudand provide the user more satisfying and efficient services. Inthis paper, we investigate the different algorithms proposed toresolve the issue of load balancing and task scheduling in CloudComputing. We discuss and compare these algorithms to providean overview of the latest approaches in the field.",
"title": ""
},
{
"docid": "96e9c66453ba91d1bc44bb0242f038ce",
"text": "Body temperature is one of the key parameters for health monitoring of premature infants at the neonatal intensive care unit (NICU). In this paper, we propose and demonstrate a design of non-invasive neonatal temperature monitoring with wearable sensors. A negative temperature coefficient (NTC) resistor is applied as the temperature sensor due to its accuracy and small size. Conductive textile wires are used to make the sensor integration compatible for a wearable non-invasive monitoring platform, such as a neonatal smart jacket. Location of the sensor, materials and appearance are designed to optimize the functionality, patient comfort and the possibilities for aesthetic features. A prototype belt is built of soft bamboo fabrics with NTC sensor integrated to demonstrate the temperature monitoring. Experimental results from the testing on neonates at NICU of Máxima Medical Center (MMC), Veldhoven, the Netherlands, show the accurate temperature monitoring by the prototype belt comparing with the standard patient monitor.",
"title": ""
},
{
"docid": "2eebc7477084b471f9e9872ba8751359",
"text": "Despite significant progress in the development of human action detection datasets and algorithms, no current dataset is representative of real-world aerial view scenarios. We present Okutama-Action, a new video dataset for aerial view concurrent human action detection. It consists of 43 minute-long fully-annotated sequences with 12 action classes. Okutama-Action features many challenges missing in current datasets, including dynamic transition of actions, significant changes in scale and aspect ratio, abrupt camera movement, as well as multi-labeled actors. As a result, our dataset is more challenging than existing ones, and will help push the field forward to enable real-world applications.",
"title": ""
}
] |
scidocsrr
|
8e6e49e6cb0f4d85f4018da85bfadc80
|
Bagging, Boosting and the Random Subspace Method for Linear Classifiers
|
[
{
"docid": "00ea9078f610b14ed0ed00ed6d0455a7",
"text": "Boosting is a general method for improving the performance of learning algorithms. A recently proposed boosting algorithm, Ada Boost, has been applied with great success to several benchmark machine learning problems using mainly decision trees as base classifiers. In this article we investigate whether Ada Boost also works as well with neural networks, and we discuss the advantages and drawbacks of different versions of the Ada Boost algorithm. In particular, we compare training methods based on sampling the training set and weighting the cost function. The results suggest that random resampling of the training data is not the main explanation of the success of the improvements brought by Ada Boost. This is in contrast to bagging, which directly aims at reducing variance and for which random resampling is essential to obtain the reduction in generalization error. Our system achieves about 1.4 error on a data set of on-line handwritten digits from more than 200 writers. A boosted multilayer network achieved 1.5 error on the UCI letters and 8.1 error on the UCI satellite data set, which is significantly better than boosted decision trees.",
"title": ""
}
] |
[
{
"docid": "3606b1c9bc5003c6119a5cc675ad63f4",
"text": "Hypothyroidism is a clinical disorder commonly encountered by the primary care physician. Untreated hypothyroidism can contribute to hypertension, dyslipidemia, infertility, cognitive impairment, and neuromuscular dysfunction. Data derived from the National Health and Nutrition Examination Survey suggest that about one in 300 persons in the United States has hypothyroidism. The prevalence increases with age, and is higher in females than in males. Hypothyroidism may occur as a result of primary gland failure or insufficient thyroid gland stimulation by the hypothalamus or pituitary gland. Autoimmune thyroid disease is the most common etiology of hypothyroidism in the United States. Clinical symptoms of hypothyroidism are nonspecific and may be subtle, especially in older persons. The best laboratory assessment of thyroid function is a serum thyroid-stimulating hormone test. There is no evidence that screening asymptomatic adults improves outcomes. In the majority of patients, alleviation of symptoms can be accomplished through oral administration of synthetic levothyroxine, and most patients will require lifelong therapy. Combination triiodothyronine/thyroxine therapy has no advantages over thyroxine monotherapy and is not recommended. Among patients with subclinical hypothyroidism, those at greater risk of progressing to clinical disease, and who may be considered for therapy, include patients with thyroid-stimulating hormone levels greater than 10 mIU per L and those who have elevated thyroid peroxidase antibody titers.",
"title": ""
},
{
"docid": "6c175d7a90ed74ab3b115977c82b0ffa",
"text": "We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of connections follow power laws that indicate a scale-free pattern of connectivity, with most nodes having relatively few connections joined together through a small number of hubs with many connections. These regularities have also been found in certain other complex natural networks, such as the World Wide Web, but they are not consistent with many conventional models of semantic organization, based on inheritance hierarchies, arbitrarily structured networks, or high-dimensional vector spaces. We propose that these structures reflect the mechanisms by which semantic networks grow. We describe a simple model for semantic growth, in which each new word or concept is connected to an existing network by differentiating the connectivity pattern of an existing node. This model generates appropriate small-world statistics and power-law connectivity distributions, and it also suggests one possible mechanistic basis for the effects of learning history variables (age of acquisition, usage frequency) on behavioral performance in semantic processing tasks.",
"title": ""
},
{
"docid": "8933d92ec139e80ffb8f0ebaa909d76c",
"text": "Reading an article and answering questions about its content is a fundamental task for natural language understanding. While most successful neural approaches to this problem rely on recurrent neural networks (RNNs), training RNNs over long documents can be prohibitively slow. We present a novel framework for question answering that can efficiently scale to longer documents while maintaining or even improving performance. Our approach combines a coarse, inexpensive model for selecting one or more relevant sentences and a more expensive RNN that produces the answer from those sentences. A central challenge is the lack of intermediate supervision for the coarse model, which we address using reinforcement learning. Experiments demonstrate state-of-the-art performance on a challenging subset of the WIKIREADING dataset (Hewlett et al., 2016) and on a newly-gathered dataset, while reducing the number of sequential RNN steps by 88% against a standard sequence to sequence model.",
"title": ""
},
{
"docid": "86826e10d531b8d487fada7a5c151a41",
"text": "Feature selection is an important preprocessing step in data mining. Mutual information-based feature selection is a kind of popular and effective approaches. In general, most existing mutual information-based techniques are greedy methods, which are proven to be efficient but suboptimal. In this paper, mutual information-based feature selection is transformed into a global optimization problem, which provides a new idea for solving feature selection problems. Firstly, a single-objective feature selection algorithm combining relevance and redundancy is presented, which has well global searching ability and high computational efficiency. Furthermore, to improve the performance of feature selection, we propose a multi-objective feature selection algorithm. The method can meet different requirements and achieve a tradeoff among multiple conflicting objectives. On this basis, a hybrid feature selection framework is adopted for obtaining a final solution. We compare the performance of our algorithm with related methods on both synthetic and real datasets. Simulation results show the effectiveness and practicality of the proposed method.",
"title": ""
},
{
"docid": "2582b0fffad677d3f0ecf11b92d9702d",
"text": "This study explores teenage girls' narrations of the relationship between self-presentation and peer comparison on social media in the context of beauty. Social media provide new platforms that manifest media and peer influences on teenage girls' understanding of beauty towards an idealized notion. Through 24 in-depth interviews, this study examines secondary school girls' self-presentation and peer comparison behaviors on social network sites where the girls posted self-portrait photographs or “selfies” and collected peer feedback in the forms of “likes,” “followers,” and comments. Results of thematic analysis reveal a gap between teenage girls' self-beliefs and perceived peer standards of beauty. Feelings of low self-esteem and insecurity underpinned their efforts in edited self-presentation and quest for peer recognition. Peers played multiple roles that included imaginary audiences, judges, vicarious learning sources, and comparison targets in shaping teenage girls' perceptions and presentation of beauty. Findings from this study reveal the struggles that teenage girls face today and provide insights for future investigations and interventions pertinent to teenage girls’ presentation and evaluation of self on",
"title": ""
},
{
"docid": "13afc7b4786ee13c6b0bfb1292f50153",
"text": "Heavy metals are discharged into water from various industries. They can be toxic or carcinogenic in nature and can cause severe problems for humans and aquatic ecosystems. Thus, the removal of heavy metals fromwastewater is a serious problem. The adsorption process is widely used for the removal of heavy metals from wastewater because of its low cost, availability and eco-friendly nature. Both commercial adsorbents and bioadsorbents are used for the removal of heavy metals fromwastewater, with high removal capacity. This review article aims to compile scattered information on the different adsorbents that are used for heavy metal removal and to provide information on the commercially available and natural bioadsorbents used for removal of chromium, cadmium and copper, in particular. This is an Open Access article distributed under the terms of the Creative Commons Attribution Licence (CC BY-NC-ND 4.0), which permits copying and redistribution for non-commercial purposes with no derivatives, provided the original work is properly cited (http://creativecommons.org/ licenses/by-nc-nd/4.0/). doi: 10.2166/wrd.2016.104 Renu Madhu Agarwal (corresponding author) K. Singh Department of Chemical Engineering, Malaviya National Institute of Technology, JLN Marg, Jaipur 302017, India E-mail: madhunaresh@gmail.com",
"title": ""
},
{
"docid": "b86fed0ebcf017adedbe9f3d14d6903d",
"text": "The general employee scheduling problem extends the standard shift scheduling problem by discarding key limitations such as employee homogeneity and the absence of connections across time period blocks. The resulting increased generality yields a scheduling model that applies to real world problems confronted in a wide variety of areas. The price of the increased generality is a marked increase in size and complexity over related models reported in the literature. The integer programming formulation for the general employee scheduling problem, arising in typical real world settings, contains from one million to over four million zero~ne variables. By contrast, studies of special cases reported over the past decade have focused on problems involving between 100 and 500 variables. We characterize the relationship between the general employee scheduling problem and related problems, reporting computational results for a procedure that solves these more complex problems within 98-99 % optimality and runs on a microcomputer. We view our approach as an integration of management science and artificial intelligence techniques. The benefits of such an integration are suggested by the fact that other zero~ne scheduling implementations reported in the literature, including the one awarded the Lancaster Prize in 1984, have obtained comparable approximations of optimality only for problems from two to three orders of magnitude smaller, and then only by the use of large mainframe computers.",
"title": ""
},
{
"docid": "df0e13e1322a95046a91fb7c867d968a",
"text": "Taking into consideration both external (i.e. technology acceptance factors, website service quality) as well as internal factors (i.e. specific holdup cost) , this research explores how the customers’ satisfaction and loyalty, when shopping and purchasing on the internet , can be associated with each other and how they are affected by the above dynamics. This research adopts the Structural Equation Model (SEM) as the main analytical tool. It investigates those who used to have shopping experiences in major shopping websites of Taiwan. The research results point out the following: First, customer satisfaction will positively influence customer loyalty directly; second, technology acceptance factors will positively influence customer satisfaction and loyalty directly; third, website service quality can positively influence customer satisfaction and loyalty directly; and fourth, specific holdup cost can positively influence customer loyalty directly, but cannot positively influence customer satisfaction directly. This paper draws on the research results for implications of managerial practice, and then suggests some empirical tactics in order to help enhancing management performance for the website shopping industry.",
"title": ""
},
{
"docid": "fb836666c993b27b99f6c789dd0aae05",
"text": "Software transactions have received significant attention as a way to simplify shared-memory concurrent programming, but insufficient focus has been given to the precise meaning of software transactions or their interaction with other language features. This work begins to rectify that situation by presenting a family of formal languages that model a wide variety of behaviors for software transactions. These languages abstract away implementation details of transactional memory, providing high-level definitions suitable for programming languages. We use small-step semantics in order to represent explicitly the interleaved execution of threads that is necessary to investigate pertinent issues.\n We demonstrate the value of our core approach to modeling transactions by investigating two issues in depth. First, we consider parallel nesting, in which parallelism and transactions can nest arbitrarily. Second, we present multiple models for weak isolation, in which nontransactional code can violate the isolation of a transaction. For both, type-and-effect systems let us soundly and statically restrict what computation can occur inside or outside a transaction. We prove some key language-equivalence theorems to confirm that under sufficient static restrictions, in particular that each mutable memory location is used outside transactions or inside transactions (but not both), no program can determine whether the language implementation uses weak isolation or strong isolation.",
"title": ""
},
{
"docid": "e5f6d7ed8d2dbf0bc2cde28e9c9e129b",
"text": "Change detection is the process of finding out difference between two images taken at two different times. With the help of remote sensing the . Here we will try to find out the difference of the same image taken at different times. here we use mean ratio and log ratio to find out the difference in the images. Log is use to find background image and fore ground detected by mean ratio. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates the information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and of reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and gains a better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibited lower error than its preexistences.",
"title": ""
},
{
"docid": "92699fa23a516812c7fcb74ba38f42c6",
"text": "Deep reinforcement learning (DRL) is poised to revolutionize the field of artificial intelligence (AI) and represents a step toward building autonomous systems with a higherlevel understanding of the visual world. Currently, deep learning is enabling reinforcement learning (RL) to scale to problems that were previously intractable, such as learning to play video games directly from pixels. DRL algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of RL, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep RL, including the deep Q-network (DQN), trust region policy optimization (TRPO), and asynchronous advantage actor critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via RL. To conclude, we describe several current areas of research within the field.",
"title": ""
},
{
"docid": "a94278bafc093c37bcba719a4b6a03fa",
"text": "Community detection and analysis is an important methodology for understanding the organization of various real-world networks and has applications in problems as diverse as consensus formation in social communities or the identification of functional modules in biochemical networks. Currently used algorithms that identify the community structures in large-scale real-world networks require a priori information such as the number and sizes of communities or are computationally expensive. In this paper we investigate a simple label propagation algorithm that uses the network structure alone as its guide and requires neither optimization of a predefined objective function nor prior information about the communities. In our algorithm every node is initialized with a unique label and at every step each node adopts the label that most of its neighbors currently have. In this iterative process densely connected groups of nodes form a consensus on a unique label to form communities. We validate the algorithm by applying it to networks whose community structures are known. We also demonstrate that the algorithm takes an almost linear time and hence it is computationally less expensive than what was possible so far.",
"title": ""
},
{
"docid": "d469d31d26d8bc07b9d8dfa8ce277e47",
"text": "BACKGROUND/PURPOSE\nMorbidity in children treated with appendicitis results either from late diagnosis or negative appendectomy. A Prospective analysis of efficacy of Pediatric Appendicitis Score for early diagnosis of appendicitis in children was conducted.\n\n\nMETHODS\nIn the last 5 years, 1,170 children aged 4 to 15 years with abdominal pain suggestive of acute appendicitis were evaluated prospectively. Group 1 (734) were patients with appendicitis and group 2 (436) nonappendicitis. Multiple linear logistic regression analysis of all clinical and investigative parameters was performed for a model comprising 8 variables to form a diagnostic score.\n\n\nRESULTS\nLogistic regression analysis yielded a model comprising 8 variables, all statistically significant, P <.001. These variables in order of their diagnostic index were (1) cough/percussion/hopping tenderness in the right lower quadrant of the abdomen (0.96), (2) anorexia (0.88), (3) pyrexia (0.87), (4) nausea/emesis (0.86), (5) tenderness over the right iliac fossa (0.84), (6) leukocytosis (0.81), (7) polymorphonuclear neutrophilia (0.80) and (8) migration of pain (0.80). Each of these variables was assigned a score of 1, except for physical signs (1 and 5), which were scored 2 to obtain a total of 10. The Pediatric Appendicitis Score had a sensitivity of 1, specificity of 0.92, positive predictive value of 0.96, and negative predictive value of 0.99.\n\n\nCONCLUSION\nPediatric appendicitis score is a simple, relatively accurate diagnostic tool for accessing an acute abdomen and diagnosing appendicitis in children.",
"title": ""
},
{
"docid": "e1adb8ebfd548c2aca5110e2a9e8d667",
"text": "This paper introduces an active object detection and localization framework that combines a robust untextured object detection and 3D pose estimation algorithm with a novel next-best-view selection strategy. We address the detection and localization problems by proposing an edge-based registration algorithm that refines the object position by minimizing a cost directly extracted from a 3D image tensor that encodes the minimum distance to an edge point in a joint direction/location space. We face the next-best-view problem by exploiting a sequential decision process that, for each step, selects the next camera position which maximizes the mutual information between the state and the next observations. We solve the intrinsic intractability of this solution by generating observations that represent scene realizations, i.e. combination samples of object hypothesis provided by the object detector, while modeling the state by means of a set of constantly resampled particles. Experiments performed on different real world, challenging datasets confirm the effectiveness of the proposed methods.",
"title": ""
},
{
"docid": "2038dbe6e16892c8d37a4dac47d4f681",
"text": "Sentences with different structures may convey the same meaning. Identification of sentences with paraphrases plays an important role in text related research and applications. This work focus on the statistical measures and semantic analysis of Malayalam sentences to detect the paraphrases. The statistical similarity measures between sentences, based on symbolic characteristics and structural information, could measure the similarity between sentences without any prior knowledge but only on the statistical information of sentences. The semantic representation of Universal Networking Language(UNL), represents only the inherent meaning in a sentence without any syntactic details. Thus, comparing the UNL graphs of two sentences can give an insight into how semantically similar the two sentences are. Combination of statistical similarity and semantic similarity score results the overall similarity score. This is the first attempt towards paraphrases of malayalam sentences.",
"title": ""
},
{
"docid": "259e95c8d756f31408d30bbd7660eea3",
"text": "The capacity to identify cheaters is essential for maintaining balanced social relationships, yet humans have been shown to be generally poor deception detectors. In fact, a plethora of empirical findings holds that individuals are only slightly better than chance when discerning lies from truths. Here, we report 5 experiments showing that judges' ability to detect deception greatly increases after periods of unconscious processing. Specifically, judges who were kept from consciously deliberating outperformed judges who were encouraged to do so or who made a decision immediately; moreover, unconscious thinkers' detection accuracy was significantly above chance level. The reported experiments further show that this improvement comes about because unconscious thinking processes allow for integrating the particularly rich information basis necessary for accurate lie detection. These findings suggest that the human mind is not unfit to distinguish between truth and deception but that this ability resides in previously overlooked processes.",
"title": ""
},
{
"docid": "49a87829a12168de2be2ee32a23ddeb7",
"text": "Crowdsourcing emerged with the development of Web 2.0 technologies as a distributed online practice that harnesses the collective aptitudes and skills of the crowd in order to reach specific goals. The success of crowdsourcing systems is influenced by the users’ levels of participation and interactions on the platform. Therefore, there is a need for the incorporation of appropriate incentive mechanisms that would lead to sustained user engagement and quality contributions. Accordingly, the aim of the particular paper is threefold: first, to provide an overview of user motives and incentives, second, to present the corresponding incentive mechanisms used to trigger these motives, alongside with some indicative examples of successful crowdsourcing platforms that incorporate these incentive mechanisms, and third, to provide recommendations on their careful design in order to cater to the context and goal of the platform.",
"title": ""
},
{
"docid": "0b3555b8c1932a2364a7264cbf2f7c25",
"text": "This paper introduces a novel weighted unsupervised learning for object detection using an RGB-D camera. This technique is feasible for detecting the moving objects in the noisy environments that are captured by an RGB-D camera. The main contribution of this paper is a real-time algorithm for detecting each object using weighted clustering as a separate cluster. In a preprocessing step, the algorithm calculates the pose 3D position X, Y, Z and RGB color of each data point and then it calculates each data point’s normal vector using the point’s neighbor. After preprocessing, our algorithm calculates k-weights for each data point; each weight indicates membership. Resulting in clustered objects of the scene. Keywords—Weighted Unsupervised Learning, Object Detection, RGB-D camera, Kinect",
"title": ""
},
{
"docid": "abda48a065aecbe34f86ce3490520402",
"text": "Wireless Sensor Network (WSN) consists of small low-cost, low-power multifunctional nodes interconnected to efficiently aggregate and transmit data to sink. Cluster-based approaches use some nodes as Cluster Heads (CHs) and organize WSNs efficiently for aggregation of data and energy saving. A CH conveys information gathered by cluster nodes and aggregates/compresses data before transmitting it to a sink. However, this additional responsibility of the node results in a higher energy drain leading to uneven network degradation. Low Energy Adaptive Clustering Hierarchy (LEACH) offsets this by probabilistically rotating cluster heads role among nodes with energy above a set threshold. CH selection in WSN is NP-Hard as optimal data aggregation with efficient energy savings cannot be solved in polynomial time. In this work, a modified firefly heuristic, synchronous firefly algorithm, is proposed to improve the network performance. Extensive simulation shows the proposed technique to perform well compared to LEACH and energy-efficient hierarchical clustering. Simulations show the effectiveness of the proposed method in decreasing the packet loss ratio by an average of 9.63% and improving the energy efficiency of the network when compared to LEACH and EEHC.",
"title": ""
},
{
"docid": "09168164e47fd781e4abeca45fb76c35",
"text": "AUTOSAR is a standard for the development of software for embedded devices, primarily created for the automotive domain. It specifies a software architecture with more than 80 software modules that provide services to one or more software components. With the trend towards integrating safety-relevant systems into embedded devices, conformance with standards such as ISO 26262 [ISO11] or ISO/IEC 61508 [IEC10] becomes increasingly important. This article presents an approach to providing freedom from interference between software components by using the MPU available on many modern microcontrollers. Each software component gets its own dedicated memory area, a so-called memory partition. This concept is well known in other industries like the aerospace industry, where the IMA architecture is now well established. The memory partitioning mechanism is implemented by a microkernel, which integrates seamlessly into the architecture specified by AUTOSAR. The development has been performed as SEooC as described in ISO 26262, which is a new development approach. We describe the procedure for developing an SEooC. AUTOSAR: AUTomotive Open System ARchitecture, see [ASR12]. MPU: Memory Protection Unit. 3 IMA: Integrated Modular Avionics, see [RTCA11]. 4 SEooC: Safety Element out of Context, see [ISO11].",
"title": ""
}
] |
scidocsrr
|
cbe333e5804af8a9778780bff57dc255
|
Health Media: From Multimedia Signals to Personal Health Insights
|
[
{
"docid": "e95253b765129a0940e4af899d9e5d72",
"text": "Smart health devices monitor certain health parameters, are connected to an Internet service, and target primarily a lay consumer seeking a healthy lifestyle rather than the medical expert or the chronically ill person. These devices offer tremendous opportunities for wellbeing and self-management of health. This department reviews smart health devices from a pervasive computing perspective, discussing various devices and their functionality, limitations, and potential.",
"title": ""
}
] |
[
{
"docid": "b2058a09b3e83bb864cb238e066c8afb",
"text": "The ability to reason with natural language is a fundamental prerequisite for many NLP tasks such as information extraction, machine translation and question answering. To quantify this ability, systems are commonly tested whether they can recognize textual entailment, i.e., whether one sentence can be inferred from another one. However, in most NLP applications only single source sentences instead of sentence pairs are available. Hence, we propose a new task that measures how well a model can generate an entailed sentence from a source sentence. We take entailment-pairs of the Stanford Natural Language Inference corpus and train an LSTM with attention. On a manually annotated test set we found that 82% of generated sentences are correct, an improvement of 10.3% over an LSTM baseline. A qualitative analysis shows that this model is not only capable of shortening input sentences, but also inferring new statements via paraphrasing and phrase entailment. We then apply this model recursively to input-output pairs, thereby generating natural language inference chains that can be used to automatically construct an entailment graph from source sentences. Finally, by swapping source and target sentences we can also train a model that given an input sentence invents additional information to generate a new sentence.",
"title": ""
},
{
"docid": "3c4f19544e9cc51d307c6cc9aea63597",
"text": "Math anxiety is a negative affective reaction to situations involving math. Previous work demonstrates that math anxiety can negatively impact math problem solving by creating performance-related worries that disrupt the working memory needed for the task at hand. By leveraging knowledge about the mechanism underlying the math anxiety-performance relationship, we tested the effectiveness of a short expressive writing intervention that has been shown to reduce intrusive thoughts and improve working memory availability. Students (N = 80) varying in math anxiety were asked to sit quietly (control group) prior to completing difficulty-matched math and word problems or to write about their thoughts and feelings regarding the exam they were about to take (expressive writing group). For the control group, high math-anxious individuals (HMAs) performed significantly worse on the math problems than low math-anxious students (LMAs). In the expressive writing group, however, this difference in math performance across HMAs and LMAs was significantly reduced. Among HMAs, the use of words related to anxiety, cause, and insight in their writing was positively related to math performance. Expressive writing boosts the performance of anxious students in math-testing situations.",
"title": ""
},
{
"docid": "36342d65aaa9dff0339f8c1c8cb23f30",
"text": "Recent approaches to Reinforcement Learning (RL) with function approximation include Neural Fitted Q Iteration and the use of Gaussian Processes. They belong to the class of fitted value iteration algorithms, which use a set of support points to fit the value-function in a batch iterative process. These techniques make efficient use of a reduced number of samples by reusing them as needed, and are appropriate for applications where the cost of experiencing a new sample is higher than storing and reusing it, but this is at the expense of increasing the computational effort, since these algorithms are not incremental. On the other hand, non-parametric models for function approximation, like Gaussian Processes, are preferred against parametric ones, due to their greater flexibility. A further advantage of using Gaussian Processes for function approximation is that they allow to quantify the uncertainty of the estimation at each point. In this paper, we propose a new approach for RL in continuous domains based on Probability Density Estimations. Our method combines the best features of the previous methods: it is non-parametric and provides an estimation of the variance of the approximated function at any point of the domain. In addition, our method is simple, incremental, and computationally efficient. All these features make this approach more appealing than Gaussian Processes and fitted value iteration algorithms in general.",
"title": ""
},
{
"docid": "29e500aa57f82d63596ae13639d46cbf",
"text": "In this paper we present a intrusion detection module capable of detecting malicious network traffic in a SCADA (Supervisory Control and Data Acquisition) system. Malicious data in a SCADA system disrupt its correct functioning and tamper with its normal operation. OCSVM (One-Class Support Vector Machine) is an intrusion detection mechanism that does not need any labeled data for training or any information about the kind of anomaly is expecting for the detection process. This feature makes it ideal for processing SCADA environment data and automate SCADA performance monitoring. The OCSVM module developed is trained by network traces off line and detect anomalies in the system real time. The module is part of an IDS (Intrusion Detection System) system developed under CockpitCI project and communicates with the other parts of the system by the exchange of IDMEF (Intrusion Detection Message Exchange Format) messages that carry information about the source of the incident, the time and a classification of the alarm.",
"title": ""
},
{
"docid": "cbdfd886416664809046ff2e674f4ae1",
"text": "Domain adaptation addresses the problem where data instances of a source domain have different distributions from that of a target domain, which occurs frequently in many real life scenarios. This work focuses on unsupervised domain adaptation, where labeled data are only available in the source domain. We propose to interpolate subspaces through dictionary learning to link the source and target domains. These subspaces are able to capture the intrinsic domain shift and form a shared feature representation for cross domain recognition. Further, we introduce a quantitative measure to characterize the shift between two domains, which enables us to select the optimal domain to adapt to the given multiple source domains. We present experiments on face recognition across pose, illumination and blur variations, cross dataset object recognition, and report improved performance over the state of the art.",
"title": ""
},
{
"docid": "cee3c61474bf14158d4abf0c794a9c2a",
"text": "This course will focus on describing techniques for handling datasets larger than main memory in scientific visualization and computer graphics. Recently, several external memory techniques have been developed for a wide variety of graphics and visualization problems, including surface simplification, volume rendering, isosurface generation, ray tracing, surface reconstruction, and so on. This work has had significant impact given that in recent years there has been a rapid increase in the raw size of datasets. Several technological trends are contributing to this, such as the development of high-resolution 3D scanners, and the need to visualize ASCI-size (Accelerated Strategic Computing Initiative) datasets. Another important push for this kind of technology is the growing speed gap between main memory and caches, such a gap penalizes algorithms which do not optimize for coherence of access. Because of these reasons, much research in computer graphics focuses on developing out-of-core (and often cache-friendly) techniques. This course reviews fundamental issues, current problems, and unresolved solutions, and presents an in-depth study of external memory algorithms developed in recent years. Its goal is to provide students and graphics researchers and professionals with an effective knowledge of current techniques, as well as the foundation to develop novel techniques on their own. Schedule (tentative) 5 min Introduction to the course Silva 45 min Overview of external memory algorithms Chiang 40 min Out-of-core scientific visualization Silva",
"title": ""
},
{
"docid": "947d4c60427377bcb466fe1393c5474c",
"text": "This paper presents a single BCD technology platform with high performance power devices at a wide range of operating voltages. The platform offers 6 V to 70 V LDMOS devices. All devices offer best-in-class specific on-resistance of 20 to 40 % lower than that of the state-of-the-art IC-based LDMOS devices and robustness better than the square SOA (safe-operating-area). Fully isolated LDMOS devices, in which independent bias is capable for circuit flexibility, demonstrate superior specific on-resistance (e.g. 11.9 mΩ-mm2 for breakdown voltage of 39 V). Moreover, the unusual sudden current enhancement appeared in the ID-VD saturation region of most of the high voltage LDMOS devices is significantly suppressed.",
"title": ""
},
{
"docid": "413df06d6ba695aa5baa13ea0913c6e6",
"text": "Time stamping is a technique used to prove the existence of certain digital data prior to a specific point in time. With the recent development of electronic commerce, time stamping is now widely recognized as an important technique used to ensure the integrity of digital data for a long time period. Various time stamping schemes and services have been proposed. When one uses a certain time stamping service, he should confirm in advance that its security level sufficiently meets his security requirements. However, time stamping schemes are generally so complicated that it is not easy to evaluate their security levels accurately. It is important for users to have a good grasp of current studies of time stamping schemes and to make use of such studies to select an appropriate time stamping service. Une and Matsumoto [2000], [2001a], [2001b] and [2002] have proposed a method of classifying time stamping schemes and evaluating their security systematically. Their papers have clarified the objectives, functions and entities involved in time stamping schemes and have discussed the conditions sufficient to detect the alteration of a time stamp in each scheme. This paper explains existing problems regarding the security evaluation of time stamping schemes and the results of Une and Matsumoto [2000], [2001a], [2001b] and [2002]. It also applies their results to some existing time stamping schemes and indicates possible directions of further research into time stamping schemes.",
"title": ""
},
{
"docid": "269cff08201fd7815e3ea2c9a786d38b",
"text": "In this paper, we are interested in developing compositional models to explicit representing pose, parts and attributes and tackling the tasks of attribute recognition, pose estimation and part localization jointly. This is different from the recent trend of using CNN-based approaches for training and testing on these tasks separately with a large amount of data. Conventional attribute models typically use a large number of region-based attribute classifiers on parts of pre-trained pose estimator without explicitly detecting the object or its parts, or considering the correlations between attributes. In contrast, our approach jointly represents both the object parts and their semantic attributes within a unified compositional hierarchy. We apply our attributed grammar model to the task of human parsing by simultaneously performing part localization and attribute recognition. We show our modeling helps performance improvements on pose-estimation task and also outperforms on other existing methods on attribute prediction task.",
"title": ""
},
{
"docid": "0ff8c4799b62c70ef6b7d70640f1a931",
"text": "Using on-chip interconnection networks in place of ad-hoc glo-bal wiring structures the top level wires on a chip and facilitates modular design. With this approach, system modules (processors, memories, peripherals, etc...) communicate by sending packets to one another over the network. The structured network wiring gives well-controlled electrical parameters that eliminate timing iterations and enable the use of high-performance circuits to reduce latency and increase bandwidth. The area overhead required to implement an on-chip network is modest, we estimate 6.6%. This paper introduces the concept of on-chip networks, sketches a simple network, and discusses some challenges in the architecture and design of these networks.",
"title": ""
},
{
"docid": "cbfdea54abb1e4c1234ca44ca6913220",
"text": "Seeds of chickpea (Cicer arietinum L.) were exposed in batches to static magnetic fields of strength from 0 to 250 mT in steps of 50 mT for 1-4 h in steps of 1 h for all fields. Results showed that magnetic field application enhanced seed performance in terms of laboratory germination, speed of germination, seedling length and seedling dry weight significantly compared to unexposed control. However, the response varied with field strength and duration of exposure without any particular trend. Among the various combinations of field strength and duration, 50 mT for 2 h, 100 mT for 1 h and 150 mT for 2 h exposures gave best results. Exposure of seeds to these three magnetic fields improved seed coat membrane integrity as it reduced the electrical conductivity of seed leachate. In soil, seeds exposed to these three treatments produced significantly increased seedling dry weights of 1-month-old plants. The root characteristics of the plants showed dramatic increase in root length, root surface area and root volume. The improved functional root parameters suggest that magnetically treated chickpea seeds may perform better under rainfed (un-irrigated) conditions where there is a restrictive soil moisture regime.",
"title": ""
},
{
"docid": "8fca64bb24d9adc445fec504ee8efa5a",
"text": "In this paper, the permeation properties of three types of liquids into HTV silicone rubber with different Alumina Tri-hydrate (ATH) contents had been investigated by weight gain experiments. The influence of differing exposure conditions on the diffusion into silicone rubber, in particular the effect of solution type, solution concentration, and test temperature were explored. Experimental results indicated that the liquids permeation into silicone rubber obeyed anomalous diffusion ways instead of the Fick diffusion model. Moreover, higher temperature would accelerate the permeation process, and silicone rubber with higher ATH content absorbed more liquids than that with lower ATH content. Furthermore, the material properties of silicone rubber before and after liquid permeation were examined using Fourier infrared spectroscopy (FTIR), thermal gravimetric analysis (TGA) and scanning electron microscopy (SEM), respectively. The permeation mechanisms and process were discussed in depth by combining the weight gain experiment results and the material properties analyses.",
"title": ""
},
{
"docid": "2e510f3f8055b4936aadf502766e3e0d",
"text": "Process mining techniques have proven to be a valuable tool for analyzing the execution of business processes. They rely on logs that identify events at an activity level, i.e., most process mining techniques assume that the information system explicitly supports the notion of activities/tasks. This is often not the case and only low-level events are being supported and logged. For example, users may provide different pieces of data which together constitute a single activity. The technique introduced in this paper uses clustering algorithms to derive activity logs from lower-level data modification logs, as produced by virtually every information system. This approach was implemented in the context of the ProM framework and its goal is to widen the scope of processes that can be analyzed using existing process mining techniques.",
"title": ""
},
{
"docid": "ac2009434ea592577cdcdbfb51e3213c",
"text": "Pair-wise ranking methods have been widely used in recommender systems to deal with implicit feedback. They attempt to discriminate between a handful of observed items and the large set of unobserved items. In these approaches, however, user preferences and item characteristics cannot be estimated reliably due to overfitting given highly sparse data. To alleviate this problem, in this paper, we propose a novel hierarchical Bayesian framework which incorporates “bag-ofwords” type meta-data on items into pair-wise ranking models for one-class collaborative filtering. The main idea of our method lies in extending the pair-wise ranking with a probabilistic topic modeling. Instead of regularizing item factors through a zero-mean Gaussian prior, our method introduces item-specific topic proportions as priors for item factors. As a by-product, interpretable latent factors for users and items may help explain recommendations in some applications. We conduct an experimental study on a real and publicly available dataset, and the results show that our algorithm is effective in providing accurate recommendation and interpreting user factors and item factors.",
"title": ""
},
{
"docid": "edb7adc3e665aa2126be1849431c9d7f",
"text": "This study evaluated the exploitation of unprocessed agricultural discards in the form of fresh vegetable leaves as a diet for the sea urchin Paracentrotus lividus through the assessment of their effects on gonad yield and quality. A stock of wild-caught P. lividus was fed on discarded leaves from three different species (Beta vulgaris, Brassica oleracea, and Lactuca sativa) and the macroalga Ulva lactuca for 3 months under controlled conditions. At the beginning and end of the experiment, total and gonad weight were measured, while gonad and diet total carbon (C%), nitrogen (N%), δ13C, δ15N, carbohydrates, lipids, and proteins were analyzed. The results showed that agricultural discards provided for the maintenance of gonad index and nutritional value (carbohydrate, lipid, and protein content) of initial specimens. L. sativa also improved gonadic color. The results of this study suggest that fresh vegetable discards may be successfully used in the preparation of more balanced diets for sea urchin aquaculture. The use of agricultural discards in prepared diets offers a number of advantages, including an abundant resource, the recycling of discards into new organic matter, and reduced pressure on marine organisms (i.e., macroalgae) in the production of food for cultured organisms.",
"title": ""
},
{
"docid": "03fa5f5f6b6f307fc968a2b543e331a1",
"text": "In recent years, several noteworthy large, cross-domain, and openly available knowledge graphs (KGs) have been created. These include DBpedia, Freebase, OpenCyc, Wikidata, and YAGO. Although extensively in use, these KGs have not been subject to an in-depth comparison so far. In this survey, we provide data quality criteria according to which KGs can be analyzed and analyze and compare the above mentioned KGs. Furthermore, we propose a framework for finding the most suitable KG for a given setting.",
"title": ""
},
{
"docid": "6347b642cec08bf062f6e5594f805bd3",
"text": "Using a multimethod approach, the authors conducted 4 studies to test life span hypotheses about goal orientations across adulthood. Confirming expectations, in Studies 1 and 2 younger adults reported a primary growth orientation in their goals, whereas older adults reported a stronger orientation toward maintenance and loss prevention. Orientation toward prevention of loss correlated negatively with well-being in younger adults. In older adults, orientation toward maintenance was positively associated with well-being. Studies 3 and 4 extend findings of a self-reported shift in goal orientation to the level of behavioral choice involving cognitive and physical fitness goals. Studies 3 and 4 also examine the role of expected resource demands. The shift in goal orientation is discussed as an adaptive mechanism to manage changing opportunities and constraints across adulthood.",
"title": ""
},
{
"docid": "bb11b0de8915b6f4811cc76dffd6d8b2",
"text": "In this work we introduced SnooperTrack, an algorithm for the automatic detection and tracking of text objects — such as store names, traffic signs, license plates, and advertisements — in videos of outdoor scenes. The purpose is to improve the performances of text detection process in still images by taking advantage of the temporal coherence in videos. We first propose an efficient tracking algorithm using particle filtering framework with original region descriptors. The second contribution is our strategy to merge tracked regions and new detections. We also propose an improved version of our previously published text detection algorithm in still images. Tests indicate that SnooperTrack is fast, robust, enable false positive suppression, and achieved great performances in complex videos of outdoor scenes.",
"title": ""
},
{
"docid": "ef5cfd6c5eaf48805e39a9eb454aa7b9",
"text": "Neural networks are artificial learning systems. For more than two decades, they have help for detecting hostile behaviors in a computer system. This review describes those systems and theirs limits. It defines and gives neural networks characteristics. It also itemizes neural networks which are used in intrusion detection systems. The state of the art on IDS made from neural networks is reviewed. In this paper, we also make a taxonomy and a comparison of neural networks intrusion detection systems. We end this review with a set of remarks and future works that can be done in order to improve the systems that have been presented. This work is the result of a meticulous scan of the literature.",
"title": ""
},
{
"docid": "59e2564e565ead0bc36f9f691f4f70f3",
"text": "INTRODUCTION In recent years “big data” has become something of a buzzword in business, computer science, information studies, information systems, statistics, and many other fields. As technology continues to advance, we constantly generate an ever-increasing amount of data. This growth does not differentiate between individuals and businesses, private or public sectors, institutions of learning and commercial entities. It is nigh universal and therefore warrants further study.",
"title": ""
}
] |
scidocsrr
|
c205d05981a16dc9ba2c9e74a009d8db
|
Neural Cryptanalysis of Classical Ciphers
|
[
{
"docid": "ff10bbde3ed18eea73375540135f99f4",
"text": "Recurrent neural networks (RNNs) represent the state of the art in translation, image captioning, and speech recognition. They are also capable of learning algorithmic tasks such as long addition, copying, and sorting from a set of training examples. We demonstrate that RNNs can learn decryption algorithms – the mappings from plaintext to ciphertext – for three polyalphabetic ciphers (Vigenere, Autokey, and Enigma). Most notably, we demonstrate that an RNN with a 3000-unit Long Short-Term Memory (LSTM) cell can learn the decryption function of the Enigma machine. We argue that our model learns efficient internal representations of these ciphers 1) by exploring activations of individual memory neurons and 2) by comparing memory usage across the three ciphers. To be clear, our work is not aimed at ’cracking’ the Enigma cipher. However, we do show that our model can perform elementary cryptanalysis by running known-plaintext attacks on the Vigenere and Autokey ciphers. Our results indicate that RNNs can learn algorithmic representations of black box polyalphabetic ciphers and that these representations are useful for cryptanalysis.",
"title": ""
},
{
"docid": "f8f1e4f03c6416e9d9500472f5e00dbe",
"text": "Template attack is the most common and powerful profiled side channel attack. It relies on a realistic assumption regarding the noise of the device under attack: the probability density function of the data is a multivariate Gaussian distribution. To relax this assumption, a recent line of research has investigated new profiling approaches mainly by applying machine learning techniques. The obtained results are commensurate, and in some particular cases better, compared to template attack. In this work, we propose to continue this recent line of research by applying more sophisticated profiling techniques based on deep learning. Our experimental results confirm the overwhelming advantages of the resulting new attacks when targeting both unprotected and protected cryptographic implementations.",
"title": ""
}
] |
[
{
"docid": "2679d251d413adf208cb8b764ce55468",
"text": "We compare variations of string comparators based on the Jaro-Winkler comparator and edit distance comparator. We apply the comparators to Census data to see which are better classifiers for matches and nonmatches, first by comparing their classification abilities using a ROC curve based analysis, then by considering a direct comparison between two candidate comparators in record linkage results.",
"title": ""
},
{
"docid": "e0ec22fcdc92abe141aeb3fa67e9e55a",
"text": "A mobile wireless infrastructure-less network is a collection of wireless mobile nodes dynamically forming a temporary network without the use of any preexisting network infrastructure or centralized administration. However, the battery life of these nodes is very limited, if their battery power is depleted fully, then this result in network partition, so these nodes becomes a critical spot in the network. These critical nodes can deplete their battery power earlier because of excessive load and processing for data forwarding. These unbalanced loads turn to increase the chances of nodes failure, network partition and reduce the route lifetime and route reliability of the MANETs. Due to this, energy consumption issue becomes a vital research topic in wireless infrastructure -less networks. The energy efficient routing is a most important design criterion for MANETs. This paper focuses of the routing approaches are based on the minimization of energy consum ption of individual nodes and many other ways. This paper surveys and classifies numerous energy-efficient routing mechanisms proposed for wireless infrastructure-less networks. Also presents detailed comparative study of lager number of energy efficient/power aware routing protocol in MANETs. Aim of this paper to helps the new researchers and application developers to explore an innovative idea for designing more efficient routing protocols. Keywords— Ad hoc Network Routing, Load Distribution, Energy Eff icient, Power Aware, Protocol Stack",
"title": ""
},
{
"docid": "1ee1adcfd73e9685eab4e2abd28183c7",
"text": "We describe an algorithm for generating spherical mosaics from a collection of images acquired from a common optical center. The algorithm takes as input an arbitrary number of partially overlapping images, an adjacency map relating the images, initial estimates of the rotations relating each image to a specified base image, and approximate internal calibration information for the camera. The algorithm's output is a rotation relating each image to the base image, and revised estimates of the camera's internal parameters. Our algorithm is novel in the following respects. First, it requires no user input. (Our image capture instrumentation provides both an adjacency map for the mosaic, and an initial rotation estimate for each image.) Second, it optimizes an objective function based on a global correlation of overlapping image regions. Third, our representation of rotations significantly increases the accuracy of the optimization. Finally, our representation and use of adjacency information guarantees globally consistent rotation estimates. The algorithm has proved effective on a collection of nearly four thousand images acquired from more than eighty distinct optical centers. The experimental results demonstrate that the described global optimization strategy is superior to non-global aggregation of pair-wise correlation terms, and that it successfully generates high-quality mosaics despite significant error in initial rotation estimates.",
"title": ""
},
{
"docid": "1e31afb6d28b0489e67bb63d4dd60204",
"text": "An educational use of Pepper, a personal robot that was developed by SoftBank Robotics Corp. and Aldebaran Robotics SAS, is described. Applying the two concepts of care-receiving robot (CRR) and total physical response (TPR) into the design of an educational application using Pepper, we offer a scenario in which children learn together with Pepper at their home environments from a human teacher who gives a lesson from a remote classroom. This paper is a case report that explains the developmental process of the application that contains three educational programs that children can select in interacting with Pepper. Feedbacks and knowledge obtained from test trials are also described.",
"title": ""
},
{
"docid": "a112a01246256e38b563f616baf02cef",
"text": "This is the second of two papers describing a procedure for the three dimensional nonlinear timehistory analysis of steel framed buildings. An overview of the procedure and the theory for the panel zone element and the plastic hinge beam element are presented in Part I. In this paper, the theory for an efficient new element for modeling beams and columns in steel frames called the elastofiber element is presented, along with four illustrative examples. The elastofiber beam element is divided into three segments two end nonlinear segments and an interior elastic segment. The cross-sections of the end segments are subdivided into fibers. Associated with each fiber is a nonlinear hysteretic stress-strain law for axial stress and strain. This accounts for coupling of nonlinear material behavior between bending about the major and minor axes of the cross-section and axial deformation. Examples presented include large deflection of an elastic cantilever, cyclic loading of a cantilever beam, pushover analysis of a 20-story steel moment-frame building to collapse, and strong ground motion analysis of a 2-story unsymmetric steel moment-frame building. 1Post-Doctoral Scholar, Seismological Laboratory, MC 252-21, California Institute of Technology, Pasadena, CA91125. Email: krishnan@caltech.edu 2Professor, Civil Engineering and Applied Mechanics, MC 104-44, California Institute of Technology, Pasadena, CA-91125",
"title": ""
},
{
"docid": "429c6591223007b40ef7bffc5d9ac4db",
"text": "A compact dual-polarized double E-shaped patch antenna with high isolation for pico base station applications is presented in this communication. The proposed antenna employs a stacked configuration composed of two layers of substrate. Two modified E-shaped patches are printed orthogonally on both sides of the upper substrate. Two probes are used to excite the E-shaped patches, and each probe is connected to one patch separately. A circular patch is printed on the lower substrate to broaden the impedance bandwidth. Both simulated and measured results show that the proposed antenna has a port isolation higher than 30 dB over the frequency band of 2.5 GHz - 2.7 GHz, while the return loss is less than - 15 dB within the band. Moreover, stable radiation pattern with a peak gain of 6.8 dBi - 7.4 dBi is obtained within the band.",
"title": ""
},
{
"docid": "7adf46bb0a4ba677e58aee9968d06293",
"text": "BACKGROUND\nWork-family conflict is a type of interrole conflict that occurs as a result of incompatible role pressures from the work and family domains. Work role characteristics that are associated with work demands refer to pressures arising from excessive workload and time pressures. Literature suggests that work demands such as number of hours worked, workload, shift work are positively associated with work-family conflict, which, in turn is related to poor mental health and negative organizational attitudes. The role of social support has been an issue of debate in the literature. This study examined social support both as a moderator and a main effect in the relationship among work demands, work-to-family conflict, and satisfaction with job and life.\n\n\nOBJECTIVES\nThis study examined the extent to which work demands (i.e., work overload, irregular work schedules, long hours of work, and overtime work) were related to work-to-family conflict as well as life and job satisfaction of nurses in Turkey. The role of supervisory support in the relationship among work demands, work-to-family conflict, and satisfaction with job and life was also investigated.\n\n\nDESIGN AND METHODS\nThe sample was comprised of 243 participants: 106 academic nurses (43.6%) and 137 clinical nurses (56.4%). All of the respondents were female. The research instrument was a questionnaire comprising nine parts. The variables were measured under four categories: work demands, work support (i.e., supervisory support), work-to-family conflict and its outcomes (i.e., life and job satisfaction).\n\n\nRESULTS\nThe structural equation modeling results showed that work overload and irregular work schedules were the significant predictors of work-to-family conflict and that work-to-family conflict was associated with lower job and life satisfaction. Moderated multiple regression analyses showed that social support from the supervisor did not moderate the relationships among work demands, work-to-family conflict, and satisfaction with job and life. Exploratory analyses suggested that social support could be best conceptualized as the main effect directly influencing work-to-family conflict and job satisfaction.\n\n\nCONCLUSION\nNurses' psychological well-being and organizational attitudes could be enhanced by rearranging work conditions to reduce excessive workload and irregular work schedule. Also, leadership development programs should be implemented to increase the instrumental and emotional support of the supervisors.",
"title": ""
},
{
"docid": "97f748ee5667ee8c2230e07881574c22",
"text": "The most widely used signal in clinical practice is the ECG. ECG conveys information regarding the electrical function of the heart, by altering the shape of its constituent waves, namely the P, QRS, and T waves. Thus, the required tasks of ECG processing are the reliable recognition of these waves, and the accurate measurement of clinically important parameters measured from the temporal distribution of the ECG constituent waves. In this paper, we shall review some current trends on ECG pattern recognition. In particular, we shall review non-linear transformations of the ECG, the use of principal component analysis (linear and non-linear), ways to map the transformed data into n-dimensional spaces, and the use of neural networks (NN) based techniques for ECG pattern recognition and classification. The problems we shall deal with are the QRS/PVC recognition and classification, the recognition of ischemic beats and episodes, and the detection of atrial fibrillation. Finally, a generalised approach to the classification problems in n-dimensional spaces will be presented using among others NN, radial basis function networks (RBFN) and non-linear principal component analysis (NLPCA) techniques. The performance measures of the sensitivity and specificity of these algorithms will also be presented using as training and testing data sets from the MIT-BIH and the European ST-T databases.",
"title": ""
},
{
"docid": "f9468884fd24ff36b81fc2016a519634",
"text": "We study a new variant of Arikan's successive cancellation decoder (SCD) for polar codes. We first propose a new decoding algorithm on a new decoder graph, where the various stages of the graph are permuted. We then observe that, even though the usage of the permuted graph doesn't affect the encoder, it can significantly affect the decoding performance of a given polar code. The new permuted successive cancellation decoder (PSCD) typically exhibits a performance degradation, since the polar code is optimized for the standard SCD. We then present a new polar code construction rule matched to the PSCD and show their performance in simulations. For all rates we observe that the polar code matched to a given PSCD performs the same as the original polar code with the standard SCD. We also see that a PSCD with a reversal permutation can lead to a natural decoding order, avoiding the standard bit-reversal decoding order in SCD without any loss in performance.",
"title": ""
},
{
"docid": "101af3fab1f8abb4e2b75a067031048a",
"text": "Although research on trust in an organizational context has advanced considerably in recent years, the literature has yet to produce a set of generalizable propositions that inform our understanding of the organization and coordination of work. We propose that conceptualizing trust as an organizing principle is a powerful way of integrating the diverse trust literature and distilling generalizable implications for how trust affects organizing. We develop the notion of trust as an organizing principle by specifying structuring and mobilizing as two sets of causal pathways through which trust influences several important properties of organizations. We further describe specific mechanisms within structuring and mobilizing that influence interaction patterns and organizational processes. The principal aim of the framework is to advance the literature by connecting the psychological and sociological micro-foundations of trust with the macro-bases of organizing. The paper concludes by demonstrating how the framework can be applied to yield novel insights into traditional views of organizations and to stimulate original and innovative avenues of organizational research that consider both the benefits and downsides of trust. (Trust; Organizing Principle; Structuring; Mobilizing) Introduction In the introduction to this special issue we observed that empirical research on trust was not keeping pace with theoretical developments in the field. We viewed this as a significant limitation and surmised that a special issue devoted to empirical research on trust would serve as a valuable vehicle for advancing the literature. In addition to the lack of empirical research, we would also make the observation that theories and evidence accumulating on trust in organizations is not well integrated and that the literature as a whole lacks coherence. At a general level, extant research provides “accumulating evidence that trust has a number of important benefits for organizations and their members” (Kramer 1999, p. 569). More specifically, Dirks and Ferrin’s (2001) review of the literature points to two distinct means through which trust generates these benefits. The dominant approach emphasizes the direct effects that trust has on important organizational phenomena such as: communication, conflict management, negotiation processes, satisfaction, and performance (both individual and unit). A second, less well studied, perspective points to the enabling effects of trust, whereby trust creates or enhances the conditions, such as positive interpretations of another’s behavior, that are conducive to obtaining organizational outcomes like cooperation and higher performance. The identification of these two perspectives provides a useful way of organizing the literature and generating insight into the mechanisms through which trust influences organizational outcomes. However, we are still left with a set of findings that have yet to be integrated on a theoretical level in a way that yields a set of generalizable propositions about the effects of trust on organizing. We believe this is due to the fact that research has, for the most part, embedded trust into existing theories. As a result, trust has been studied in a variety of different ways to address a wide range of organizational questions. This has yielded a diverse and eclectic body of knowledge about the relationship between trust and various organizational outcomes. 
At the same time, this approach has resulted in a somewhat fragmented view of the role of trust in an organizational context as a whole. In the remainder of this paper we begin to address the challenge of integrating the fragmented trust literature. While it is not feasible to develop a comprehensive framework that synthesizes the vast and diverse trust literature in a single paper, we draw together several key strands that relate to the organizational context. In particular, our paper aims to advance the literature by connecting the psychological and sociological microfoundations of trust with the macro-bases of organizing. Specifically, we propose that reconceptualizing trust as an organizing principle is a fruitful way of viewing the role of trust and comprehending how research on trust advances our understanding of the organization and coordination of economic activity. While it is our goal to generate a framework that coalesces our thinking about the processes through which trust, as an organizing principle, affects organizational life, we are not Pollyannish: trust indubitably has a down side, which has been little researched. We begin by elaborating on the notion of an organizing principle and then move on to conceptualize trust from this perspective. Next, we describe a set of generalizable causal pathways through which trust affects organizing. We then use that framework to identify some exemplars of possible research questions and to point to possible downsides of trust. Organizing Principles As Ouchi (1980) discusses, a fundamental purpose of organizations is to attain goals that require coordinated efforts. Interdependence and uncertainty make goal attainment more difficult and create the need for organizational solutions. The subdivision of work implies that actors must exchange information and rely on others to accomplish organizational goals without having complete control over, or being able to fully monitor, others’ behaviors. Coordinating actions is further complicated by the fact that actors cannot assume that their interests and goals are perfectly aligned. Consequently, relying on others is difficult when there is uncertainty about their intentions, motives, and competencies. Managing interdependence among individuals, units, and activities in the face of behavioral uncertainty constitutes a key organizational challenge. Organizing principles represent a way of solving the problem of interdependence and uncertainty. An organizing principle is the logic by which work is coordinated and information is gathered, disseminated, and processed within and between organizations (Zander and Kogut 1995). An organizing principle represents a heuristic for how actors interpret and represent information and how they select appropriate behaviors and routines for coordinating actions. Examples of organizing principles include: market, hierarchy, and clan (Ouchi 1980). Others have referred to these organizing principles as authority, price, and norms (Adler 2001, Bradach and Eccles 1989, Powell 1990). Each of these principles operates on the basis of distinct mechanisms that orient, enable, and constrain economic behavior. For instance, authority as an organizing principle solves the problem of coordinating action in the face of interdependence and uncertainty by reallocating decision-making rights (Simon 1957, Coleman 1990).
Price-based organizing principles revolve around the idea of making coordination advantageous for each party involved by aligning incentives (Hayek 1948, Alchian and Demsetz 1972). Compliance to internalized norms and the resulting self-control of the clan form is another organizing principle that has been identified as a means of achieving coordinated action (Ouchi 1980). We propose that trust is also an organizing principle and that conceptualizing trust in this way provides a powerful means of integrating the disparate research on trust and distilling generalizable implications for how trust affects organizing. We view trust as most closely related to the clan organizing principle. By definition clans rely on trust (Ouchi 1980). However, trust can and does occur in organizational contexts outside of clans. For instance, there are a variety of organizational arrangements where cooperation in mixed-motive situations depends on trust, such as in repeated strategic alliances (Gulati 1995), buyer-supplier relationships (Dyer and Chu this issue), and temporary groups in organizations (Meyerson et al. 1996). More generally, we believe that trust frequently operates in conjunction with other organizing principles. For instance, Dirks (2000) found that while authority is important for behaviors that can be observed or controlled, trust is important when there exists performance ambiguity or behaviors that cannot be observed or controlled. Because most organizations have a combination of behaviors that can and cannot be observed or controlled, authority and trust co-occur. More generally, we believe that mixed or plural forms are the norm, consistent with Bradach and Eccles (1989). In some situations, however, trust may be the primary organizing principle, such as when monitoring and formal controls are difficult and costly to use. In these cases, trust represents an efficient choice. In other situations, trust may be relied upon due to social, rather than efficiency, considerations. For instance, achieving a sense of personal belonging within a collectivity (Podolny and Barron 1997) and the desire to develop and maintain rewarding social attachments (Granovetter 1985) may serve as the impetus for relying on trust as an organizing principle. Trust as an Organizing Principle At a general level trust is the willingness to accept vulnerability based on positive expectations about another’s intentions or behaviors (Mayer et al. 1995, Rousseau et al. 1998). Because trust represents a positive assumption about the motives and intentions of another party, it allows people to economize on information processing and safeguarding behaviors. By representing an expectation that others will act in a way that serves, or at least is not inimical to, one’s interests (Gambetta 1988), trust as a heuristic is a frame of reference that al",
"title": ""
},
{
"docid": "13897df01d4c03191dd015a04c3a5394",
"text": "Medical or Health related search queries constitute a significant portion of the total number of queries searched everyday on the web. For health queries, the authenticity or authoritativeness of search results is of utmost importance besides relevance. So far, research in automatic detection of authoritative sources on the web has mainly focused on a) link structure based approaches and b) supervised approaches for predicting trustworthiness. However, the aforementioned approaches have some inherent limitations. For example, several content farm and low quality sites artificially boost their link-based authority rankings by forming a syndicate of highly interlinked domains and content which is algorithmically hard to detect. Moreover, the number of positively labeled training samples available for learning trustworthiness is also limited when compared to the size of the web. In this paper, we propose a novel unsupervised approach to detect and promote authoritative domains in health segment using click-through data. We argue that standard IR metrics such as NDCG are relevance-centric and hence are not suitable for evaluating authority. We propose a new authority-centric evaluation metric based on side-by-side judgment of results. Using real world search query sets, we evaluate our approach both quantitatively and qualitatively and show that it succeeds in significantly improving the authoritativeness of results when compared to a standard web ranking baseline. ∗Corresponding Author",
"title": ""
},
{
"docid": "07570935aad8a481ea5e9d422c4f80ca",
"text": "Continuous modification of the protein composition at synapses is a driving force for the plastic changes of synaptic strength, and provides the fundamental molecular mechanism of synaptic plasticity and information storage in the brain. Studying synaptic protein turnover is not only important for understanding learning and memory, but also has direct implication for understanding pathological conditions like aging, neurodegenerative diseases, and psychiatric disorders. Proteins involved in synaptic transmission and synaptic plasticity are typically concentrated at synapses of neurons and thus appear as puncta (clusters) in immunofluorescence microscopy images. Quantitative measurement of the changes in puncta density, intensity, and sizes of specific proteins provide valuable information on their function in synaptic transmission, circuit development, synaptic plasticity, and synaptopathy. Unfortunately, puncta quantification is very labor intensive and time consuming. In this article, we describe a software tool designed for the rapid semi-automatic detection and quantification of synaptic protein puncta from 2D immunofluorescence images generated by confocal laser scanning microscopy. The software, dubbed as SynPAnal (for Synaptic Puncta Analysis), streamlines data quantification for puncta density and average intensity, thereby increases data analysis throughput compared to a manual method. SynPAnal is stand-alone software written using the JAVA programming language, and thus is portable and platform-free.",
"title": ""
},
{
"docid": "b4f82364c5c4900058f50325ccc9e4c4",
"text": "OBJECTIVE\nThis study reports the psychometric properties of the 24-item version of the Diabetes Knowledge Questionnaire (DKQ).\n\n\nRESEARCH DESIGN AND METHODS\nThe original 60-item DKQ was administered to 502 adult Mexican-Americans with type 2 diabetes who are part of the Starr County Diabetes Education Study. The sample was composed of 252 participants and 250 support partners. The subjects were randomly assigned to the educational and social support intervention (n = 250) or to the wait-listed control group (n = 252). A shortened 24-item version of the DKQ was derived from the original instrument after data collection was completed. Reliability was assessed by means of Cronbach's coefficient alpha. To determine validity, differentiation between the experimental and control groups was conducted at baseline and after the educational portion of the intervention.\n\n\nRESULTS\nThe 24-item version of the DKQ (DKQ-24) attained a reliability coefficient of 0.78, indicating internal consistency, and showed sensitivity to the intervention, suggesting construct validation.\n\n\nCONCLUSIONS\nThe DKQ-24 is a reliable and valid measure of diabetes-related knowledge that is relatively easy to administer to either English or Spanish speakers.",
"title": ""
},
{
"docid": "8b2b8eb2d16b28dac8ec8d4572b8db0e",
"text": "Combining meaning, memory, and development, the perennially popular topic of intuition can be approached in a new way. Fuzzy-trace theory integrates these topics by distinguishing between meaning-based gist representations, which support fuzzy (yet advanced) intuition, and superficial verbatim representations of information, which support precise analysis. Here, I review the counterintuitive findings that led to the development of the theory and its most recent extensions to the neuroscience of risky decision making. These findings include memory interference (worse verbatim memory is associated with better reasoning); nonnumerical framing (framing effects increase when numbers are deleted from decision problems); developmental decreases in gray matter and increases in brain connectivity; developmental reversals in memory, judgment, and decision making (heuristics and biases based on gist increase from childhood to adulthood, challenging conceptions of rationality); and selective attention effects that provide critical tests comparing fuzzy-trace theory, expected utility theory, and its variants (e.g., prospect theory). Surprising implications for judgment and decision making in real life are also discussed, notably, that adaptive decision making relies mainly on gist-based intuition in law, medicine, and public health.",
"title": ""
},
{
"docid": "fb58d6fe77092be4bce5dd0926c563de",
"text": "We present the Mind the Gap Model (MGM), an approach for interpretable feature extraction and selection. By placing interpretability criteria directly into the model, we allow for the model to both optimize parameters related to interpretability and to directly report a global set of distinguishable dimensions to assist with further data exploration and hypothesis generation. MGM extracts distinguishing features on real-world datasets of animal features, recipes ingredients, and disease co-occurrence. It also maintains or improves performance when compared to related approaches. We perform a user study with domain experts to show the MGM’s ability to help with dataset exploration.",
"title": ""
},
{
"docid": "6c221c4085c6868640c236b4dd72f777",
"text": "Resilience has been most frequently defined as positive adaptation despite adversity. Over the past 40 years, resilience research has gone through several stages. From an initial focus on the invulnerable or invincible child, psychologists began to recognize that much of what seems to promote resilience originates outside of the individual. This led to a search for resilience factors at the individual, family, community - and, most recently, cultural - levels. In addition to the effects that community and culture have on resilience in individuals, there is growing interest in resilience as a feature of entire communities and cultural groups. Contemporary researchers have found that resilience factors vary in different risk contexts and this has contributed to the notion that resilience is a process. In order to characterize the resilience process in a particular context, it is necessary to identify and measure the risk involved and, in this regard, perceived discrimination and historical trauma are part of the context in many Aboriginal communities. Researchers also seek to understand how particular protective factors interact with risk factors and with other protective factors to support relative resistance. For this purpose they have developed resilience models of three main types: \"compensatory,\" \"protective,\" and \"challenge\" models. Two additional concepts are resilient reintegration, in which a confrontation with adversity leads individuals to a new level of growth, and the notion endorsed by some Aboriginal educators that resilience is an innate quality that needs only to be properly awakened.The review suggests five areas for future research with an emphasis on youth: 1) studies to improve understanding of what makes some Aboriginal youth respond positively to risk and adversity and others not; 2) case studies providing empirical confirmation of the theory of resilient reintegration among Aboriginal youth; 3) more comparative studies on the role of culture as a resource for resilience; 4) studies to improve understanding of how Aboriginal youth, especially urban youth, who do not live in self-governed communities with strong cultural continuity can be helped to become, or remain, resilient; and 5) greater involvement of Aboriginal researchers who can bring a nonlinear world view to resilience research.",
"title": ""
},
{
"docid": "4c4bfcadd71890ccce9e58d88091f6b3",
"text": "With the dramatic growth of the game industry over the past decade, its rapid inclusion in many sectors of today’s society, and the increased complexity of games, game development has reached a point where it is no longer humanly possible to use only manual techniques to create games. Large parts of games need to be designed, built, and tested automatically. In recent years, researchers have delved into artificial intelligence techniques to support, assist, and even drive game development. Such techniques include procedural content generation, automated narration, player modelling and adaptation, and automated game design. This research is still very young, but already the games industry is taking small steps to integrate some of these techniques in their approach to design. The goal of this seminar was to bring together researchers and industry representatives who work at the forefront of artificial intelligence (AI) and computational intelligence (CI) in games, to (1) explore and extend the possibilities of AI-driven game design, (2) to identify the most viable applications of AI-driven game design in the game industry, and (3) to investigate new approaches to AI-driven game design. To this end, the seminar included a wide range of researchers and developers, including specialists in AI/CI for abstract games, commercial video games, and serious games. Thus, it fostered a better understanding of and unified vision on AI-driven game design, using input from both scientists as well as AI specialists from industry. Seminar November 19–24, 2017 – http://www.dagstuhl.de/17471 1998 ACM Subject Classification I.2.1 Artificial Intelligence Games",
"title": ""
},
{
"docid": "da61b8bd6c1951b109399629f47dad16",
"text": "In this paper, we introduce an approach for distributed nonlinear control of multiple hovercraft-type underactuated vehicles with bounded and unidirectional inputs. First, a bounded nonlinear controller is given for stabilization and tracking of a single vehicle, using a cascade backstepping method. Then, this controller is combined with a distributed gradient-based control for multi-vehicle formation stabilization using formation potential functions previously constructed. The vehicles are used in the Caltech Multi-Vehicle Wireless Testbed (MVWT). We provide simulation and experimental results for stabilization and tracking of a single vehicle, and a simulation of stabilization of a six-vehicle formation, demonstrating that in all cases the control bounds and the control objective are satisfied.",
"title": ""
},
{
"docid": "48b88774957a6d30ae9d0a97b9643647",
"text": "The defect detection on manufactures is extremely important in the optimization of industrial processes; particularly, the visual inspection plays a fundamental role. The visual inspection is often carried out by a human expert. However, new technology features have made this inspection unreliable. For this reason, many researchers have been engaged to develop automatic analysis processes of manufactures and automatic optical inspections in the industrial production of printed circuit boards. Among the defects that could arise in this industrial process, those of the solder joints are very important, because they can lead to an incorrect functioning of the board; moreover, the amount of the solder paste can give some information on the quality of the industrial process. In this paper, a neural network-based automatic optical inspection system for the diagnosis of solder joint defects on printed circuit boards assembled in surface mounting technology is presented. The diagnosis is handled as a pattern recognition problem with a neural network approach. Five types of solder joints have been classified in respect to the amount of solder paste in order to perform the diagnosis with a high recognition rate and a detailed classification able to give information on the quality of the manufacturing process. The images of the boards under test are acquired and then preprocessed to extract the region of interest for the diagnosis. Three types of feature vectors are evaluated from each region of interest, which are the images of the solder joints under test, by exploiting the properties of the wavelet transform and the geometrical characteristics of the preprocessed images. The performances of three different classifiers which are a multilayer perceptron, a linear vector quantization, and a K-nearest neighbor classifier are compared. The n-fold cross-validation has been exploited to select the best architecture for the neural classifiers, while a number of experiments have been devoted to estimating the best value of K in the K-NN. The results have proved that the MLP network fed with the GW-features has the best recognition rate. This approach allows to carry out the diagnosis burden on image processing, feature extraction, and classification algorithms, reducing the cost and the complexity of the acquisition system. In fact, the experimental results suggest that the reason for the high recognition rate in the solder joint classification is due to the proper preprocessing steps followed as well as to the information contents of the features",
"title": ""
},
{
"docid": "80a4de6098a4821e52ccc760db2aae18",
"text": "This article presents P-Sense, a participatory sensing application for air pollution monitoring and control. The paper describes in detail the system architecture and individual components of a successfully implemented application. In addition, the paper points out several other research-oriented problems that need to be addressed before these applications can be effectively implemented in practice, in a large-scale deployment. Security, privacy, data visualization and validation, and incentives are part of our work-in-progress activities",
"title": ""
}
] |
scidocsrr
|
5e7c2be0d66e726a1d4bd7d249df0187
|
Psychopathic Personality: Bridging the Gap Between Scientific Evidence and Public Policy.
|
[
{
"docid": "32b5458ced294a01654f3747273db08d",
"text": "Prior studies of childhood aggression have demonstrated that, as a group, boys are more aggressive than girls. We hypothesized that this finding reflects a lack of research on forms of aggression that are relevant to young females rather than an actual gender difference in levels of overall aggressiveness. In the present study, a form of aggression hypothesized to be typical of girls, relational aggression, was assessed with a peer nomination instrument for a sample of 491 third-through sixth-grade children. Overt aggression (i.e., physical and verbal aggression as assessed in past research) and social-psychological adjustment were also assessed. Results provide evidence for the validity and distinctiveness of relational aggression. Further, they indicated that, as predicted, girls were significantly more relationally aggressive than were boys. Results also indicated that relationally aggressive children may be at risk for serious adjustment difficulties (e.g., they were significantly more rejected and reported significantly higher levels of loneliness, depression, and isolation relative to their nonrelationally aggressive peers).",
"title": ""
}
] |
[
{
"docid": "d364aaa161cc92e28697988012c35c2a",
"text": "Many people believe that information that is stored in long-term memory is permanent, citing examples of \"retrieval techniques\" that are alleged to uncover previously forgotten information. Such techniques include hypnosis, psychoanalytic procedures, methods for eliciting spontaneous and other conscious recoveries, and—perhaps most important—the electrical stimulation of the brain reported by Wilder Penfield and his associates. In this article we first evaluate • the evidence and conclude that, contrary to apparent popular belief, the evidence in no way confirms the view that all memories are permanent and thus potentially recoverable. We then describe some failures that resulted from attempts to elicit retrieval of previously stored information and conjecture what circumstances might cause information stored in memory to be irrevocably destroyed. Few would deny the existence of a phenomenon called \"forgetting,\" which is evident in the common observation that information becomes less available as the interval increases between the time of the information's initial acquisition and the time of its attempted retrieval. Despite the prevalence of the phenomenon, the factors that underlie forgetting have proved to be rather elusive, and the literature abounds with hypothesized mechanisms to account for the observed data. In this article we shall focus our attention on what is perhaps the fundamental issue concerning forgetting; Does forgetting consist of an actual loss of stored information, or does it result from a loss of access to information, which, once stored, remains forever? It should be noted at the outset that this question may be impossible to resolve in an absolute sense. Consider the following thought experiment. A person (call him Geoffrey) observes some event, say a traffic accident. During the period of observation, a movie camera strapped to Geoffrey's head records the event as Geoffrey experiences it. Some time later, Geoffrey attempts to recall and Vol. 35, No. S, 409-420 describe the event with the aid of some retrieval technique (e.g., hypnosis or brain stimulation), which is alleged to allow recovery of any information stored in his brain. While Geoffrey describes the event, a second person (Elizabeth) watches the movie that has been made of the event. Suppose, now, that Elizabeth is unable to decide whether Geoffrey is describing his memory or the movie—in other words, memory and movie are indistinguishable. Such a finding would constitute rather impressive support for the position held by many people that the mind registers an accurate representation of reality and that this information is stored permanently. But suppose, on the other hand, that Geoffrey's report—even with the aid of the miraculous retrieval technique—is incomplete, sketchy, and inaccurate, and furthermore, suppose that the accuracy of his report deteriorates over time. Such a finding, though consistent with the view that forgetting consists of information loss, would still be inconclusive, because it could be argued that the retrieval technique—no matter what it was— was simply not good enough to disgorge the information, which remained buried somewhere in the recesses of Geoffrey's brain. Thus, the question of information loss versus This article was written while E. Loftus was a fellow at the Center for Advanced Study in the Behavioral Sciences, Stanford, California, and G. Loftus was a visiting scholar in the Department of Psychology at Stanford University. 
James Fries generously picked apart an earlier version of this article. Paul Baltes translated the writings of Johann Nicolas Tetens (177?). The following financial sources are gratefully acknowledged: (a) National Science Foundation (NSF) Grant BNS 76-2337 to G. Loftus; (b) 'NSF Grant ENS 7726856 to E. Loftus; and (c) NSF Grant BNS 76-22943 and an Andrew Mellon Foundation grant to the Center for Advanced Study in the Behavioral Sciences. Requests for reprints should be sent to Elizabeth Loftus, Department of Psychology, University of Washington, Seattle, Washington 98195. AMERICAN PSYCHOLOGIST • MAY 1980 * 409 Copyright 1980 by the American Psychological Association, Inc. 0003-066X/80/3505-0409$00.75 retrieval failure may be unanswerable in principle. Nonetheless it often becomes necessary to choose sides. In the scientific arena, for example, a theorist constructing a model of memory may— depending on the details of the model'—be forced to adopt one position or the other. In fact, several leading theorists have suggested that although loss from short-term memory does occur, once material is registered in long-term memory, the information is never lost from the system, although it may normally be inaccessible (Shiffrin & Atkinson, 1969; Tulving, 1974). The idea is not new, however. Two hundred years earlier, the German philosopher Johann Nicolas Tetens (1777) wrote: \"Each idea does not only leave a trace or a consequent of that trace somewhere in the body, but each of them can be stimulated—-even if it is not possible to demonstrate this in a given situation\" (p, 7S1). He was explicit about his belief that certain ideas may seem to be forgotten, but that actually they are only enveloped by other ideas and, in truth, are \"always with us\" (p, 733). Apart from theoretical interest, the position one takes on the permanence of memory traces has important practical consequences. It therefore makes sense to air the issue from time to time, which is what we shall do here, The purpose of this paper is threefold. We shall first report some data bearing on people's beliefs about the question of information loss versus retrieval failure. To anticipate our findings, our survey revealed that a substantial number of the individuals queried take the position that stored information is permanent'—-or in other words, that all forgetting results from retrieval failure. In support of their answers, people typically cited data from some variant of the thought experiment described above, that is, they described currently available retrieval techniques that are alleged to uncover previously forgotten information. Such techniques include hypnosis, psychoanalytic procedures (e.g., free association), and— most important—the electrical stimulation of the brain reported by Wilder Penfield and his associates (Penfield, 1969; Penfield & Perot, 1963; Penfield & Roberts, 1959). The results of our survey lead to the second purpose of this paper, which is to evaluate this evidence. Finally, we shall describe some interesting failures that have resulted from attempts to elicit retrieval of previously stored information. These failures lend support to the contrary view that some memories are apparently modifiable, and that consequently they are probably unrecoverable. Beliefs About Memory In an informal survey, 169 individuals from various parts of the U.S. were asked to give their views about how memory works. Of these, 75 had formal graduate training in psychology, while the remaining 94 did not. 
The nonpsychologists had varied occupations. For example, lawyers, secretaries, taxicab drivers, physicians, philosophers, fire investigators, and even an 11-year-old child participated. They were given this question: Which of these statements best reflects your view on how human memory works? 1. Everything we learn is permanently stored in the mind, although sometimes particular details are not accessible. With hypnosis, or other special techniques, these inaccessible details could eventually be recovered. 2. Some details that we learn may be permanently lost from memory. Such details would never be» able to be recovered by hypnosis, or any other special technique, because these details are simply no longer there. Please elaborate briefly or give any reasons you may have for your view. We found that 84% of the psychologists chose Position 1, that is, they indicated a belief that all information in long-term memory is there, even though much of it cannot be retrieved; 14% chose Position 2, and 2% gave some other answer. A somewhat smaller percentage, 69%, of the nonpsychologists indicated a belief in Position 1; 23% chose Position 2, while 8% did not make a clear choice. What reasons did people give for their belief? The most common reason for choosing Position 1 was based on personal experience and involved the occasional recovery of an idea that the person had not thought about for quite some time. For example, one person wrote: \"I've experienced and heard too many descriptions of spontaneous recoveries of ostensibly quite trivial memories, which seem to have been triggered by just the right set of a person's experiences.\" A second reason for a belief in Position 1, commonly given by persons trained in psychology, was knowledge of the work of Wilder Penfield. One psychologist wrote: \"Even though Statement 1 is untestable, I think that evidence, weak though it is, such as Penfield's work, strongly suggests it may be correct.\" Occasionally respondents offered a comment about 410 • MAY 1980 • AMERICAN PSYCHOLOGIST hypnosis, and more rarely about psychoanalysis and repression, sodium pentothal, or even reincarnation, to support their belief in the permanence of memory. Admittedly, the survey was informally conducted, the respondents were not selected randomly, and the question itself may have pressured people to take sides when their true belief may have been a position in between. Nevertheless, the results suggest a widespread belief in the permanence of memories and give us some idea of the reasons people offer in support of this belief.",
"title": ""
},
{
"docid": "702df543119d648be859233bfa2b5d03",
"text": "We review more than 200 applications of neural networks in image processing and discuss the present and possible future role of neural networks, especially feed-forward neural networks, Kohonen feature maps and Hop1eld neural networks. The various applications are categorised into a novel two-dimensional taxonomy for image processing algorithms. One dimension speci1es the type of task performed by the algorithm: preprocessing, data reduction=feature extraction, segmentation, object recognition, image understanding and optimisation. The other dimension captures the abstraction level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level,ion level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level, object-set-level and scene characterisation. Each of the six types of tasks poses speci1c constraints to a neural-based approach. These speci1c conditions are discussed in detail. A synthesis is made of unresolved problems related to the application of pattern recognition techniques in image processing and speci1cally to the application of neural networks. Finally, we present an outlook into the future application of neural networks and relate them to novel developments. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ca807d3bed994a8e7492898e6bfe6dd2",
"text": "This paper proposes state-of-charge (SOC) and remaining charge estimation algorithm of each cell in series-connected lithium-ion batteries. SOC and remaining charge information are indicators for diagnosing cell-to-cell variation; thus, the proposed algorithm can be applied to SOC- or charge-based balancing in cell balancing controller. Compared to voltage-based balancing, SOC and remaining charge information improve the performance of balancing circuit but increase computational complexity which is a stumbling block in implementation. In this work, a simple current sensor-less SOC estimation algorithm with estimated current equalizer is used to achieve aforementioned object. To check the characteristics and validate the feasibility of the proposed method, a constant current discharging/charging profile is applied to a series-connected battery pack (twelve 2.6Ah Li-ion batteries). The experimental results show its applicability to SOC- and remaining charge-based balancing controller with high estimation accuracy.",
"title": ""
},
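The battery passage above estimates per-cell SOC and remaining charge to drive balancing decisions. As a point of comparison only, here is a minimal coulomb-counting sketch for a series pack; it is not the current sensor-less estimator described in the passage, and the initial SOCs, sample period and current trace are illustrative assumptions.

```python
# Plain coulomb-counting SOC update for a series pack (illustrative only).
# This is NOT the sensor-less algorithm from the passage above; it simply
# shows how per-cell SOC and remaining charge could feed a balancing decision.
import numpy as np

capacity_ah = np.full(12, 2.6)            # twelve 2.6 Ah cells, as in the passage
soc = np.array([0.80, 0.78, 0.82] * 4)    # assumed initial SOCs
dt_s = 1.0                                # assumed 1 s sample period

def update_soc(soc, current_a, dt_s, capacity_ah):
    """Discharge current is positive; SOC decreases as charge is drawn."""
    return np.clip(soc - current_a * dt_s / (capacity_ah * 3600.0), 0.0, 1.0)

for _ in range(600):                      # 10 minutes of a 1.3 A (C/2) discharge
    soc = update_soc(soc, current_a=1.3, dt_s=dt_s, capacity_ah=capacity_ah)

remaining_ah = soc * capacity_ah
# A charge-based balancer would bleed the cells holding the most charge first.
print("remaining charge per cell (Ah):", np.round(remaining_ah, 3))
print("cells to bleed first:", np.argsort(remaining_ah)[::-1][:3])
```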
{
"docid": "1bf43801d05551f376464d08893b211c",
"text": "A Large number of digital text information is generated every day. Effectively searching, managing and exploring the text data has become a main task. In this paper, we first represent an introduction to text mining and a probabilistic topic model Latent Dirichlet allocation. Then two experiments are proposed Wikipedia articles and users’ tweets topic modelling. The former one builds up a document topic model, aiming to a topic perspective solution on searching, exploring and recommending articles. The latter one sets up a user topic model, providing a full research and analysis over Twitter users’ interest. The experiment process including data collecting, data pre-processing and model training is fully documented and commented. Further more, the conclusion and application of this paper could be a useful computation tool for social and business research.",
"title": ""
},
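The topic-modelling passage above walks through an LDA pipeline (data collection, pre-processing, model training). Below is a minimal sketch of that pipeline using gensim; the toy documents, topic count and whitespace tokenisation are assumptions for illustration and are not taken from the paper.

```python
# Minimal LDA topic-modelling sketch (illustrative only; the toy documents,
# topic count and stop-word handling are assumptions, not from the paper).
from gensim import corpora, models

documents = [
    "wikipedia article about machine learning and topic models",
    "tweets about football matches and favourite teams",
    "machine learning methods for text mining of tweets",
]

# Simple whitespace tokenisation; a real pipeline would also remove stop words.
texts = [doc.lower().split() for doc in documents]

dictionary = corpora.Dictionary(texts)                   # map tokens to integer ids
corpus = [dictionary.doc2bow(text) for text in texts]    # bag-of-words vectors

lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2,
                      random_state=0, passes=10)

for topic_id, words in lda.show_topics(num_topics=2, num_words=5, formatted=False):
    print(topic_id, [w for w, _ in words])
```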
{
"docid": "e85e8b54351247d5f20bf1756a133a08",
"text": "In high speed ADC, comparator influences the overall performance of ADC directly. This paper describes a very high speed and high resolution preamplifier comparator. The comparator use a self biased differential amp to increase the output current sinking and sourcing capability. The threshold and width of the new comparator can be reduced to the millivolt (mV) range, the resolution and the dynamic characteristics are good. Based on UMC 0. 18um CMOS process model, simulated results show the comparator can work under a 25dB gain, 55MHz speed and 210. 10μW power .",
"title": ""
},
{
"docid": "7e38ba11e394acd7d5f62d6a11253075",
"text": "The body-schema concept is revisited in the context of embodied cognition, further developing the theory formulated by Marc Jeannerod that the motor system is part of a simulation network related to action, whose function is not only to shape the motor system for preparing an action (either overt or covert) but also to provide the self with information on the feasibility and the meaning of potential actions. The proposed computational formulation is based on a dynamical system approach, which is linked to an extension of the equilibrium-point hypothesis, called Passive Motor Paradigm: this dynamical system generates goal-oriented, spatio-temporal, sensorimotor patterns, integrating a direct and inverse internal model in a multi-referential framework. The purpose of such computational model is to operate at the same time as a general synergy formation machinery for planning whole-body actions in humanoid robots and/or for predicting coordinated sensory-motor patterns in human movements. In order to illustrate the computational approach, the integration of simultaneous, even partially conflicting tasks will be analyzed in some detail with regard to postural-focal dynamics, which can be defined as the fusion of a focal task, namely reaching a target with the whole-body, and a postural task, namely maintaining overall stability.",
"title": ""
},
{
"docid": "b5cc41f689a1792b544ac66a82152993",
"text": "0020-7225/$ see front matter 2009 Elsevier Ltd doi:10.1016/j.ijengsci.2009.08.001 * Corresponding author. Tel.: +66 2 9869009x220 E-mail address: thanan@siit.tu.ac.th (T. Leephakp Nowadays, Pneumatic Artificial Muscle (PAM) has become one of the most widely-used fluid-power actuators which yields remarkable muscle-like properties such as high force to weight ratio, soft and flexible structure, minimal compressed-air consumption and low cost. To obtain optimum design and usage, it is necessary to understand mechanical behaviors of the PAM. In this study, the proposed models are experimentally derived to describe mechanical behaviors of the PAMs. The experimental results show a non-linear relationship between contraction as well as air pressure within the PAMs and a pulling force of the PAMs. Three different sizes of PAMs available in industry are studied for empirical modeling and simulation. The case studies are presented to verify close agreement on the simulated results to the experimental results when the PAMs perform under various loads. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "174fb8b7cb0f45bed49a50ce5ad19c88",
"text": "De-noising and extraction of the weak signature are crucial to fault prognostics in which case features are often very weak and masked by noise. The wavelet transform has been widely used in signal de-noising due to its extraordinary time-frequency representation capability. In this paper, the performance of wavelet decomposition-based de-noising and wavelet filter-based de-noising methods are compared based on signals from mechanical defects. The comparison result reveals that wavelet filter is more suitable and reliable to detect a weak signature of mechanical impulse-like defect signals, whereas the wavelet decomposition de-noising method can achieve satisfactory results on smooth signal detection. In order to select optimal parameters for the wavelet filter, a two-step optimization process is proposed. Minimal Shannon entropy is used to optimize the Morlet wavelet shape factor. A periodicity detection method based on singular value decomposition (SVD) is used to choose the appropriate scale for the wavelet transform. The signal de-noising results from both simulated signals and experimental data are presented and both support the proposed method. r 2005 Elsevier Ltd. All rights reserved. see front matter r 2005 Elsevier Ltd. All rights reserved. jsv.2005.03.007 ding author. Tel.: +1 414 229 3106; fax: +1 414 229 3107. resses: haiqiu@uwm.edu (H. Qiu), jaylee@uwm.edu (J. Lee), jinglin@mail.ioc.ac.cn (J. Lin).",
"title": ""
},
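The de-noising passage above selects the Morlet wavelet shape factor by minimal Shannon entropy. The sketch below illustrates that selection step only, on a simulated impulse-like defect signal; the test signal, candidate shape factors and centre frequency are assumptions, and the paper's SVD-based scale selection is not reproduced here.

```python
# Sketch of Morlet-wavelet filtering with the shape factor chosen by minimal
# Shannon entropy, in the spirit of the two-step optimisation described above.
import numpy as np

def morlet(t, fc, beta):
    """Real Morlet wavelet: a cosine carrier at fc under a Gaussian envelope
    whose width is controlled by the shape factor beta."""
    return np.exp(-(beta * t) ** 2) * np.cos(2 * np.pi * fc * t)

def shannon_entropy(x):
    p = np.abs(x) / (np.sum(np.abs(x)) + 1e-12)   # normalise to a distribution
    p = p[p > 0]
    return -np.sum(p * np.log(p))

fs = 10_000.0
t = np.arange(0, 1.0, 1 / fs)
# Simulated defect signal: periodic decaying bursts buried in Gaussian noise.
signal = np.random.normal(0.0, 1.0, t.size)
for start in np.arange(0.05, 1.0, 0.1):
    idx = (t >= start) & (t < start + 0.01)
    signal[idx] += 5 * np.exp(-300 * (t[idx] - start)) * np.sin(2 * np.pi * 2000 * t[idx])

tau = np.arange(-0.005, 0.005, 1 / fs)
best = None
for beta in (200.0, 400.0, 800.0, 1600.0):         # candidate shape factors (assumed)
    w = morlet(tau, fc=2000.0, beta=beta)
    filtered = np.convolve(signal, w, mode="same")
    h = shannon_entropy(filtered)
    if best is None or h < best[0]:
        best = (h, beta)

print(f"selected shape factor: {best[1]} (entropy {best[0]:.2f})")
```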
{
"docid": "63f20dd528d54066ed0f189e4c435fe7",
"text": "In many specific laboratories the students use only a PLC simulator software, because the hardware equipment is expensive. This paper presents a solution that allows students to study both the hardware and software parts, in the laboratory works. The hardware part of solution consists in an old plotter, an adapter board, a PLC and a HMI. The software part of this solution is represented by the projects of the students, in which they developed applications for programming the PLC and the HMI. This equipment can be made very easy and can be used in university labs by students, so that they design and test their applications, from low to high complexity [1], [2].",
"title": ""
},
{
"docid": "363a465d626fec38555563722ae92bb1",
"text": "A novel reverse-conducting insulated-gate bipolar transistor (RC-IGBT) featuring an oxide trench placed between the n-collector and the p-collector and a floating p-region (p-float) sandwiched between the n-drift and n-collector is proposed. First, the new structure introduces a high-resistance collector short resistor at low current density, which leads to the suppression of the snapback effect. Second, the collector short resistance can be adjusted by varying the p-float length without increasing the collector cell length. Third, the p-float layer also acts as the base of the n-collector/p-float/n-drift transistor which can be activated and offers a low-resistance current path at high current densities, which contributes to the low on-state voltage of the integrated freewheeling diode and the fast turnoff. As simulations show, the proposed RC-IGBT shows snapback-free output characteristics and faster turnoff compared with the conventional RC-IGBT.",
"title": ""
},
{
"docid": "3dfb419706ae85d232753a085dc145f7",
"text": "This chapter describes the different steps of designing, building, simulating, and testing an intelligent flight control module for an increasingly popular unmanned aerial vehicle (UAV), known as a quadrotor. It presents an in-depth view of the modeling of the kinematics, dynamics, and control of such an interesting UAV. A quadrotor offers a challenging control problem due to its highly unstable nature. An effective control methodology is therefore needed for such a unique airborne vehicle. The chapter starts with a brief overview on the quadrotor's background and its applications, in light of its advantages. Comparisons with other UAVs are made to emphasize the versatile capabilities of this special design. For a better understanding of the vehicle's behavior, the quadrotor's kinematics and dynamics are then detailed. This yields the equations of motion, which are used later as a guideline for developing the proposed intelligent flight control scheme. In this chapter, fuzzy logic is adopted for building the flight controller of the quadrotor. It has been witnessed that fuzzy logic control offers several advantages over certain types of conventional control methods, specifically in dealing with highly nonlinear systems and modeling uncertainties. Two types of fuzzy inference engines are employed in the design of the flight controller, each of which is explained and evaluated. For testing the designed intelligent flight controller, a simulation environment was first developed. The simulations were made as realistic as possible by incorporating environmental disturbances such as wind gust and the ever-present sensor noise. The proposed controller was then tested on a real test-bed built specifically for this project. Both the simulator and the real quadrotor were later used for conducting different attitude stabilization experiments to evaluate the performance of the proposed control strategy. The controller's performance was also benchmarked against conventional control techniques such as input-output linearization, backstepping and sliding mode control strategies. Conclusions were then drawn based on the conducted experiments and their results.",
"title": ""
},
{
"docid": "50906e5d648b7598c307b09975daf2d8",
"text": "Digitization forces industries to adapt to changing market conditions and consumer behavior. Exponential advances in technology, increased consumer power and sharpened competition imply that companies are facing the menace of commoditization. To sustainably succeed in the market, obsolete business models have to be adapted and new business models can be developed. Differentiation and unique selling propositions through innovation as well as holistic stakeholder engagement help companies to master the transformation. To enable companies and start-ups facing the implications of digital change, a tool was created and designed specifically for this demand: the Business Model Builder. This paper investigates the process of transforming the Business Model Builder into a software-supported digitized version. The digital twin allows companies to simulate the iterative adjustment of business models to constantly changing market conditions as well as customer needs on an ongoing basis. The user can modify individual variables, understand interdependencies and see the impact on the result of the business case, i.e. earnings before interest and taxes (EBIT) or economic value added (EVA). The simulation of a business models accordingly provides the opportunity to generate a dynamic view of the business model where any changes of input variables are considered in the result, the business case. Thus, functionality, feasibility and profitability of a business model can be reviewed, tested and validated in the digital simulation tool.",
"title": ""
},
{
"docid": "48eacd86c14439454525e5a570db083d",
"text": "RATIONALE, AIMS AND OBJECTIVES\nTotal quality in coagulation testing is a necessary requisite to achieve clinically reliable results. Evidence was provided that poor standardization in the extra-analytical phases of the testing process has the greatest influence on test results, though little information is available so far on prevalence and type of pre-analytical variability in coagulation testing.\n\n\nMETHODS\nThe present study was designed to describe all pre-analytical problems on inpatients routine and stat samples recorded in our coagulation laboratory over a 2-year period and clustered according to their source (hospital departments).\n\n\nRESULTS\nOverall, pre-analytic problems were identified in 5.5% of the specimens. Although the highest frequency was observed for paediatric departments, in no case was the comparison of the prevalence among the different hospital departments statistically significant. The more frequent problems could be referred to samples not received in the laboratory following a doctor's order (49.3%), haemolysis (19.5%), clotting (14.2%) and inappropriate volume (13.7%). Specimens not received prevailed in the intensive care unit, surgical and clinical departments, whereas clotted and haemolysed specimens were those most frequently recorded from paediatric and emergency departments, respectively. The present investigation demonstrates a high prevalence of pre-analytical problems affecting samples for coagulation testing.\n\n\nCONCLUSIONS\nFull implementation of a total quality system, encompassing a systematic error tracking system, is a valuable tool to achieve meaningful information on the local pre-analytic processes most susceptible to errors, enabling considerations on specific responsibilities and providing the ideal basis for an efficient feedback within the hospital departments.",
"title": ""
},
{
"docid": "3f6cbad208a819fc8fc6a46208197d59",
"text": "The use of visemes as atomic speech units in visual speech analysis and synthesis systems is well-established. Viseme labels are determined using a many-to-one phoneme-to-viseme mapping. However, due to the visual coarticulation effects, an accurate mapping from phonemes to visemes should define a many-to-many mapping scheme. In this research it was found that neither the use of standardized nor speaker-dependent many-to-one viseme labels could satisfy the quality requirements of concatenative visual speech synthesis. Therefore, a novel technique to define a many-to-many phoneme-to-viseme mapping scheme is introduced, which makes use of both treebased and k-means clustering approaches. We show that these many-to-many viseme labels more accurately describe the visual speech information as compared to both phoneme-based and many-toone viseme-based speech labels. In addition, we found that the use of these many-to-many visemes improves the precision of the segment selection phase in concatenative visual speech synthesis using limited speech databases. Furthermore, the resulting synthetic visual speech was both objectively and subjectively found to be of higher quality when the many-to-many visemes are used to describe the speech database as well as the synthesis targets.",
"title": ""
},
{
"docid": "1afdefb31d7b780bb78b59ca8b0d3d8a",
"text": "Convolutional Neural Network (CNN) is a very powerful approach to extract discriminative local descriptors for effective image search. Recent work adopts fine-tuned strategies to further improve the discriminative power of the descriptors. Taking a different approach, in this paper, we propose a novel framework to achieve competitive retrieval performance. Firstly, we propose various masking schemes, namely SIFT-mask, SUM-mask, and MAX-mask, to select a representative subset of local convolutional features and remove a large number of redundant features. We demonstrate that this can effectively address the burstiness issue and improve retrieval accuracy. Secondly, we propose to employ recent embedding and aggregating methods to further enhance feature discriminability. Extensive experiments demonstrate that our proposed framework achieves state-of-the-art retrieval accuracy.",
"title": ""
},
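The retrieval passage above proposes masking schemes (SIFT-mask, SUM-mask, MAX-mask) that keep a representative subset of local convolutional features before embedding and aggregation. The sketch below illustrates only the MAX-mask idea on a random feature map; the tensor shape, sum-pooling aggregation and L2 normalisation are assumptions made for illustration rather than the paper's exact pipeline.

```python
# Illustrative MAX-mask selection over a convolutional feature map: keep only
# the spatial locations that are maximal in at least one channel, then build
# a single global descriptor from the surviving local descriptors.
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 512, 7, 7
fmap = rng.random((C, H, W))                          # one image's conv activations

flat = fmap.reshape(C, H * W)                         # local descriptors as columns
max_locations = np.unique(np.argmax(flat, axis=1))    # MAX-mask: winners per channel

selected = flat[:, max_locations]                     # masked subset of descriptors
global_desc = selected.sum(axis=1)                    # aggregate (sum pooling)
global_desc /= np.linalg.norm(global_desc) + 1e-12    # L2 normalise for retrieval

print(selected.shape, global_desc.shape)
```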
{
"docid": "07348109c7838032850c039f9a463943",
"text": "Ceramics are widely used biomaterials in prosthetic dentistry due to their attractive clinical properties. They are aesthetically pleasing with their color, shade and luster, and they are chemically stable. The main constituents of dental ceramic are Si-based inorganic materials, such as feldspar, quartz, and silica. Traditional feldspar-based ceramics are also referred to as “Porcelain”. The crucial difference between a regular ceramic and a dental ceramic is the proportion of feldspar, quartz, and silica contained in the ceramic. A dental ceramic is a multiphase system, i.e. it contains a dispersed crystalline phase surrounded by a continuous amorphous phase (a glassy phase). Modern dental ceramics contain a higher proportion of the crystalline phase that significantly improves the biomechanical properties of ceramics. Examples of these high crystalline ceramics include lithium disilicate and zirconia.",
"title": ""
},
{
"docid": "affa48f455d5949564302b4c23324458",
"text": "MicroRNAs (miRNAs) have within the past decade emerged as key regulators of metabolic homoeostasis. Major tissues in intermediary metabolism important during development of the metabolic syndrome, such as β-cells, liver, skeletal and heart muscle as well as adipose tissue, have all been shown to be affected by miRNAs. In the pancreatic β-cell, a number of miRNAs are important in maintaining the balance between differentiation and proliferation (miR-200 and miR-29 families) and insulin exocytosis in the differentiated state is controlled by miR-7, miR-375 and miR-335. MiR-33a and MiR-33b play crucial roles in cholesterol and lipid metabolism, whereas miR-103 and miR-107 regulates hepatic insulin sensitivity. In muscle tissue, a defined number of miRNAs (miR-1, miR-133, miR-206) control myofibre type switch and induce myogenic differentiation programmes. Similarly, in adipose tissue, a defined number of miRNAs control white to brown adipocyte conversion or differentiation (miR-365, miR-133, miR-455). The discovery of circulating miRNAs in exosomes emphasizes their importance as both endocrine signalling molecules and potentially disease markers. Their dysregulation in metabolic diseases, such as obesity, type 2 diabetes and atherosclerosis stresses their potential as therapeutic targets. This review emphasizes current ideas and controversies within miRNA research in metabolism.",
"title": ""
},
{
"docid": "2795c78d2e81a064173f49887c9b1bb1",
"text": "This paper reports a continuously tunable lumped bandpass filter implemented in a third-order coupled resonator configuration. The filter is fabricated on a Borosilicate glass substrate using a surface micromachining technology that offers hightunable passive components. Continuous electrostatic tuning is achieved using three tunable capacitor banks, each consisting of one continuously tunable capacitor and three switched capacitors with pull-in voltage of less than 40 V. The center frequency of the filter is tuned from 1 GHz down to 600 MHz while maintaining a 3-dB bandwidth of 13%-14% and insertion loss of less than 4 dB. The maximum group delay is less than 10 ns across the entire tuning range. The temperature stability of the center frequency from -50°C to 50°C is better than 2%. The measured tuning speed of the filter is better than 80 s, and the is better than 20 dBm, which are in good agreement with simulations. The filter occupies a small size of less than 1.5 cm × 1.1 cm. The implemented filter shows the highest performance amongst the fully integrated microelectromechanical systems filters operating at sub-gigahertz range.",
"title": ""
},
{
"docid": "fd7c514e8681a5292bcbf2bbf6e75664",
"text": "In modern days, a large no of automobile accidents are caused due to driver fatigue. To address the problem we propose a vision-based real-time driver fatigue detection system based on eye-tracking, which is an active safety system. Eye tracking is one of the key technologies, for, future driver assistance systems since human eyes contain much information about the driver's condition such as gaze, attention level, and fatigue level. Face and eyes of the driver are first localized and then marked in every frame obtained from the video source. The eyes are tracked in real time using correlation function with an automatically generated online template. Additionally, driver’s distraction and conversations with passengers during driving can lead to serious results. A real-time vision-based model for monitoring driver’s unsafe states, including fatigue state is proposed. A time-based eye glance to mitigate driver distraction is proposed. Keywords— Driver fatigue, Eye-Tracking, Template matching,",
"title": ""
}
] |
scidocsrr
|