query_id: string (length 32)
query: string (length 6 to 5.38k)
positive_passages: list (1 to 17 items)
negative_passages: list (9 to 100 items)
subset: string (7 classes)
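Each record below pairs one query with its positive and negative passages plus a subset label. Assuming the records are stored as JSON Lines (an assumption based on the rendered rows here; the file name in the sketch is hypothetical), a minimal loading and inspection sketch could look like this:

```python
import json

def iter_records(path):
    """Yield one record per line from an assumed JSONL dump of this dataset."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# "scidocsrr.jsonl" is a hypothetical file name, used for illustration only.
for record in iter_records("scidocsrr.jsonl"):
    print(record["query_id"], record["subset"])
    print("query:", record["query"][:80])
    print(len(record["positive_passages"]), "positives,",
          len(record["negative_passages"]), "negatives")
    # Each passage is a dict with "docid", "text", and "title" keys.
    print("first positive docid:", record["positive_passages"][0]["docid"])
    break
```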
42651ce6c198d1123028a4947ef826fc
Perception, Guidance, and Navigation for Indoor Autonomous Drone Racing Using Deep Learning
[ { "docid": "a77eddf9436652d68093946fbe1d2ed0", "text": "The Pascal Visual Object Classes (VOC) challenge consists of two components: (i) a publicly available dataset of images together with ground truth annotation and standardised evaluation software; and (ii) an annual competition and workshop. There are five challenges: classification, detection, segmentation, action classification, and person layout. In this paper we provide a review of the challenge from 2008–2012. The paper is intended for two audiences: algorithm designers, researchers who want to see what the state of the art is, as measured by performance on the VOC datasets, along with the limitations and weak points of the current generation of algorithms; and, challenge designers, who want to see what we as organisers have learnt from the process and our recommendations for the organisation of future challenges. To analyse the performance of submitted algorithms on the VOC datasets we introduce a number of novel evaluation methods: a bootstrapping method for determining whether differences in the performance of two algorithms are significant or not; a normalised average precision so that performance can be compared across classes with different proportions of positive instances; a clustering method for visualising the performance across multiple algorithms so that the hard and easy images can be identified; and the use of a joint classifier over the submitted algorithms in order to measure their complementarity and combined performance. We also analyse the community’s progress through time using the methods of Hoiem et al. (Proceedings of European Conference on Computer Vision, 2012) to identify the types of occurring errors. We conclude the paper with an appraisal of the aspects of the challenge that worked well, and those that could be improved in future challenges.", "title": "" }, { "docid": "7f3fe1eadb59d58db8e5911c1de3465f", "text": "We propose a semi-direct monocular visual odometry algorithm that is precise, robust, and faster than current state-of-the-art methods. The semi-direct approach eliminates the need of costly feature extraction and robust matching techniques for motion estimation. Our algorithm operates directly on pixel intensities, which results in subpixel precision at high frame-rates. A probabilistic mapping method that explicitly models outlier measurements is used to estimate 3D points, which results in fewer outliers and more reliable points. Precise and high frame-rate motion estimation brings increased robustness in scenes of little, repetitive, and high-frequency texture. The algorithm is applied to micro-aerial-vehicle state-estimation in GPS-denied environments and runs at 55 frames per second on the onboard embedded computer and at more than 300 frames per second on a consumer laptop. We call our approach SVO (Semi-direct Visual Odometry) and release our implementation as open-source software.", "title": "" }, { "docid": "bc4ce5871c530bad6f87708328e08531", "text": "Detecting vehicles in aerial images provides important information for traffic management and urban planning. Detecting the cars in the images is challenging due to the relatively small size of the target objects and the complex background in man-made areas. It is particularly challenging if the goal is near-real-time detection, i.e., within few seconds, on large images without any additional information, e.g., road database and accurate target size. 
We present a method that can detect vehicles in a 21-MPixel original frame image, without accurate scale information, within seconds, running single-threaded on a laptop. In addition to the bounding boxes of the vehicles, we also extract orientation and type (car/truck) information. First, we apply a fast binary detector using integral channel features in a soft-cascade structure. In the next step, we apply a multiclass classifier to the output of the binary detector, which gives the orientation and type of the vehicles. We evaluate our method on a challenging data set of original aerial images over Munich and a data set captured from an unmanned aerial vehicle (UAV).", "title": "" } ]
[ { "docid": "3606b1c9bc5003c6119a5cc675ad63f4", "text": "Hypothyroidism is a clinical disorder commonly encountered by the primary care physician. Untreated hypothyroidism can contribute to hypertension, dyslipidemia, infertility, cognitive impairment, and neuromuscular dysfunction. Data derived from the National Health and Nutrition Examination Survey suggest that about one in 300 persons in the United States has hypothyroidism. The prevalence increases with age, and is higher in females than in males. Hypothyroidism may occur as a result of primary gland failure or insufficient thyroid gland stimulation by the hypothalamus or pituitary gland. Autoimmune thyroid disease is the most common etiology of hypothyroidism in the United States. Clinical symptoms of hypothyroidism are nonspecific and may be subtle, especially in older persons. The best laboratory assessment of thyroid function is a serum thyroid-stimulating hormone test. There is no evidence that screening asymptomatic adults improves outcomes. In the majority of patients, alleviation of symptoms can be accomplished through oral administration of synthetic levothyroxine, and most patients will require lifelong therapy. Combination triiodothyronine/thyroxine therapy has no advantages over thyroxine monotherapy and is not recommended. Among patients with subclinical hypothyroidism, those at greater risk of progressing to clinical disease, and who may be considered for therapy, include patients with thyroid-stimulating hormone levels greater than 10 mIU per L and those who have elevated thyroid peroxidase antibody titers.", "title": "" }, { "docid": "c071d5a7ff1dbfd775e9ffdee1b07662", "text": "OBJECTIVES\nComplete root coverage is the primary objective to be accomplished when treating gingival recessions in patients with aesthetic demands. Furthermore, in order to satisfy patient demands fully, root coverage should be accomplished by soft tissue, the thickness and colour of which should not be distinguishable from those of adjacent soft tissue. The aim of the present split-mouth study was to compare the treatment outcome of two surgical approaches of the bilaminar procedure in terms of (i) root coverage and (ii) aesthetic appearance of the surgically treated sites.\n\n\nMATERIAL AND METHODS\nFifteen young systemically and periodontally healthy subjects with two recession-type defects of similar depth affecting contralateral teeth in the aesthetic zone of the maxilla were enrolled in the study. All recessions fall into Miller class I or II. Randomization for test and control treatment was performed by coin toss immediately prior to surgery. All defects were treated with a bilaminar surgical technique: differences between test and control sites resided in the size, thickness and positioning of the connective tissue graft. The clinical re-evaluation was made 1 year after surgery.\n\n\nRESULTS\nThe two bilaminar techniques resulted in a high percentage of root coverage (97.3% in the test and 94.7% in the control group) and complete root coverage (gingival margin at the cemento-enamel junction (CEJ)) (86.7% in the test and 80% in the control teeth), with no statistically significant difference between them. Conversely, better aesthetic outcome and post-operative course were indicated by the patients for test compared to control sites.\n\n\nCONCLUSIONS\nThe proposed modification of the bilaminar technique improved the aesthetic outcome. 
The reduced size and minimal thickness of connective tissue graft, together with its positioning apical to the CEJ, facilitated graft coverage by means of the coronally advanced flap.", "title": "" }, { "docid": "4282aecaa7b517a852677194b8db216e", "text": "High-level synthesis (HLS) is increasingly popular for the design of high-performance and energy-efficient heterogeneous systems, shortening time-to-market and addressing today's system complexity. HLS allows designers to work at a higher-level of abstraction by using a software program to specify the hardware functionality. Additionally, HLS is particularly interesting for designing field-programmable gate array circuits, where hardware implementations can be easily refined and replaced in the target device. Recent years have seen much activity in the HLS research community, with a plethora of HLS tool offerings, from both industry and academia. All these tools may have different input languages, perform different internal optimizations, and produce results of different quality, even for the very same input description. Hence, it is challenging to compare their performance and understand which is the best for the hardware to be implemented. We present a comprehensive analysis of recent HLS tools, as well as overview the areas of active interest in the HLS research community. We also present a first-published methodology to evaluate different HLS tools. We use our methodology to compare one commercial and three academic tools on a common set of C benchmarks, aiming at performing an in-depth evaluation in terms of performance and the use of resources.", "title": "" }, { "docid": "853375477bf531499067eedfe64e6e2d", "text": "Each July since 2003, the author has directed summer camps that introduce middle school boys and girls to the basic ideas of computer programming. Prior to 2009, the author used Alice 2.0 to introduce object-based computing. In 2009, the author decided to offer these camps using Scratch, primarily to engage repeat campers but also for variety. This paper provides a detailed overview of this outreach, and documents its success at providing middle school girls with a positive, engaging computing experience. It also discusses the merits of Alice and Scratch for such outreach efforts; and the use of these visually oriented programs by students with disabilities, including blind students.", "title": "" }, { "docid": "2f20f587bb46f7133900fd8c22cea3ab", "text": "Recent years have witnessed the significant advance in fine-grained visual categorization, which targets to classify the objects belonging to the same species. To capture enough subtle visual differences and build discriminative visual description, most of the existing methods heavily rely on the artificial part annotations, which are expensive to collect in real applications. Motivated to conquer this issue, this paper proposes a multi-level coarse-to-fine object description. This novel description only requires the original image as input, but could automatically generate visual descriptions discriminative enough for fine-grained visual categorization. 
This description is extracted from five sources representing coarse-to-fine visual clues: 1) original image is used as the source of global visual clue; 2) object bounding boxes are generated using convolutional neural network (CNN); 3) with the generated bounding box, foreground is segmented using the proposed k nearest neighbour-based co-segmentation algorithm; and 4) two types of part segmentations are generated by dividing the foreground with an unsupervised part learning strategy. The final description is generated by feeding these sources into CNN models and concatenating their outputs. Experiments on two public benchmark data sets show the impressive performance of this coarse-to-fine description, i.e., classification accuracy achieves 82.5% on CUB-200-2011, and 86.9% on fine-grained visual categorization-Aircraft, respectively, which outperform many recent works.", "title": "" }, { "docid": "a7b6a491d85ae94285808a21dbc65ce9", "text": "In imbalanced learning, most standard classification algorithms usually fail to properly represent data distribution and provide unfavorable classification performance. More specifically, the decision rule of minority class is usually weaker than majority class, leading to many misclassification of expensive minority class data. Motivated by our previous work ADASYN [1], this paper presents a novel kernel based adaptive synthetic over-sampling approach, named KernelADASYN, for imbalanced data classification problems. The idea is to construct an adaptive over-sampling distribution to generate synthetic minority class data. The adaptive over-sampling distribution is first estimated with kernel density estimation methods and is further weighted by the difficulty level for different minority class data. The classification performance of our proposed adaptive over-sampling approach is evaluated on several real-life benchmarks, specifically on medical and healthcare applications. The experimental results show the competitive classification performance for many real-life imbalanced data classification problems.", "title": "" }, { "docid": "837c34e3999714c0aa0dcf901aa278cf", "text": "A novel high temperature superconducting interdigital bandpass filter is proposed by using coplanar waveguide quarter-wavelength resonators. The CPW resonators are arranged in parallel, and consequently the filter becomes very compact. The filter is a 5-pole Chebyshev BPF with a midband frequency of 5.0GHz and an equal-ripple fractional bandwidth of 3.2%. It is fabricated using a YBCO film deposited on an MgO substrate. The measured filtering characteristics agree well with EM simulations and show a low insertion loss in spite of the small size of the filter.", "title": "" }, { "docid": "3e88008841741d3d320a17490e5d9624", "text": "In this project, the task of architecture classification for monuments and buildings from the Indian subcontinent was explored. Five major classes of architecture were taken and various supervised learning methods, both probabilistic and nonprobabilistic, were experimented with in order to classify the monuments into one of the five categories. The categories were: ’Ancient’, ’British’, ’IndoIslamic’, ’Maratha’ and ’Sikh’. Local ORB feature descriptors were used to represent each image and clustering was applied to quantize the obtained features to a smaller size. Other than the typical method of using features to do an image-wise classification, another method where descriptor wise classification is done was also explored. 
In this method, image label was provided as the mode of the labels of the descriptors of that image. It was found that among the different classifiers, k nearest neighbors for the case of descriptor-wise classification performed the best.", "title": "" }, { "docid": "c5395e3677eaba87cc568e260e26fe2c", "text": "Inverse Reinforcement Learning (IRL) deals with the problem of recovering the reward function optimized by an expert given a set of demonstrations of the expert’s policy. Most IRL algorithms need to repeatedly compute the optimal policy for different reward functions. This paper proposes a new IRL approach that allows to recover the reward function without the need of solving any “direct” RL problem. The idea is to find the reward function that minimizes the gradient of a parameterized representation of the expert’s policy. In particular, when the reward function can be represented as a linear combination of some basis functions, we will show that the aforementioned optimization problem can be efficiently solved. We present an empirical evaluation of the proposed approach on a multidimensional version of the Linear-Quadratic Regulator (LQR) both in the case where the parameters of the expert’s policy are known and in the (more realistic) case where the parameters of the expert’s policy need to be inferred from the expert’s demonstrations. Finally, the algorithm is compared against the state-of-the-art on the mountain car domain, where the expert’s policy is unknown.", "title": "" }, { "docid": "d03c86de8e62ae7396ab70c0fee2384b", "text": "Browsers and their users can be tracked even in the absence of a persistent IP address or cookie. Unique and hence identifying pieces of information, making up what is known as a fingerprint, can be collected from browsers by a visited website, e.g. using JavaScript. However, browsers vary in precisely what information they make available, and hence their fingerprintability may also vary. In this paper, we report on the results of experiments examining the fingerprintable attributes made available by a range of modern browsers. We tested the most widely used browsers for both desktop and mobile platforms. The results reveal significant differences between browsers in terms of their fingerprinting potential, meaning that the choice of browser has significant privacy implications.", "title": "" }, { "docid": "b50918f904d08f678cb153b16b052344", "text": "According to Earnshaw's theorem, the ratio between axial and radial stiffness is always -2 for pure permanent magnetic configurations with rotational symmetry. Using highly permeable material increases the force and stiffness of permanent magnetic bearings. However, the stiffness in the unstable direction increases more than the stiffness in the stable direction. This paper presents an analytical approach to calculating the axial force and the axial and radial stiffnesses of attractive passive magnetic bearings (PMBs) with back iron. The investigations are based on the method of image charges and show in which magnet geometries lead to reasonable axial to radial stiffness ratios. Furthermore, the magnet dimensions achieving maximum force and stiffness per magnet volume are outlined. 
Finally, the calculation method was applied to the PMB of a magnetically levitated fan, and the analytical results were compared with a finite element analysis.", "title": "" }, { "docid": "9b123e0cf32118094b803323d1073b99", "text": "The lack of sufficient labeled Web pages in many languages, especially for those uncommonly used ones, presents a great challenge to traditional supervised classification methods to achieve satisfactory Web page classification performance. To address this, we propose a novel Nonnegative Matrix Tri-factorization (NMTF) based Dual Knowledge Transfer (DKT) approach for cross-language Web page classification, which is based on the following two important observations. First, we observe that Web pages for a same topic from different languages usually share some common semantic patterns, though in different representation forms. Second, we also observe that the associations between word clusters and Web page classes are a more reliable carrier than raw words to transfer knowledge across languages. With these recognitions, we attempt to transfer knowledge from the auxiliary language, in which abundant labeled Web pages are available, to target languages, in which we want to classify Web pages, through two different paths: word cluster approximations and the associations between word clusters and Web page classes. Due to the reinforcement between these two different knowledge transfer paths, our approach can achieve better classification accuracy. We evaluate the proposed approach in extensive experiments using a real world cross-language Web page data set. Promising results demonstrate the effectiveness of our approach that is consistent with our theoretical analyses.", "title": "" }, { "docid": "9d4b97f66055979079940b267257758f", "text": "A model that predicts the static friction for elastic-plastic contact of rough surfaces is presented. The model incorporates the results of accurate finite element analyses of elastic-plastic contact, adhesion and sliding inception of a single asperity in a statistical representation of surface roughness. The model shows a strong effect of the external force and nominal contact area on the static friction coefficient, in contrast to the classical laws of friction. It also shows that the main dimensionless parameters affecting the static friction coefficient are the plasticity index and adhesion parameter. The effect of adhesion on the static friction is discussed and found to be negligible at plasticity index values larger than 2. It is shown that the classical laws of friction are a limiting case of the present more general solution and are adequate only for high plasticity index and negligible adhesion. Some potential limitations of the present model are also discussed, pointing to possible improvements. A comparison of the present results with those obtained from an approximate CEB friction model shows substantial differences, with the latter severely underestimating the static friction coefficient. DOI: 10.1115/1.1609488", "title": "" }, { "docid": "2a827ddb30be8cdc3ecaf09da2e898de", "text": "There is an increasing interest in accelerating neural networks for real-time applications. We study the student-teacher strategy, in which a small and fast student network is trained with the auxiliary information learned from a large and accurate teacher network. We propose to use conditional adversarial networks to learn the loss function to transfer knowledge from teacher to student. The proposed method is particularly effective for relatively small student networks.
Moreover, experimental results show the effect of network size when the modern networks are used as student. We empirically study the trade-off between inference time and classification accuracy, and provide suggestions on choosing a proper student network.", "title": "" }, { "docid": "98e7313ee26e70447b9366ff14b74605", "text": "We explore blindfold (question-only) baselines for Embodied Question Answering. The EmbodiedQA task requires an agent to answer a question by intelligently navigating in a simulated environment, gathering necessary visual information only through first-person vision before finally answering. Consequently, a blindfold baseline which ignores the environment and visual information is a degenerate solution, yet we show through our experiments on the EQAv1 dataset that a simple question-only baseline achieves state-of-the-art results on the EmbodiedQA task in all cases except when the agent is spawned extremely close to the object.", "title": "" }, { "docid": "f829097794802117bf37ea8ce891611a", "text": "Manually crafted combinatorial features have been the \"secret sauce\" behind many successful models. For web-scale applications, however, the variety and volume of features make these manually crafted features expensive to create, maintain, and deploy. This paper proposes the Deep Crossing model which is a deep neural network that automatically combines features to produce superior models. The input of Deep Crossing is a set of individual features that can be either dense or sparse. The important crossing features are discovered implicitly by the networks, which are comprised of an embedding and stacking layer, as well as a cascade of Residual Units. Deep Crossing is implemented with a modeling tool called the Computational Network Tool Kit (CNTK), powered by a multi-GPU platform. It was able to build, from scratch, two web-scale models for a major paid search engine, and achieve superior results with only a sub-set of the features used in the production models. This demonstrates the potential of using Deep Crossing as a general modeling paradigm to improve existing products, as well as to speed up the development of new models with a fraction of the investment in feature engineering and acquisition of deep domain knowledge.", "title": "" }, { "docid": "d642e6cc5de4dc194c6b2d7d0cf17d18", "text": "The purpose of regression testing is to ensure that bug xes and new functionality introduced in a new version of a software do not adversely a ect the correct functionality inherited from the previous version. This paper explores e cient methods of selecting small subsets of regression test sets that may be used to es-", "title": "" }, { "docid": "f72d72975b1c16ee3d0c0ec1826301e3", "text": "Motion layer estimation has recently emerged as a promising object tracking method. In this paper, we extend previous research on layer-based tracker by introducing the concept of background occluding layers and explicitly inferring depth ordering of foreground layers. The background occluding layers lie in front of, behind, and in between foreground layers. Each pixel in the background regions belongs to one of these layers and occludes all the foreground layers behind it. Together with the foreground ordering, the complete information necessary for reliably tracking objects through occlusion is included in our representation. An MAP estimation framework is developed to simultaneously update the motion layer parameters, the ordering parameters, and the background occluding layers. 
Experimental results show that under various conditions with occlusion, including situations with moving objects undergoing complex motions or having complex interactions, our tracking algorithm is able to handle many difficult tracking tasks reliably.", "title": "" }, { "docid": "e0160911f70fa836f64c08f721f6409e", "text": "Today’s openly available knowledge bases, such as DBpedia, Yago, Wikidata or Freebase, capture billions of facts about the world’s entities. However, even the largest among these (i) are still limited in up-to-date coverage of what happens in the real world, and (ii) miss out on many relevant predicates that precisely capture the wide variety of relationships among entities. To overcome both of these limitations, we propose a novel approach to build on-the-fly knowledge bases in a query-driven manner. Our system, called QKBfly, supports analysts and journalists as well as question answering on emerging topics, by dynamically acquiring relevant facts as timely and comprehensively as possible. QKBfly is based on a semantic-graph representation of sentences, by which we perform three key IE tasks, namely named-entity disambiguation, co-reference resolution and relation extraction, in a light-weight and integrated manner. In contrast to Open IE, our output is canonicalized. In contrast to traditional IE, we capture more predicates, including ternary and higher-arity ones. Our experiments demonstrate that QKBfly can build high-quality, on-the-fly knowledge bases that can readily be deployed, e.g., for the task of ad-hoc question answering. PVLDB Reference Format: D. B. Nguyen, A. Abujabal, N. K. Tran, M. Theobald, and G. Weikum. Query-Driven On-The-Fly Knowledge Base Construction. PVLDB, 11 (1): 66-7 , 2017. DOI: 10.14778/3136610.3136616", "title": "" }, { "docid": "d9c9dde3f5e3bf280f09d6783a573357", "text": "We present a detection method that is able to detect a learned target and is valid for both static and moving cameras. As an application, we detect pedestrians, but could be anything if there is a large set of images of it. The data set is fed into a number of deep convolutional networks, and then, two of these models are set in cascade in order to filter the cutouts of a multi-resolution window that scans the frames in a video sequence. We demonstrate that the excellent performance of deep convolutional networks is very difficult to match when dealing with real problems, and yet we obtain competitive results.", "title": "" } ]
scidocsrr
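With a complete record like the one above, a natural use in reranking experiments is to expand it into (query, positive, negative) training triples. The sketch below is illustrative only; it uses a toy record shaped like the rows in this dump rather than any specific entry from the dataset:

```python
from itertools import product

def make_triples(record, max_negatives=5):
    """Expand one record into (query, positive_text, negative_text) triples (illustrative)."""
    query = record["query"]
    positives = [p["text"] for p in record["positive_passages"]]
    negatives = [n["text"] for n in record["negative_passages"][:max_negatives]]
    return [(query, pos, neg) for pos, neg in product(positives, negatives)]

# Toy record with the same field layout as the rows shown here.
toy_record = {
    "query": "example query",
    "positive_passages": [{"docid": "p1", "text": "relevant abstract", "title": ""}],
    "negative_passages": [{"docid": "n1", "text": "irrelevant abstract", "title": ""}],
}
for query, pos, neg in make_triples(toy_record):
    print(query, "|", pos, "|", neg)
```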
34da3813c06a9df6767de8e03e647df0
Measuring online social bubbles
[ { "docid": "cca664cf201c79508a266a34646dba01", "text": "Scholars have argued that online social networks and personalized web search increase ideological segregation. We investigate the impact of these potentially polarizing channels on news consumption by examining web browsing histories for 50,000 U.S.-located users who regularly read online news. We find that individuals indeed exhibit substantially higher segregation when reading articles shared on social networks or returned by search engines, a pattern driven by opinion pieces. However, these polarizing articles from social media and web search constitute only 2% of news consumption. Consequently, while recent technological changes do increase ideological segregation, the magnitude of the effect is limited. JEL: D83, L86, L82", "title": "" } ]
[ { "docid": "5dde1310a2fbe12fcef11e2d120eafdb", "text": "A flotation pre-treatment study for the separation of enargite (Cu3AsS4) from chalcopyrite (CuFeS2) ores of different origins was investigated in this work. The copper ore bearing enargite mineral contained 5.87mass% As and 16.50mass% Cu while the chalcopyrite bearing ore contained 0.32mass% As and 21.63mass% Cu. The two ore samples were mixed at 7 : 3 (enargite : chalcopyrite) by weight ratio to prepare a mixed ore sample with As content at 3.16 and 18.25mass% Cu for the flotation study. Effect of particle size, slurry pH, flotation time, collector type, collector addition or dosage and depressants were investigated to evaluate efficiency of enargite separation from chalcopyrite and recovery of both minerals as separate concentrates. For enargite single ore flotation, the 38­75μm size fraction showed that over 98% of enargite was selectively recovered within 5min at slurry pH of 4 and As content in the final tailings was reduced to 0.22mass%. In mix ore (enargite + chalcopyrite) flotation, 97% of enargite was first removed at pH 4 followed by chalcopyrite flotation at pH 8, and over 95% recovery was achieved in 15min flotation time. The As content in the final tailings was reduced to 0.1mass%. [doi:10.2320/matertrans.M2011354]", "title": "" }, { "docid": "377aec61877995ad2b677160fa43fefb", "text": "One of the major issues involved with communication is acoustic echo, which is actually a delayed version of sound reflected back to the source of sound hampering communication. Cancellation of these involve the use of acoustic echo cancellers involving adaptive filters governed by adaptive algorithms. This paper presents a review of some of the algorithms of acoustic echo cancellation covering their merits and demerits. Various algorithms like LMS, NLMS, FLMS, LLMS, RLS, AFA, LMF have been discussed. Keywords— Adaptive Filter, Acoustic Echo, LMS, NLMS, FX-LMS, AAF, LLMS, RLS.", "title": "" }, { "docid": "b0a1401136b75cfae05e7a8b31a0331c", "text": "Voice interfaces are becoming accepted widely as input methods for a diverse set of devices. This development is driven by rapid improvements in automatic speech recognition (ASR), which now performs on par with human listening in many tasks. These improvements base on an ongoing evolution of deep neural networks (DNNs) as the computational core of ASR. However, recent research results show that DNNs are vulnerable to adversarial perturbations, which allow attackers to force the transcription into a malicious output. In this paper, we introduce a new type of adversarial examples based on psychoacoustic hiding. Our attack exploits the characteristics of DNN-based ASR systems, where we extend the original analysis procedure by an additional backpropagation step. We use this backpropagation to learn the degrees of freedom for the adversarial perturbation of the input signal, i.e., we apply a psychoacoustic model and manipulate the acoustic signal below the thresholds of human perception. To further minimize the perceptibility of the perturbations, we use forced alignment to find the best fitting temporal alignment between the original audio sample and the malicious target transcription. These extensions allow us to embed an arbitrary audio input with a malicious voice command that is then transcribed by the ASR system, with the audio signal remaining barely distinguishable from the original signal. 
In an experimental evaluation, we attack the state-of-the-art speech recognition system Kaldi and determine the best performing parameter and analysis setup for different types of input. Our results show that we are successful in up to 98% of cases with a computational effort of fewer than two minutes for a ten-second audio file. Based on user studies, we found that none of our target transcriptions were audible to human listeners, who still understand the original speech content with unchanged accuracy.", "title": "" }, { "docid": "07cb8967d6d347cbc8dd0645e5c1f4b0", "text": "Obtaining reliable data describing local poverty metrics at a granularity that is informative to policy-makers requires expensive and logistically difficult surveys, particularly in the developing world. Not surprisingly, the poverty stricken regions are also the ones which have a high probability of being a war zone, have poor infrastructure and sometimes have governments that do not cooperate with internationally funded development efforts. We train a CNN on free and publicly available daytime satellite images of the African continent from Landsat 7 to build a model for predicting local economic livelihoods. Only 5% of the satellite images can be associated with labels (which are obtained from DHS Surveys) and thus a semi-supervised approach using a GAN [33], albeit with a more stable-totrain flavor of GANs called the Wasserstein GAN regularized with gradient penalty [15] is used. The method of multitask learning is employed to regularize the network and also create an end-to-end model for the prediction of multiple poverty metrics.", "title": "" }, { "docid": "4f61e9cd234a5f6e6b9886cf4ab1cc22", "text": "We introduce a data-driven hair capture framework based on example strands generated through hair simulation. Our method can robustly reconstruct faithful 3D hair models from unprocessed input point clouds with large amounts of outliers. Current state-of-the-art techniques use geometrically-inspired heuristics to derive global hair strand structures, which can yield implausible hair strands for hairstyles involving large occlusions, multiple layers, or wisps of varying lengths. We address this problem using a voting-based fitting algorithm to discover structurally plausible configurations among the locally grown hair segments from a database of simulated examples. To generate these examples, we exhaustively sample the simulation configurations within the feasible parameter space constrained by the current input hairstyle. The number of necessary simulations can be further reduced by leveraging symmetry and constrained initial conditions. The final hairstyle can then be structurally represented by a limited number of examples. To handle constrained hairstyles such as a ponytail of which realistic simulations are more difficult, we allow the user to sketch a few strokes to generate strand examples through an intuitive interface. Our approach focuses on robustness and generality. Since our method is structurally plausible by construction, we ensure an improved control during hair digitization and avoid implausible hair synthesis for a wide range of hairstyles.", "title": "" }, { "docid": "2caf31154811099e68644c3e3e7e1792", "text": "In this paper, we study the effective semi-supervised hashing method under the framework of regularized learning-based hashing. A nonlinear hash function is introduced to capture the underlying relationship among data points. 
Thus, the dimensionality of the matrix for computation is not only independent from the dimensionality of the original data space but also much smaller than the one using linear hash function. To effectively deal with the error accumulated during converting the real-value embeddings into the binary code after relaxation, we propose a semi-supervised nonlinear hashing algorithm using bootstrap sequential projection learning which effectively corrects the errors by taking into account of all the previous learned bits holistically without incurring the extra computational overhead. Experimental results on the six benchmark data sets demonstrate that the presented method outperforms the state-of-the-art hashing algorithms at a large margin.", "title": "" }, { "docid": "1dad20d7f19e20945e9ad28aa5a70d93", "text": "Article history: Received 3 January 2016 Received in revised form 9 June 2017 Accepted 26 September 2017 Available online 16 October 2017", "title": "" }, { "docid": "efaec92cf49a0bf2a48f9c50742d0199", "text": "This paper presents an integrated inverted stripline-like tunable transmission line structure where the propagation velocity can be modified as the characteristic impedance remains constant. As one application of this structure, a mm-wave phase shifter for massive hybrid MIMO applications is implemented in a 45 nm CMOS SOI process. Measurement results at 45 GHz of this phase shifter demonstrate a 79° phase shift tuning range, worst-case insertion loss of 3.3 dB, and effective area of 0.072 mm2. Compared to an on-chip reference phase shifter implemented based on a previously-reported tunable transmission line structure, this work achieves 35% less area occupied and 1.0 dB less insertion loss, while maintaining approximately the same phase shift tuning range.", "title": "" }, { "docid": "d3c4c641f46800c15c0995ce9e1943f7", "text": "We present a computationally e cient architecture for image super-resolution that achieves state-of-the-art results on images with large spatial extend. Apart from utilizing Convolutional Neural Networks, our approach leverages recent advances in fast approximate inference for sparse coding. We empirically show that upsampling methods work much better on latent representations than in the original spatial domain. Our experiments indicate that the proposed architecture can serve as a basis for additional future improvements in image superresolution.", "title": "" }, { "docid": "aa4e3c2db7f1a1ac749d5d34014e26a0", "text": "In this paper, a novel text clustering technique is proposed to summarize text documents. The clustering method, so called ‘Ensemble Clustering Method’, combines both genetic algorithms (GA) and particle swarm optimization (PSO) efficiently and automatically to get the best clustering results. The summarization with this clustering method is to effectively avoid the redundancy in the summarized document and to show the good summarizing results, extracting the most significant and non-redundant sentence from clustering sentences of a document. We tested this technique with various text documents in the open benchmark datasets, DUC01 and DUC02. To evaluate the performances, we used F-measure and ROUGE. The experimental results show that the performance capability of our method is about 11% to 24% better than other summarization algorithms. 
Key-Words: Text Summarization; Extractive Summarization; Ensemble Clustering; Genetic Algorithms; Particle Swarm Optimization", "title": "" }, { "docid": "408ab4c5138ee61f2602dea7907846d1", "text": "A new mirror mounting technique applicable to the primary mirror in a space telescope is presented. This mounting technique replaces conventional bipod flexures with flexures having mechanical shims so that adjustments can be made to counter the effects of gravitational distortion of the mirror surface while being tested in the horizontal position. Astigmatic aberration due to the gravitational changes is effectively reduced by adjusting the shim thickness, and the relation between the astigmatism and the shim thickness is investigated. We tested the mirror interferometrically at the center of curvature using a null lens. Then we repeated the test after rotating the mirror about its optical axis by 180° in the horizontal setup, and searched for the minimum system error. With the proposed flexure mount, the gravitational stress at the adhesive coupling between the mirror and the mount is reduced by half that of a conventional bipod flexure for better mechanical safety under launch loads. Analytical results using finite element methods are compared with experimental results from the optical interferometer. Vibration tests verified the mechanical safety and optical stability, and qualified their use in space applications.", "title": "" }, { "docid": "4736ae77defc37f96b235b3c0c2e56ff", "text": "This review highlights progress over the past decade in research on the effects of mass trauma experiences on children and youth, focusing on natural disasters, war, and terrorism. Conceptual advances are reviewed in terms of prevailing risk and resilience frameworks that guide basic and translational research. Recent evidence on common components of these models is evaluated, including dose effects, mediators and moderators, and the individual or contextual differences that predict risk or resilience. New research horizons with profound implications for health and well-being are discussed, particularly in relation to plausible models for biological embedding of extreme stress. Strong consistencies are noted in this literature, suggesting guidelines for disaster preparedness and response. At the same time, there is a notable shortage of evidence on effective interventions for child and youth victims. Practical and theory-informative research on strategies to protect children and youth victims and promote their resilience is a global priority.", "title": "" }, { "docid": "09e4e90323d4105804cce27293a214f3", "text": "BACKGROUND\nPsoriasis is a common skin disease that can also involve the nails. All parts of the nail and surrounding structures can become affected. The incidence of nail involvement increases with duration of psoriasis. Although it is difficult to treat psoriatic nails, the condition may respond to therapy.\n\n\nOBJECTIVES\nTo assess evidence for the efficacy and safety of the treatments for nail psoriasis.\n\n\nSEARCH METHODS\nWe searched the following databases up to March 2012: the Cochrane Skin Group Specialised Register, CENTRAL in The Cochrane Library, MEDLINE (from 1946), EMBASE (from 1974), and LILACS (from 1982). 
We also searched trials databases and checked the reference lists of retrieved studies for further references to relevant randomised controlled trials (RCTs).\n\n\nSELECTION CRITERIA\nAll RCTs of any design concerning interventions for nail psoriasis.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo authors independently assessed trial risk of bias and extracted the data. We collected adverse effects from the included studies.  \n\n\nMAIN RESULTS\nWe included 18 studies involving 1266 participants. We were not able to pool due to the heterogeneity of many of the studies.Our primary outcomes were 'Global improvement of nail psoriasis as rated by a clinician', 'Improvement of nail psoriasis scores (NAS, NAPSI)', 'Improvement of nail psoriasis in the participant's opinion'. Our secondary outcomes were 'Adverse effects (and serious adverse effects)'; 'Effects on quality of life'; and 'Improvement in nail features, pain score, nail thickness, thickness of subungual hyperkeratosis, number of affected nails, and nail growth'. We assessed short-term (3 to 6 months), medium-term (6 to 12 months), and long-term (> 12 months) treatments separately if possible.Two systemic biologic studies and three radiotherapy studies reported significant results for our first two primary outcomes. Infliximab 5 mg/kg showed 57.2% nail score improvement versus -4.1% for placebo (P < 0.001); golimumab 50 mg and 100 mg showed 33% and 54% improvement, respectively, versus 0% for placebo (P < 0.001), both after medium-term treatment. Infliximab and golimumab also showed significant results after short-term treatment. From the 3 radiotherapy studies, only the superficial radiotherapy (SRT) study showed 20% versus 0% nail score improvement (P = 0.03) after short-term treatment.Studies with ciclosporin, methotrexate, and ustekinumab were not significantly better than their respective comparators: etretinate, ciclosporin, and placebo. Nor were studies with topical interventions (5-fluorouracil 1% in Belanyx® lotion, tazarotene 0.1% cream, calcipotriol 50 ug/g, calcipotriol 0.005%) better than their respective comparators: Belanyx® lotion, clobetasol propionate, betamethasone dipropionate with salicylic acid, or betamethasone dipropionate.Of our secondary outcomes, not all included studies reported adverse events; those that did only reported mild adverse effects, and there were more in studies with systemic interventions. Only one study reported the effect on quality of life, and two studies reported nail improvement only per feature.\n\n\nAUTHORS' CONCLUSIONS\nInfliximab, golimumab, SRT, grenz rays, and electron beam caused significant nail improvement compared to the comparative treatment. Although the quality of trials was generally poor, this review may have some implications for clinical practice.Although powerful systemic treatments have been shown to be beneficial, they may have serious adverse effects. So they are not a realistic option for people troubled with nail psoriasis, unless the patient is prescribed these systemic treatments because of cutaneous psoriasis or psoriatic arthritis or the nail psoriasis is severe, refractory to other treatments, or has a major impact on the person's quality of life. Because of their design and timescale, RCTs generally do not pick up serious side-effects. This review reported only mild adverse effects, recorded mainly for systemic treatments. Radiotherapy for psoriasis is not used in common practice. 
The evidence for the use of topical treatments is inconclusive and of poor quality; however, this does not imply that they do not work.Future trials need to be rigorous in design, with adequate reporting. Trials should correctly describe the participants' characteristics and diagnostic features, use standard validated nail scores and participant-reported outcomes, be long enough to report efficacy and safety, and include details of effects on nail features.", "title": "" }, { "docid": "8411019e166f3b193905099721c29945", "text": "In this article we recast the Dahl, LuGre, and Maxwell-slip models as extended, generalized, or semilinear Duhem models. We classified each model as either rate independent or rate dependent. Smoothness properties of the three friction models were also considered. We then studied the hysteresis induced by friction in a single-degree-of-freedom system. The resulting system was modeled as a linear system with Duhem feedback. For each friction model, we computed the corresponding hysteresis map. Next, we developed a DC servo motor testbed and performed motion experiments. We then modeled the testbed dynamics and simulated the system using all three friction models. By comparing the simulated and experimental results, it was found that the LuGre model provides the best model of the gearbox friction characteristics. A manual tuning approach was used to determine parameters that model the friction in the DC motor.", "title": "" }, { "docid": "da237e14a3a9f6552fc520812073ee6c", "text": "Shock filters are based in the idea to apply locally either a dilation or an erosion process, depending on whether the pixel belongs to the influence zone of a maximum or a minimum. They create a sharp shock between two influence zones and produce piecewise constant segmentations. In this paper we design specific shock filters for the enhancement of coherent flow-like structures. They are based on the idea to combine shock filtering with the robust orientation estimation by means of the structure tensor. Experiments with greyscale and colour images show that these novel filters may outperform previous shock filters as well as coherence-enhancing diffusion filters.", "title": "" }, { "docid": "9f4ed0a381bec3c334ec15dec27a8a24", "text": "Software code review, i.e., the practice of having other team members critique changes to a software system, is a well-established best practice in both open source and proprietary software domains. Prior work has shown that formal code inspections tend to improve the quality of delivered software. However, the formal code inspection process mandates strict review criteria (e.g., in-person meetings and reviewer checklists) to ensure a base level of review quality, while the modern, lightweight code reviewing process does not. Although recent work explores the modern code review process, little is known about the relationship between modern code review practices and long-term software quality. Hence, in this paper, we study the relationship between post-release defects (a popular proxy for long-term software quality) and: (1) code review coverage, i.e., the proportion of changes that have been code reviewed, (2) code review participation, i.e., the degree of reviewer involvement in the code review process, and (3) code reviewer expertise, i.e., the level of domain-specific expertise of the code reviewers. 
Through a case study of the Qt, VTK, and ITK projects, we find that code review coverage, participation, and expertise share a significant link with software quality. Hence, our results empirically confirm the intuition that poorly-reviewed code has a negative impact on software quality in large systems using modern reviewing tools.", "title": "" }, { "docid": "de6e82fa1fb64da0c57650c80cf56a04", "text": "Gait recognition has been considered as a new promising approach for biometric-based authentication. Gait signals are commonly obtained by collecting motion data from inertial sensors (accelerometers, gyroscopes) integrated in mobile and wearable devices. Motion data is subsequently transformed to a feature space for recognition procedure. One fashionable, effective way to extract features automatically is using conventional Convolutional Neural Networks (CNN) as feature extractors. In this paper, we propose DeepSense-Inception (DSI), a new method inspired from DeepSense, to recognize users from their gait features using Inception-like modules for better feature extraction than conventional CNN. Experiments for user identification on UCI Human Activity Recognition dataset demonstrate that our method not only achieves an accuracy of 99.9%, higher than that of DeepSense (99.7%), but also uses only 149K parameters, less than one third of the parameters in DeepSense (529K parameters). Thus, our method can be implemented more efficiently in limited resource systems.", "title": "" }, { "docid": "e627c7ee8fd9a8a3ea8c7dc0a4fb91ce", "text": "The goal of a fall detection system is to automatically detect cases where a human falls and may have been injured. A natural application of such a system is in home monitoring of patients and elderly persons, so as to automatically alert relatives and/or authorities in case of an injury caused by a fall. This paper describes experiments with three computer vision methods for fall detection in a simulated home environment. The first method makes a decision based on a single frame, simply based on the vertical position of the image centroid of the person. The second method makes a threshold-based decision based on the last few frames, by considering the number of frames during which the person has been falling, the magnitude (in pixels) of the fall, and the maximum velocity of the fall. The third method is a statistical method that makes a decision based on the same features as the previous two methods, but using probabilistic models as opposed to thresholds for making the decision. Preliminary experimental results are promising, with the statistical method attaining relatively high accuracy in detecting falls while at the same time producing a relatively small number of false positives.", "title": "" }, { "docid": "de4e2e131a0ceaa47934f4e9209b1cdd", "text": "With the popularity of mobile devices, spatial crowdsourcing is rising as a new framework that enables human workers to solve tasks in the physical world. With spatial crowdsourcing, the goal is to crowdsource a set of spatiotemporal tasks (i.e., tasks related to time and location) to a set of workers, which requires the workers to physically travel to those locations in order to perform the tasks. In this article, we focus on one class of spatial crowdsourcing, in which the workers send their locations to the server and thereafter the server assigns to every worker tasks in proximity to the worker’s location with the aim of maximizing the overall number of assigned tasks. 
We formally define this maximum task assignment (MTA) problem in spatial crowdsourcing, and identify its challenges. We propose alternative solutions to address these challenges by exploiting the spatial properties of the problem space, including the spatial distribution and the travel cost of the workers. MTA is based on the assumptions that all tasks are of the same type and all workers are equally qualified in performing the tasks. Meanwhile, different types of tasks may require workers with various skill sets or expertise. Subsequently, we extend MTA by taking the expertise of the workers into consideration. We refer to this problem as the maximum score assignment (MSA) problem and show its practicality and generality. Extensive experiments with various synthetic and two real-world datasets show the applicability of our proposed framework.", "title": "" }, { "docid": "b9ef363fc7563dd14b3a4fd781d76d91", "text": "Deep learning (DL)-based Reynolds stress with its capability to leverage values of large data can be used to close Reynolds-averaged Navier-Stoke (RANS) equations. Type I and Type II machine learning (ML) frameworks are studied to investigate data and flow feature requirements while training DL-based Reynolds stress. The paper presents a method, flow features coverage mapping (FFCM), to quantify the physics coverage of DL-based closures that can be used to examine the sufficiency of training data points as well as input flow features for data-driven turbulence models. Three case studies are formulated to demonstrate the properties of Type I and Type II ML. The first case indicates that errors of RANS equations with DL-based Reynolds stress by Type I ML are accumulated along with the simulation time when training data do not sufficiently cover transient details. The second case uses Type I ML to show that DL can figure out time history of flow transients from data sampled at various times. The case study also shows that the necessary and sufficient flow features of DL-based closures are first-order spatial derivatives of velocity fields. The last case demonstrates the limitation of Type II ML for unsteady flow simulation. Type II ML requires initial conditions to be sufficiently close to reference data. Then reference data can be used to improve RANS simulation.", "title": "" } ]
scidocsrr
6270f78fcd37333aa41d22c4afadfde1
Online Reconstruction of Structural Information from Datacenter Logs
[ { "docid": "fa8e732d89f22704167be5f51f75ecb6", "text": "By studying trouble tickets from small enterprise networks, we conclude that their operators need detailed fault diagnosis. That is, the diagnostic system should be able to diagnose not only generic faults (e.g., performance-related) but also application specific faults (e.g., error codes). It should also identify culprits at a fine granularity such as a process or firewall configuration. We build a system, called NetMedic, that enables detailed diagnosis by harnessing the rich information exposed by modern operating systems and applications. It formulates detailed diagnosis as an inference problem that more faithfully captures the behaviors and interactions of fine-grained network components such as processes. The primary challenge in solving this problem is inferring when a component might be impacting another. Our solution is based on an intuitive technique that uses the joint behavior of two components in the past to estimate the likelihood of them impacting one another in the present. We find that our deployed prototype is effective at diagnosing faults that we inject in a live environment. The faulty component is correctly identified as the most likely culprit in 80% of the cases and is almost always in the list of top five culprits.", "title": "" }, { "docid": "18aeabe12c3f890b5aa6d5b1f6ded386", "text": "Many stream-based applications have sophisticated data processing requirements and real-time performance expectations that need to be met under high-volume, time-varying data streams. In order to address these challenges, we propose novel operator scheduling approaches that specify (1) which operators to schedule (2) in which order to schedule the operators, and (3) how many tuples to process at each execution step. We study our approaches in the context of the Aurora data stream manager. We argue that a fine-grained scheduling approach in combination with various scheduling techniques (such as batching of operators and tuples) can significantly improve system efficiency by reducing various system overheads. We also discuss application-aware extensions that make scheduling decisions according to per-application Quality of Service (QoS) specifications. Finally, we present prototype-based experimental results that characterize the efficiency and effectiveness of our approaches under various stream workloads and processing scenarios.", "title": "" } ]
[ { "docid": "141b333f0c7b256be45c478a79e8f8eb", "text": "Communications regulators over the next decade will spend increasing time on conflicts between the private interests of broadband providers and the public’s interest in a competitive innovation environment centered on the Internet. As the policy questions this conflict raises are basic to communications policy, they are likely to reappear in many different forms. So far, the first major appearance has come in the ‘‘open access’’ (or ‘‘multiple access’’) debate, over the desirability of allowing vertical integration between Internet Service Providers and cable operators. Proponents of open access see it as a structural remedy to guard against an erosion of the ‘‘neutrality’’ of the network as between competing content and applications. Critics, meanwhile, have taken open-access regulation as unnecessary and likely to slow the pace of broadband deployment.", "title": "" }, { "docid": "d202c0bcf5c3bd568da5232a5c5142b3", "text": "In this paper, we revisit author identification research by conducting a new kind of large-scale reproducibility study: we select 15 of the most influential papers for author identification and recruit a group of students to reimplement them from scratch. Since no open source implementations have been released for the selected papers to date, our public release will have a significant impact on researchers entering the field. This way, we lay the groundwork for integrating author identification with information retrieval to eventually scale the former to the web. Furthermore, we assess the reproducibility of all reimplemented papers in detail, and conduct the first comparative evaluation of all approaches on three", "title": "" }, { "docid": "e87de50ea9d62225018db677e1591bd5", "text": "The relationship between culture, language, and thought has long been one of the most important topics for those who wish to understand the nature of human cognition. This issue has been investigated for decades across a broad range of research disciplines. However, there has been scant communication across these different disciplines, a situation largely arising through differences in research interests and discrepancies in the definitions of key terms such as 'culture,' 'language,' and 'thought.' This article reviews recent trends in research on the relation between language, culture and thought to capture how cognitive psychology and cultural psychology have defined 'language' and 'culture,' and how this issue was addressed within each research discipline. We then review recent research conducted in interdisciplinary perspectives, which directly compared the roles of culture and language. Finally, we highlight the importance of considering the complex interplay between culture and language to provide a comprehensive picture of how language and culture affect thought.", "title": "" }, { "docid": "db7a4ab8d233119806e7edf2a34fffd1", "text": "Recent research has shown that the performance of search personalization depends on the richness of user profiles which normally represent the user’s topical interests. In this paper, we propose a new embedding approach to learning user profiles, where users are embedded on a topical interest space. We then directly utilize the user profiles for search personalization. 
Experiments on query logs from a major commercial web search engine demonstrate that our embedding approach improves the performance of the search engine and also achieves better search performance than other strong baselines.", "title": "" }, { "docid": "907e911258dc3723b58aaff2af1e0514", "text": "Multimodal fashion chatbot provides a natural and informative way to fulfill customers' fashion needs. However, making it 'smart' in generating substantive responses remains a challenging problem. In this paper, we present a multimodal domain knowledge enriched fashion chatbot. It forms a taxonomy-based learning module to capture the fine-grained semantics in images and leverages an end-to-end neural conversational model to generate responses based on the conversation history, visual semantics, and domain knowledge. To avoid inconsistent dialogues, deep reinforcement learning method is used to further optimize the model.", "title": "" }, { "docid": "e0fe5ab372bd6d4e39dfc6974832da34", "text": "Purpose – The purpose of this paper is to determine the factors that influence the intention to use and actual usage of a G2B system such as electronic procurement system (EPS) by various ministries in the Government of Malaysia. Design/methodology/approach – The research uses an extension of DeLone and McLean’s model of IS success by including trust, facilitating conditions, and web design quality. The model is tested using an empirical approach. A questionnaire was designed and responses from 358 users from various ministries were collected and analyzed using structural equation modeling (SEM). Findings – The findings of the study indicate that: perceived usefulness, perceived ease of use, assurance of service by service providers, responsiveness of service providers, facilitating conditions, web design (service quality) are strongly linked to intention to use EPS; and intention to use is strongly linked to actual usage behavior. Practical implications – Typically, governments of developing countries spend millions of dollars to implement e-government systems. The investments can be considered useful only if the usage rate is high. The study can help ICT decision makers in government to recognize the critical factors that are responsible for the success of a G2B system like EPS. Originality/value – The model used in the study is one of the few models designed to determine factors influencing intention to use and actual usage behavior in a G2B system in a fast-developing country like Malaysia.", "title": "" }, { "docid": "fa9d304e6f3ff83818f87d3e69401e5c", "text": "Neurotransmitter receptor trafficking during synaptic plasticity requires the concerted action of multiple signaling pathways and the protein transport machinery. However, little is known about the contribution of lipid metabolism during these processes. In this paper, we addressed the question of the role of cholesterol in synaptic changes during long-term potentiation (LTP). We found that N-methyl-d-aspartate-type glutamate receptor (NMDAR) activation during LTP induction leads to a rapid and sustained loss or redistribution of intracellular cholesterol in the neuron. A reduction in cholesterol, in turn, leads to the activation of Cdc42 and the mobilization of GluA1-containing α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid-type glutamate receptors (AMPARs) from Rab11-recycling endosomes into the synaptic membrane, leading to synaptic potentiation. This process is accompanied by an increase of NMDAR function and an enhancement of LTP. 
These results imply that cholesterol acts as a sensor of NMDAR activation and as a trigger of downstream signaling to engage small GTPase (guanosine triphosphatase) activation and AMPAR synaptic delivery during LTP.", "title": "" }, { "docid": "8dbe1044f817afd0b485e8a148f3f635", "text": "Acknowledgements i ACKNOWLEDGEMENTS This master thesis was written at the division of Industrial Marketing and E-commerce at Luleå University of Technology. The time frame for this thesis was ten weeks within which I have gained increased knowledge on the concept of customer retention within professional service firms but foremost, I have gained increased knowledge on how to conduct a thesis on my own. First and foremost, I would like to thank my supervisor and mentor professor Manucher Farhang for his guidance, patience and support during this ten weeks timeframe. I would also like to thank Lowe Brindfors in Stockholm, Favör Reklambyrå in Luleå, Holy Diver in Stockholm and Vinter Reklambyrå in Luleå for taking time to participate in this study and for providing valuable information about their agencies. Last but not least, I would like to thank my family for their understanding of my absence and to thank my closest friends for their never ending support and encouragements. Abstract ii ABSTRACT In recent years, customer retention has gained increased value among both goods and service providing firms. However although extensive research exist on the concept of customer retention and its measures and instruments, studies and research on how professional service firms retain their customers remain limited. Hence, in this thesis how professional service providers retain their targeted customers over time will be investigated through four case studies in the professional service industry, more specifically in the advertising sector. The empirical data was collected through interviews with four Swedish advertising agencies: Lowe Brindfors, Favör Reklambyrå, Holy Diver and Vinter Reklambyrå. The findings of this study first and foremost indicate that professional service providers do not have any formal nor standardized procedure which they follow when it comes to retaining their customers. The strategies employed by the firms are highly customized to each individual customer. Further the findings from this study indicate that in order to retain customers over time professional service providers need to place more efforts on the creation of personal relationships with the clients, as it is a strong bond tying customers to the firm. The findings further imply that the creation of customer satisfaction and the creation of switching barriers are the main strategies employed by firms, for retaining customers. Other factors affecting professional service firms \" retention strategies are the firms \" ability to convey confidence, to get the customers involved, and to be able to deliver …", "title": "" }, { "docid": "4653e7adee2817c93bf726566427b62d", "text": "The extraction of meaningful features from videos is important as they can be used in various applications. Despite its importance, video representation learning has not been studied much, because it is challenging to deal with both content and motion information. We present a Mutual Suppression network (MSnet) to learn disentangled motion and content features in videos. 
The MSnet is trained in such way that content features do not contain motion information and motion features do not contain content information; this is done by suppressing each other with adversarial training. We utilize the disentangled features from the MSnet for several tasks, such as frame reproduction, pixel-level video frame prediction, and dense optical flow estimation, to demonstrate the strength of MSnet. The proposed model outperforms the state-of-the-art methods in pixel-level video frame prediction. The source code will be publicly available.", "title": "" }, { "docid": "1cc6b82cafabcf41477c460020cbfcec", "text": "The movements of ideas and content between locations and languages are unquestionably crucial concerns to researchers of the information age, and Twitter has emerged as a central, global platform on which hundreds of millions of people share knowledge and information. A variety of research has attempted to harvest locational and linguistic metadata from tweets in order to understand important questions related to the 300 million tweets that flow through the platform each day. However, much of this work is carried out with only limited understandings of how best to work with the spatial and linguistic contexts in which the information was produced. Furthermore, standard, well-accepted practices have yet to emerge. As such, this paper studies the reliability of key methods used to determine language and location of content in Twitter. It compares three automated language identification packages to Twitter’s user interface language setting and to a human coding of languages in order to identify common sources of disagreement. The paper also demonstrates that in many cases user-entered profile locations differ from the physical locations users are actually tweeting from. As such, these open-ended, user-generated, profile locations cannot be used as useful proxies for the physical locations from which information is published to Twitter.", "title": "" }, { "docid": "ddaf11dd14952ca864d386a84a0b0f9d", "text": "Bone loss around femoral hip stems is one of the problems threatening the long-term fixation of uncemented stems. Many believe that this phenomenon is caused by reduced stresses in the bone (stress shielding). In the present study the mechanical consequences of different femoral stem materials were investigated using adaptive bone remodeling theory in combination with the finite element method. Bone-remodeling in the femur around the implant and interface stresses between bone and implant were investigated for fully bonded femoral stems. Cemented stems (cobalt-chrome or titanium alloy) caused less bone resorption and lower interface stresses than uncemented stems made from the same materials. The range of the bone resorption predicted in the simulation models was from 23% in the proximal medial cortex surrounding the cemented titanium alloy stem to 76% in the proximal medial cortex around the uncemented cobalt-chrome stem. Very little bone resorption was predicted around a flexible, uncemented \"iso-elastic\" stem, but the proximal interface stresses increased drastically relative to the stiffer uncemented stems composed of cobalt-chrome or titanium alloy. However, the proximal interface stress peak was reduced and shifted during the adaptive remodeling process. 
The latter was found particularly in the stiffer uncemented cobalt-chrome-molybdenum implant and less for the flexible iso-elastic implant.", "title": "" }, { "docid": "4bc74a746ef958a50bb8c542aa25860f", "text": "A new approach to super resolution line spectrum estimation in both temporal and spatial domain using a coprime pair of samplers is proposed. Two uniform samplers with sample spacings MT and NT are used where M and N are coprime and T has the dimension of space or time. By considering the difference set of this pair of sample spacings (which arise naturally in computation of second order moments), sample locations which are O(MN) consecutive multiples of T can be generated using only O(M + N) physical samples. In order to efficiently use these O(MN) virtual samples for super resolution spectral estimation, a novel algorithm based on the idea of spatial smoothing is proposed, which can be used for estimating frequencies of sinusoids buried in noise as well as for estimating Directions-of-Arrival (DOA) of impinging signals on a sensor array. This technique allows us to construct a suitable positive semidefinite matrix on which subspace based algorithms like MUSIC can be applied to detect O(MN) spectral lines using only O(M + N) physical samples.", "title": "" }, { "docid": "6318c9d0e62f1608c105b114c6395e6f", "text": "Myofascial pain associated with myofascial trigger points (MTrPs) is a common cause of nonarticular musculoskeletal pain. Although the presence of MTrPs can be determined by soft tissue palpation, little is known about the mechanisms and biochemical milieu associated with persistent muscle pain. A microanalytical system was developed to measure the in vivo biochemical milieu of muscle in near real time at the subnanogram level of concentration. The system includes a microdialysis needle capable of continuously collecting extremely small samples (approximately 0.5 microl) of physiological saline after exposure to the internal tissue milieu across a 105-microm-thick semi-permeable membrane. This membrane is positioned 200 microm from the tip of the needle and permits solutes of <75 kDa to diffuse across it. Three subjects were selected from each of three groups (total 9 subjects): normal (no neck pain, no MTrP); latent (no neck pain, MTrP present); active (neck pain, MTrP present). The microdialysis needle was inserted in a standardized location in the upper trapezius muscle. Due to the extremely small sample size collected by the microdialysis system, an established microanalytical laboratory, employing immunoaffinity capillary electrophoresis and capillary electrochromatography, performed analysis of selected analytes. Concentrations of protons, bradykinin, calcitonin gene-related peptide, substance P, tumor necrosis factor-alpha, interleukin-1beta, serotonin, and norepinephrine were found to be significantly higher in the active group than either of the other two groups (P < 0.01). pH was significantly lower in the active group than the other two groups (P < 0.03). In conclusion, the described microanalytical technique enables continuous sampling of extremely small quantities of substances directly from soft tissue, with minimal system perturbation and without harmful effects on subjects. The measured levels of analytes can be used to distinguish clinically distinct groups.", "title": "" }, { "docid": "869846130638f39f4f34a4a613fbb607", "text": "Encoding information into synthetic DNA is a novel approach for data storage. 
Due to its natural robustness and size in molecular dimensions, it can be used for long-term and very high-density archiving of data. Since the DNA molecules can be corrupted by thermal process and the writing/reading process of DNA molecules can be faulty, it is necessary to encode the data using error-correcting codes. In this thesis, the student analyzes errors that occur in such a storage system and designs coding schemes that can be used for error correction.", "title": "" }, { "docid": "34976e12739060a443ad0cfbb373fd3b", "text": "The detection of failures is a fundamental issue for fault-tolerance in distributed systems. Recently, many people have come to realize that failure detection ought to be provided as some form of generic service, similar to IP address lookup or time synchronization. However, this has not been successful so far; one of the reasons being the fact that classical failure detectors were not designed to satisfy several application requirements simultaneously. We present a novel abstraction, called accrual failure detectors, that emphasizes flexibility and expressiveness and can serve as a basic building block to implementing failure detectors in distributed systems. Instead of providing information of a binary nature (trust vs. suspect), accrual failure detectors output a suspicion level on a continuous scale. The principal merit of this approach is that it favors a nearly complete decoupling between application requirements and the monitoring of the environment. In this paper, we describe an implementation of such an accrual failure detector, that we call the /spl phi/ failure detector. The particularity of the /spl phi/ failure detector is that it dynamically adjusts to current network conditions the scale on which the suspicion level is expressed. We analyzed the behavior of our /spl phi/ failure detector over an intercontinental communication link over a week. Our experimental results show that if performs equally well as other known adaptive failure detection mechanisms, with an improved flexibility.", "title": "" }, { "docid": "940e7dc630b7dcbe097ade7abb2883a4", "text": "Modern object detection methods typically rely on bounding box proposals as input. While initially popularized in the 2D case, this idea has received increasing attention for 3D bounding boxes. Nevertheless, existing 3D box proposal techniques all assume having access to depth as input, which is unfortunately not always available in practice. In this paper, we therefore introduce an approach to generating 3D box proposals from a single monocular RGB image. To this end, we develop an integrated, fully differentiable framework that inherently predicts a depth map, extracts a 3D volumetric scene representation and generates 3D object proposals. At the core of our approach lies a novel residual, differentiable truncated signed distance function module, which, accounting for the relatively low accuracy of the predicted depth map, extracts a 3D volumetric representation of the scene. 
Our experiments on the standard NYUv2 dataset demonstrate that our framework lets us generate high-quality 3D box proposals and that it outperforms the two-stage technique consisting of successively performing state-of-the-art depth prediction and depthbased 3D proposal generation.", "title": "" }, { "docid": "1b05959625fb8b733e9b9ecf3dcef22e", "text": "Relational agents—computational artifacts designed to build and maintain longterm social-emotional relationships with users—may provide an effective interface modality for older adults. This is especially true when the agents use simulated face-toface conversation as the primary communication medium, and for applications in which repeated interactions over long time periods are required, such as in health behavior change. In this article we discuss the design of a relational agent for older adults that plays the role of an exercise advisor, and report on the results of a longitudinal study involving 21 adults aged 62 to 84, half of whom interacted with the agent daily for two months in their homes and half who served as a standard-of-care control. Results indicate the agent was accepted and liked, and was significantly more efficacious at increasing physical activity (daily steps walked) than the control.", "title": "" }, { "docid": "432f93d3fe5538cd4120dc016bc5331c", "text": "An overview of the current state of the art in scanning micromirror technology for switching, imaging, and beam steering applications is presented. The requirements that drive the design and fabrication technology are covered. Electrostatic, electromagnetic, and magnetic actuation techniques are discussed as well as the motivation toward combdrive configurations from parallel plate configurations for large diameter (mm range) scanners. Suitability of surface micromachining, bulk micromachining, and silicon on insulator (SOI) micromachining technology is presented in the context of the length scale and performance for given scanner applications.", "title": "" }, { "docid": "956660129d1710cf1fa28b8c5f5086b1", "text": "Using magnetic field data as fingerprints for localization in indoor environment has become popular in recent years. Particle filter is often used to improve accuracy. However, most of existing particle filter based approaches either are heavily affected by motion estimation errors, which makes the system unreliable, or impose strong restrictions on smartphone such as fixed phone orientation, which is not practical for real-life use. In this paper, we present an indoor localization system named MaLoc, built on our proposed augmented particle filter. We create several innovations on the motion model, the measurement model and the resampling model to enhance the traditional particle filter. To minimize errors in motion estimation and improve the robustness of particle filter, we augment the particle filter with a dynamic step length estimation algorithm and a heuristic particle resampling algorithm. We use a hybrid measurement model which combines a new magnetic fingerprinting model and the existing magnitude fingerprinting model to improve the system performance and avoid calibrating different smartphone magnetometers. In addition, we present a novel localization quality estimation method and a localization failure detection method to address the \"Kidnapped Robot Problem\" and improve the overall usability. 
Our experimental studies show that MaLoc achieves a localization accuracy of 1~2.8m on average in a large building.", "title": "" }, { "docid": "bf998f5d578e4b6412e67c24625d6716", "text": "Bearings play a critical role in maintaining safety and reliability of rotating machinery. Bearings health condition prediction aims to prevent unexpected failures and minimize overall maintenance costs since it provides decision making information for condition-based maintenance. This paper proposes a Deep Belief Network (DBN)-based data-driven health condition prediction method for bearings. In this prediction method, a DBN is used as the predictor, which includes stacked RBMs and regression output. Our main contributions include development of a deep leaning-based data-driven prognosis solution that does not rely on explicit model equations and prognostic expertise, and providing comprehensive prediction results on five representative runto-failure bearings. The IEEE PHM 2012 challenge dataset is used to demonstrate the effectiveness of the proposed method, and the results are compared with two existing methods. The results show that the proposed method has promising performance in terms of short-term health condition prediction and remaining useful life prediction for bearings.", "title": "" } ]
scidocsrr
7210c486f7678bf514d69b715c7bdd13
Supervised and Traditional Term Weighting Methods for Automatic Text Categorization
[ { "docid": "d0dafdd3a949c0a9725ad6037c16f32b", "text": "KNN and SVM are two machine learning approaches to Text Categorization (TC) based on the Vector Space Model. In this model, borrowed from Information Retrieval, documents are represented as a vector where each component is associated with a particular word from the vocabulary. Traditionally, each component value is assigned using the information retrieval TFIDF measure. While this weighting method seems very appropriate for IR, it is not clear that it is the best choice for TC problems. Actually, this weighting method does not leverage the information implicitly contained in the categorization task to represent documents. In this paper, we introduce a new weighting method based on statistical estimation of the importance of a word for a specific categorization problem. This method also has the benefit to make feature selection implicit, since useless features for the categorization problem considered get a very small weight. Extensive experiments reported in the paper shows that this new weighting method improves significantly the classification accuracy as measured on many categorization tasks.", "title": "" } ]
[ { "docid": "ede0e47ee50f11096ce457adea6b4600", "text": "Recent advances in hardware, software, and communication technologies are enabling the design and implementation of a whole range of different types of networks that are being deployed in various environments. One such network that has received a lot of interest in the last couple of S. Zeadally ( ) Network Systems Laboratory, Department of Computer Science and Information Technology, University of the District of Columbia, 4200, Connecticut Avenue, N.W., Washington, DC 20008, USA e-mail: szeadally@udc.edu R. Hunt Department of Computer Science and Software Engineering, College of Engineering, University of Canterbury, Private Bag 4800, Christchurch, New Zealand e-mail: ray.hunt@canterbury.ac.nz Y.-S. Chen Department of Computer Science and Information Engineering, National Taipei University, 151, University Rd., San Shia, Taipei County, Taiwan e-mail: yschen@mail.ntpu.edu.tw Y.-S. Chen e-mail: yschen@csie.ntpu.edu.tw Y.-S. Chen e-mail: yschen.iet@gmail.com A. Irwin School of Computer and Information Science, University of South Australia, Room F2-22a, Mawson Lakes, South Australia 5095, Australia e-mail: angela.irwin@unisa.edu.au A. Hassan School of Information Science, Computer and Electrical Engineering, Halmstad University, Kristian IV:s väg 3, 301 18 Halmstad, Sweden e-mail: aamhas06@student.hh.se years is the Vehicular Ad-Hoc Network (VANET). VANET has become an active area of research, standardization, and development because it has tremendous potential to improve vehicle and road safety, traffic efficiency, and convenience as well as comfort to both drivers and passengers. Recent research efforts have placed a strong emphasis on novel VANET design architectures and implementations. A lot of VANET research work have focused on specific areas including routing, broadcasting, Quality of Service (QoS), and security. We survey some of the recent research results in these areas. We present a review of wireless access standards for VANETs, and describe some of the recent VANET trials and deployments in the US, Japan, and the European Union. In addition, we also briefly present some of the simulators currently available to VANET researchers for VANET simulations and we assess their benefits and limitations. Finally, we outline some of the VANET research challenges that still need to be addressed to enable the ubiquitous deployment and widespead adoption of scalable, reliable, robust, and secure VANET architectures, protocols, technologies, and services.", "title": "" }, { "docid": "068386a089895bed3a7aebf2d1a7b35d", "text": "The purpose of this prospective study was to assess the efficacy of the Gertzbein classification and the Load Shearing classification in the conservative treatment of thoracolumbar burst spinal fractures. From 1997 to 1999, 30 consecutive patients with single-level thoracolumbar spinal injury with no neurological impairment were classified according to the Gertzbein classification and the Load Shearing scoring, and were treated conservatively. A custom-made thoracolumbosacral orthosis was worn in all patients for 6 months and several radiologic parameters were evaluated, while the Denis Pain and Work Scale were used to assess the clinical outcome. The average follow-up period was 24 months (range 12–39 months). During this period radiograms showed no improvement of any radiologic parameter. 
However, the clinical outcome was satisfactory in 28 of 30 patients with neither pseudarthrosis, nor any complications recorded on completion of treatment. This study showed that thoracolumbar burst fractures Gertzbein A3 with a load shearing score 6 or less can be successfully treated conservatively. Patient selection is a fundamental component in clinical success for these classification systems. Cette étude a pour objectif de classer les fractures comminutives du segment thoraco-lombaire de la colonne vertébrale qui ont été traitées de manière conservatrice, conformément à la classification de Gertzbein et à la classification de la répartition des contraintes. Depuis 1997 à 1999, trente malades présentant une fracture comminutive dans le segment thoraco-lombaire de la colonne vertébrale, sans dommages neurologiques, ont été traités de manière conservatoire, conformément aux classifications de Gertzbein et à la notation de la répartition des charges. Les patients ont porté une orthèse thoraco-lombaire pendant 6 mois et on a procédé à une évaluation des paramètres radiographiques. L'échelle de la douleur et du travail de Dennis a été utilisée pour évaluer les résultats. La durée moyenne d'observation des malades a été de 24 mois (de 12 à 39 mois). Bien que les paramètres radiologiques, pendant cette période, n'aient manifesté aucune amélioration, le résultat clinique de ces patients a été satisfaisant pour 93.33% d' entre eux. L'on n'a pas constaté de complications ni de pseudarthroses. La classification de Gertzbein associe le type de fracture au degré d'instabilité mécanique et au dommage neurologique. La classification de la répartition des contraintes relie l'écrasement et le déplacement de la fracture à la stabilité mécanique. Les fractures explosives du segment lombaire de la colonne vertébrale de type A3, selon Gertzbein, degré 6 ou inférieur à 6, selon la classification des contraintes, peuvent être traitées avec succès de manière conservatrice. Le choix judicieux des patients est important pour le succès clinique de cette méthode de classification.", "title": "" }, { "docid": "fb80c27ab2615373a316605082adadbb", "text": "The use of sparse representations in signal and image processing is gradually increasing in the past several years. Obtaining an overcomplete dictionary from a set of signals allows us to represent them as a sparse linear combination of dictionary atoms. Pursuit algorithms are then used for signal decomposition. A recent work introduced the K-SVD algorithm, which is a novel method for training overcomplete dictionaries that lead to sparse signal representation. In this work we propose a new method for compressing facial images, based on the K-SVD algorithm. We train K-SVD dictionaries for predefined image patches, and compress each new image according to these dictionaries. The encoding is based on sparse coding of each image patch using the relevant trained dictionary, and the decoding is a simple reconstruction of the patches by linear combination of atoms. An essential pre-process stage for this method is an image alignment procedure, where several facial features are detected and geometrically warped into a canonical spatial location. We present this new method, analyze its results and compare it to several competing compression techniques. 
2008 Published by Elsevier Inc.", "title": "" }, { "docid": "d29b7f2808cb7abb2a2e49462b9b3039", "text": "A novel low profile circularly polarized antenna using Substrate Integrated Waveguide technology (SIW) for millimeter-wave (MMW) application is proposed. The antenna employs an X-shaped slot excited by a rectangular SIW and backed by circular cavity. The optimized design has an operating frequency range from 34.7 GHZ to 36.1 GHz with a bandwidth of 4.23%. The overall antenna realized gain is around 6.7 dB over the operating band. The simulated results using both HFSS and CSTMWS show a very good agreement between them.", "title": "" }, { "docid": "97cb7718c75b266a086441912e4b22c3", "text": "Introduction Teacher education finds itself in a critical stage. The pressure towards more school-based programs which is visible in many countries is a sign that not only teachers, but also parents and politicians, are often dissatisfied with the traditional approaches in teacher education In some countries a major part of preservice teacher education has now become the responsibility of the schools, creating a situation in which to a large degree teacher education takes the form of 'training on the job'. The argument for this tendency is that traditional teacher education programs are said to fail in preparing prospective teachers for the realities of the classroom (Goodlad, 1990). Many teacher educators object that a professional teacher should acquire more than just practical tools for managing classroom situations and that it is their job to present student teachers with a broader view on education and to offer them a proper grounding in psychology, sociology, etcetera. This is what Clandinin (1995) calls \" the sacred theory-practice story \" : teacher education conceived as the translation of theory on good teaching into practice. However, many studies have shown that the transfer of theory to practice is meager or even non-existent. Zeichner and Tabachnick (1981), for example, showed that many notions and educational conceptions, developed during preservice teacher education, were \"washed out\" during field experiences. Comparable findings were reported by Cole and Knowles (1993) and Veenman (1984), who also points towards the severe problems teachers experience once they have left preservice teacher education. Lortie (1975) presented us with another early study into the socialization process of teachers, showing the dominant role of practice in shaping teacher development. At Konstanz University in Germany, research has been carried out into the phenomenon of the \"transition shock\" (Müller-Fohrbrodt et al. It showed that, during their induction in the profession, teachers encounter a huge gap between theory and practice. As a consequence, they pass through a quite distinct attitude shift during their first year of teaching, in general creating an adjustment to current practices in the schools and not to recent scientific insights into learning and teaching.", "title": "" }, { "docid": "46921a173ee1ed2a379da869060637d4", "text": "Given a table of data, existing systems can often detect basic atomic types (e.g., strings vs. numbers) for each column. A new generation of data-analytics and data-preparation systems are starting to automatically recognize rich semantic types such as date-time, email address, etc., for such metadata can bring an array of benefits including better table understanding, improved search relevance, precise data validation, and semantic data transformation. 
However, existing approaches only detect a limited number of types using regular-expression-like patterns, which are often inaccurate, and cannot handle rich semantic types such as credit card and ISBN numbers that encode semantic validations (e.g., checksum).\n We developed AUTOTYPE from open-source repositories like GitHub. Users only need to provide a set of positive examples for a target data type and a search keyword, our system will automatically identify relevant code, and synthesize type-detection functions using execution traces. We compiled a benchmark with 112 semantic types, out of which the proposed system can synthesize code to detect 84 such types at a high precision. Applying the synthesized type-detection logic on web table columns have also resulted in a significant increase in data types discovered compared to alternative approaches.", "title": "" }, { "docid": "c451d86c6986fab1a1c4cd81e87e6952", "text": "Large-scale is a trend in person re-identi- fication (re-id). It is important that real-time search be performed in a large gallery. While previous methods mostly focus on discriminative learning, this paper makes the attempt in integrating deep learning and hashing into one framework to evaluate the efficiency and accuracy for large-scale person re-id. We integrate spatial information for discriminative visual representation by partitioning the pedestrian image into horizontal parts. Specifically, Part-based Deep Hashing (PDH) is proposed, in which batches of triplet samples are employed as the input of the deep hashing architecture. Each triplet sample contains two pedestrian images (or parts) with the same identity and one pedestrian image (or part) of the different identity. A triplet loss function is employed with a constraint that the Hamming distance of pedestrian images (or parts) with the same identity is smaller than ones with the different identity. In the experiment, we show that the proposed PDH method yields very competitive re-id accuracy on the large-scale Market-1501 and Market-1501+500K datasets.", "title": "" }, { "docid": "3a1bbaea6dae7f72a5276a32326884fe", "text": "Statistics suggests that there are around 40 cases per million of quadriplegia every year. Great people like Stephen Hawking have been suffering from this phenomenon. Our project attempts to make lives of the people suffering from this phenomenon simple by helping them move around on their own and not being a burden on others. The idea is to create an Eye Controlled System which enables the movement of the patient’s wheelchair depending on the movements of eyeball. A person suffering from quadriplegia can move his eyes and partially tilt his head, thus giving is an opportunity for detecting these movements. There are various kinds of interfaces developed for powered wheelchair and also there are various new techniques invented but these are costly and not affordable to the poor and needy people. In this paper, we have proposed the simpler and cost effective method of developing wheelchair. We have created a system wherein a person sitting on this automated Wheel Chair with a camera mounted on it, is able to move in a direction just by looking in that direction by making eye movements. The captured camera signals are then send to PC and controlled MATLAB, which will then be send to the Arduino circuit over the Serial Interface which in turn will control motors and allow the wheelchair to move in a particular direction. 
The system is affordable and hence can be used by patients spread over a large economy range. KeywordsAutomatic wheelchair, Iris Movement Detection, Servo Motor, Daugman’s algorithm, Arduino.", "title": "" }, { "docid": "4d5119db64e4e0a31064bd22b47e2534", "text": "Reliability and scalability of an application is dependent on how its application state is managed. To run applications at massive scale requires one to operate datastores that can scale to operate seamlessly across thousands of servers and can deal with various failure modes such as server failures, datacenter failures and network partitions. The goal of Amazon DynamoDB is to eliminate this complexity and operational overhead for our customers by offering a seamlessly scalable database service. In this talk, I will talk about how developers can build applications on DynamoDB without having to deal with the complexity of operating a large scale database.", "title": "" }, { "docid": "a0e0d3224cd73539e01f260d564109a7", "text": "We are living in a world where there is an increasing need for evidence in organizations. Good digital evidence is becoming a business enabler. Very few organizations have the structures (management and infrastructure) in place to enable them to conduct cost effective, low-impact and fficient digital investigations [1]. Digital Forensics (DF) is a vehicle that organizations use to provide good and trustworthy evidence and processes. The current DF models concentrate on reactive investigations, with limited reference to DF readiness and live investigations. However, organizations use DF for other purposes for example compliance testing. The paper proposes that DF consists of three components: Pro-active (ProDF), Active (ActDF) and Re-active (ReDF). ProDF concentrates on DF readiness and the proactive responsible use of DF to demonstrate good governance and enhance governance structures. ActDF considers the gathering of live evidence during an ongoing attack with a limited live investigation element whilst ReDF deals with the traditional DF investigation. The paper discusses each component and the relationship between the components.", "title": "" }, { "docid": "602077b20a691854102946757da4b287", "text": "For three-dimensional (3D) ultrasound imaging, connecting elements of a two-dimensional (2D) transducer array to the imaging system's front-end electronics is a challenge because of the large number of array elements and the small element size. To compactly connect the transducer array with electronics, we flip-chip bond a 2D 16 times 16-element capacitive micromachined ultrasonic transducer (CMUT) array to a custom-designed integrated circuit (IC). Through-wafer interconnects are used to connect the CMUT elements on the top side of the array with flip-chip bond pads on the back side. The IC provides a 25-V pulser and a transimpedance preamplifier to each element of the array. For each of three characterized devices, the element yield is excellent (99 to 100% of the elements are functional). Center frequencies range from 2.6 MHz to 5.1 MHz. For pulse-echo operation, the average -6-dB fractional bandwidth is as high as 125%. Transmit pressures normalized to the face of the transducer are as high as 339 kPa and input-referred receiver noise is typically 1.2 to 2.1 rnPa/ radicHz. The flip-chip bonded devices were used to acquire 3D synthetic aperture images of a wire-target phantom. 
Combining the transducer array and IC, as shown in this paper, allows for better utilization of large arrays, improves receive sensitivity, and may lead to new imaging techniques that depend on transducer arrays that are closely coupled to IC electronics.", "title": "" }, { "docid": "34edfbe1b4e326e3ac16d5da81be2435", "text": "Median-shift is a mode seeking algorithm that relies on computing the median of local neighborhoods, instead of the mean. We further combine median-shift with Locality Sensitive Hashing (LSH) and show that the combined algorithm is suitable for clustering large scale, high dimensional data sets. In particular, we propose a new mode detection step that greatly accelerates performance. In the past, LSH was used in conjunction with mean shift only to accelerate nearest neighbor queries. Here we show that we can analyze the density of the LSH bins to quickly detect potential mode candidates and use only them to initialize the median-shift procedure. We use the median, instead of the mean (or its discrete counterpart - the medoid) because the median is more robust and because the median of a set is a point in the set. A median is well defined for scalars but there is no single agreed upon extension of the median to high dimensional data. We adopt a particular extension, known as the Tukey median, and show that it can be computed efficiently using random projections of the high dimensional data onto 1D lines, just like LSH, leading to a tightly integrated and efficient algorithm.", "title": "" }, { "docid": "6701b0ad4c53a57984504c4465bf1364", "text": "In the aftermath of recent corporate scandals, managers and researchers have turned their attention to questions of ethics management. We identify five common myths about business ethics and provide responses that are grounded in theory, research, and business examples. Although the scientific study of business ethics is relatively new, theory and research exist that can guide executives who are trying to better manage their employees' and their own ethical behavior. We recommend that ethical conduct be managed proactively via explicit ethical leadership and conscious management of the organization's ethical culture.", "title": "" }, { "docid": "f7d24a89eaa230585754ba2836140e12", "text": "The formalization of musical composition rules is a topic that has been studied for a long time. It can lead to a better understanding of the underlying processes, and provide a useful tool for musicologist to aid and speed up the analysis process. In our attempt we introduce Schoenberg’s rules from Fundamentals of Musical Composition using a specialized version of Petri nets, called Music Petri nets. Petri nets are a formal tool for studying systems that are concurrent, asynchronous, distributed, parallel, nondeterministic, and/or stochastic. 
We present some examples highlighting how multiple approaches to the analysis task can find counterparts in specific instances of PNs.", "title": "" }, { "docid": "ce36cc78b512a2aafee8308a3f0ebd12", "text": "BACKGROUND\nThe optimal ways of using aromatase inhibitors or tamoxifen as endocrine treatment for early breast cancer remains uncertain.\n\n\nMETHODS\nWe undertook meta-analyses of individual data on 31,920 postmenopausal women with oestrogen-receptor-positive early breast cancer in the randomised trials of 5 years of aromatase inhibitor versus 5 years of tamoxifen; of 5 years of aromatase inhibitor versus 2-3 years of tamoxifen then aromatase inhibitor to year 5; and of 2-3 years of tamoxifen then aromatase inhibitor to year 5 versus 5 years of tamoxifen. Primary outcomes were any recurrence of breast cancer, breast cancer mortality, death without recurrence, and all-cause mortality. Intention-to-treat log-rank analyses, stratified by age, nodal status, and trial, yielded aromatase inhibitor versus tamoxifen first-event rate ratios (RRs).\n\n\nFINDINGS\nIn the comparison of 5 years of aromatase inhibitor versus 5 years of tamoxifen, recurrence RRs favoured aromatase inhibitors significantly during years 0-1 (RR 0·64, 95% CI 0·52-0·78) and 2-4 (RR 0·80, 0·68-0·93), and non-significantly thereafter. 10-year breast cancer mortality was lower with aromatase inhibitors than tamoxifen (12·1% vs 14·2%; RR 0·85, 0·75-0·96; 2p=0·009). In the comparison of 5 years of aromatase inhibitor versus 2-3 years of tamoxifen then aromatase inhibitor to year 5, recurrence RRs favoured aromatase inhibitors significantly during years 0-1 (RR 0·74, 0·62-0·89) but not while both groups received aromatase inhibitors during years 2-4, or thereafter; overall in these trials, there were fewer recurrences with 5 years of aromatase inhibitors than with tamoxifen then aromatase inhibitors (RR 0·90, 0·81-0·99; 2p=0·045), though the breast cancer mortality reduction was not significant (RR 0·89, 0·78-1·03; 2p=0·11). In the comparison of 2-3 years of tamoxifen then aromatase inhibitor to year 5 versus 5 years of tamoxifen, recurrence RRs favoured aromatase inhibitors significantly during years 2-4 (RR 0·56, 0·46-0·67) but not subsequently, and 10-year breast cancer mortality was lower with switching to aromatase inhibitors than with remaining on tamoxifen (8·7% vs 10·1%; 2p=0·015). Aggregating all three types of comparison, recurrence RRs favoured aromatase inhibitors during periods when treatments differed (RR 0·70, 0·64-0·77), but not significantly thereafter (RR 0·93, 0·86-1·01; 2p=0·08). Breast cancer mortality was reduced both while treatments differed (RR 0·79, 0·67-0·92), and subsequently (RR 0·89, 0·81-0·99), and for all periods combined (RR 0·86, 0·80-0·94; 2p=0·0005). All-cause mortality was also reduced (RR 0·88, 0·82-0·94; 2p=0·0003). RRs differed little by age, body-mass index, stage, grade, progesterone receptor status, or HER2 status. There were fewer endometrial cancers with aromatase inhibitors than tamoxifen (10-year incidence 0·4% vs 1·2%; RR 0·33, 0·21-0·51) but more bone fractures (5-year risk 8·2% vs 5·5%; RR 1·42, 1·28-1·57); non-breast-cancer mortality was similar.\n\n\nINTERPRETATION\nAromatase inhibitors reduce recurrence rates by about 30% (proportionately) compared with tamoxifen while treatments differ, but not thereafter. 
5 years of an aromatase inhibitor reduces 10-year breast cancer mortality rates by about 15% compared with 5 years of tamoxifen, hence by about 40% (proportionately) compared with no endocrine treatment.\n\n\nFUNDING\nCancer Research UK, Medical Research Council.", "title": "" }, { "docid": "9e5df05722eea1f31c75f18947b50321", "text": "This Recommendation covers the description of nondestructive electrochemical test methods for the estimation in large size concrete structures of the instantaneous corrosion current density, icorr, expressed in A/cm, by means of the so-called Polarization Resistance technique, Rp, in order to assess the condition of embedded steel reinforcement related to its corrosion. The values of icorr, can be used to assess the rate of degradation of concrete structures affected by reinforcement corrosion. However, they cannot give information on the actual loss in steel cross section which, at present, only can be assessed by means of direct visual observation. Values of the free corrosion potential or half-cell potential, Ecorr [V], of the embedded reinforcing steel and of the electrical concrete resistance, Re [ ], are obtained as preliminary steps of the Rp measurements. Values of the concrete resistivity, [ m], can be calculated from Re values providing the geometrical arrangement of the electrodes enables this calculation. Both parameters, Ecorr and Re (or ) may be used to complement the reliability of the icorr measurements. 2. SIGNIFICANCE AND USE", "title": "" }, { "docid": "3f179f2b05d6d92470afa16d4424777b", "text": "The main goal of this work is to present a planar director added coplanar Vivaldi antenna with improved impedance bandwidth, directivity for microwave breast imaging applications. Three stage design procedure is proposed to enhance the directivity and impedance bandwidth. Proposed antenna operates from 1.3GHz to 7.09GHz which fractional bandwidth is over 138%. The directivity of the proposed antenna is relatively enhanced at higher frequencies. The designed antenna structure has 64×70×1.6 mm3 dimensions and suitable for microwave imaging applications.", "title": "" }, { "docid": "e0a08bac6769382c3168922bdee1939d", "text": "This paper presents the state of art research progress on multilingual multi-document summarization. Our method utilizes hLDA (hierarchical Latent Dirichlet Allocation) algorithm to model the documents firstly. A new feature is proposed from the hLDA modeling results, which can reflect semantic information to some extent. Then it combines this new feature with different other features to perform sentence scoring. According to the results of sentence score, it extracts candidate summary sentences from the documents to generate a summary. We have also attempted to verify the effectiveness and robustness of the new feature through experiments. After the comparison with other summarization methods, our method reveals better performance in some respects.", "title": "" }, { "docid": "b2d7bb53287a3a01aecae5a2e24af03a", "text": "To solve the problem of computational complexity in multilevel inverters due to the large number of space vectors and redundant switching states, a simple and general space vector PWM algorithm is proposed. Based on this algorithm, the location of the reference voltage vector can be easily determined and the calculation of dwell times becomes very simple. More importantly, the proposed algorithm is general and can be directly applied to the cascaded H-bridge inverter of any voltage levels. 
In addition, a new switching sequence, large-small alternation (LSA), is proposed for the minimization of total harmonic distortion. To verify the algorithms, a 7-level cascaded H-bridge inverter drive system was constructed and experimental results are provided.", "title": "" }, { "docid": "3e8de1702f4fd5da19175c29ad2b27ad", "text": "In this work we formulate the problem of image captioning as a multimodal translation task. Analogous to machine translation, we present a sequence-to-sequence recurrent neural networks (RNN) model for image caption generation. Different from most existing work where the whole image is represented by convolutional neural network (CNN) feature, we propose to represent the input image as a sequence of detected objects which feeds as the source sequence of the RNN model. In this way, the sequential representation of an image can be naturally translated to a sequence of words, as the target sequence of the RNN model. To represent the image in a sequential way, we extract the objects features in the image and arrange them in a order using convolutional neural networks. To further leverage the visual information from the encoded objects, a sequential attention layer is introduced to selectively attend to the objects that are related to generate corresponding words in the sentences. Extensive experiments are conducted to validate the proposed approach on popular benchmark dataset, i.e., MS COCO, and the proposed model surpasses the state-of-the-art methods in all metrics following the dataset splits of previous work. The proposed approach is also evaluated by the evaluation server of MS COCO captioning challenge, and achieves very competitive results, e.g., a CIDEr of 1.029 (c5) and 1.064 (c40).", "title": "" } ]
scidocsrr
bb763cdd74fcc98d25bd5c4b84411f44
Behavior Based Manipulation: Theory and Prosecution Evidence
[ { "docid": "d308a1dfb10d538ee0bcb729dcbf2c44", "text": "I test the disposition effect, the tendency of investors to hold losing investments too long and sell winning investments too soon, by analyzing trading records for 10,000 accounts at a large discount brokerage house. These investors demonstrate a strong preference for realizing winners rather than losers. Their behavior does not appear to be motivated by a desire to rebalance portfolios, or to avoid the higher trading costs of low priced stocks. Nor is it justified by subsequent portfolio performance. For taxable investments, it is suboptimal and leads to lower after-tax returns. Tax-motivated selling is most evident in December. THE TENDENCY TO HOLD LOSERS too long and sell winners too soon has been labeled the disposition effect by Shefrin and Statman (1985). For taxable investments the disposition effect predicts that people will behave quite differently than they would if they paid attention to tax consequences. To test the disposition effect, I obtained the trading records from 1987 through 1993 for 10,000 accounts at a large discount brokerage house. An analysis of these records shows that, overall, investors realize their gains more readily than their losses. The analysis also indicates that many investors engage in taxmotivated selling, especially in December. Alternative explanations have been proposed for why investors might realize their profitable investments while retaining their losing investments. Investors may rationally, or irrationally, believe that their current losers will in the future outperform their current * University of California, Davis. This paper is based on my dissertation at the University of California, Berkeley. I would like to thank an anonymous referee, Brad Barber, Peter Klein, Hayne Leland, Richard Lyons, David Modest, John Nofsinger, James Poterba, Mark Rubinstein, Paul Ruud, Richard Sansing, Richard Thaler, Brett Trueman, and participants at the Berkeley Program in Finance, the NBER behavioral finance meeting, the Financial Management Association Conference, the American Finance Association meetings, and seminar participants at UC Berkeley, the Yale School of Management, the University of California, Davis, the University of Southern California, the University of North Carolina, Duke University, the Wharton School, Stanford University, the University of Oregon, Harvard University, the Massachusetts Institute of Technology, the Amos Tuck School, the University of Chicago, the University of British Columbia, Northwestern University, the University of Texas, UCLA, the University of Michigan, and Columbia University for helpful comments. I would also like to thank Jeremy Evnine and especially the discount brokerage house that provided the data necessary for this study. Financial support from the Nasdaq Foundation is gratefully acknowledged.", "title": "" }, { "docid": "b7944edc9e6704cbf59489f112f46c11", "text": "The basic paradigm of asset pricing is in vibrant f lux. The purely rational approach is being subsumed by a broader approach based upon the psychology of investors. In this approach, security expected returns are determined by both risk and misvaluation. This survey sketches a framework for understanding decision biases, evaluates the a priori arguments and the capital market evidence bearing on the importance of investor psychology for security prices, and reviews recent models. The best plan is . . . to profit by the folly of others. — Pliny the Elder, from John Bartlett, comp. 
Familiar Quotations, 9th ed. 1901. IN THE MUDDLED DAYS BEFORE THE RISE of modern finance, some otherwisereputable economists, such as Adam Smith, Irving Fisher, John Maynard Keynes, and Harry Markowitz, thought that individual psychology affects prices.1 What if the creators of asset-pricing theory had followed this thread? Picture a school of sociologists at the University of Chicago proposing the Deficient Markets Hypothesis: that prices inaccurately ref lect all available information. A brilliant Stanford psychologist, call him Bill Blunte, invents the Deranged Anticipation and Perception Model ~or DAPM!, in which proxies for market misvaluation are used to predict security returns. Imagine the euphoria when researchers discovered that these mispricing proxies ~such * Hirshleifer is from the Fisher College of Business, The Ohio State University. This survey was written for presentation at the American Finance Association Annual Meetings in New Orleans, January, 2001. I especially thank the editor, George Constantinides, for valuable comments and suggestions. I also thank Franklin Allen, the discussant, Nicholas Barberis, Robert Bloomfield, Michael Brennan, Markus Brunnermeier, Joshua Coval, Kent Daniel, Ming Dong, Jack Hirshleifer, Harrison Hong, Soeren Hvidkjaer, Ravi Jagannathan, Narasimhan Jegadeesh, Andrew Karolyi, Charles Lee, Seongyeon Lim, Deborah Lucas, Rajnish Mehra, Norbert Schwarz, Jayanta Sen, Tyler Shumway, René Stulz, Avanidhar Subrahmanyam, Siew Hong Teoh, Sheridan Titman, Yue Wang, Ivo Welch, and participants of the Dice Finance Seminar at Ohio State University for very helpful discussions and comments. 1 Smith analyzed how the “overweening conceit” of mankind caused labor to be underpriced in more enterprising pursuits. Young workers do not arbitrage away pay differentials because they are prone to overestimate their ability to succeed. Fisher wrote a book on money illusion; in The Theory of Interest ~~1930!, pp. 493–494! he argued that nominal interest rates systematically fail to adjust sufficiently for inf lation, and explained savings behavior in relation to self-control, foresight, and habits. Keynes ~1936! famously commented on animal spirits in stock markets. Markowitz ~1952! proposed that people focus on gains and losses relative to reference points, and that this helps explain the pricing of insurance and lotteries. THE JOURNAL OF FINANCE • VOL. LVI, NO. 4 • AUGUST 2001", "title": "" } ]
[ { "docid": "eaaf2e04adbc5ea81c30722c815c2a78", "text": "One of the constraints in the design of dry switchable adhesives is the compliance trade-off: compliant structures conform better to surfaces but are limited in strength due to high stored strain energy. In this work we study the effects of bending compliance on the shear adhesion pressures of hybrid electrostatic/gecko-like adhesives of various areas. We reaffirm that normal electrostatic preload increases contact area and show that it is more effective on compliant adhesives. We also show that the gain in contact area can compensate for low shear stiffness and adhesives with high bending compliance outperform stiffer adhesives on substrates with large scale roughness.", "title": "" }, { "docid": "cc17b3548d2224b15090ead8c398f808", "text": "Malaria is a global health problem that threatens 300–500 million people and kills more than one million people annually. Disease control is hampered by the occurrence of multi-drug-resistant strains of the malaria parasite Plasmodium falciparum. Synthetic antimalarial drugs and malarial vaccines are currently being developed, but their efficacy against malaria awaits rigorous clinical testing. Artemisinin, a sesquiterpene lactone endoperoxide extracted from Artemisia annua L (family Asteraceae; commonly known as sweet wormwood), is highly effective against multi-drug-resistant Plasmodium spp., but is in short supply and unaffordable to most malaria sufferers. Although total synthesis of artemisinin is difficult and costly, the semi-synthesis of artemisinin or any derivative from microbially sourced artemisinic acid, its immediate precursor, could be a cost-effective, environmentally friendly, high-quality and reliable source of artemisinin. Here we report the engineering of Saccharomyces cerevisiae to produce high titres (up to 100 mg l-1) of artemisinic acid using an engineered mevalonate pathway, amorphadiene synthase, and a novel cytochrome P450 monooxygenase (CYP71AV1) from A. annua that performs a three-step oxidation of amorpha-4,11-diene to artemisinic acid. The synthesized artemisinic acid is transported out and retained on the outside of the engineered yeast, meaning that a simple and inexpensive purification process can be used to obtain the desired product. Although the engineered yeast is already capable of producing artemisinic acid at a significantly higher specific productivity than A. annua, yield optimization and industrial scale-up will be required to raise artemisinic acid production to a level high enough to reduce artemisinin combination therapies to significantly below their current prices.", "title": "" }, { "docid": "a01a1bb4c5f6fc027384aa40e495eced", "text": "Sentiment classification of grammatical constituents can be explained in a quasicompositional way. The classification of a complex constituent is derived via the classification of its component constituents and operations on these that resemble the usual methods of compositional semantic analysis. This claim is illustrated with a description of sentiment propagation, polarity reversal, and polarity conflict resolution within various linguistic constituent types at various grammatical levels. 
We propose a theoretical composition model, evaluate a lexical dependency parsing post-process implementation, and estimate its impact on general NLP pipelines.", "title": "" }, { "docid": "20966efc2278b0a2129b44c774331899", "text": "In current literature, grief play in Massively Multi-player Online Role-Playing Games (MMORPGs) refers to play styles where a player intentionally disrupts the gaming experience of other players. In our study, we have discovered that player experiences may be disrupted by others unintentionally, and under certain circumstances, some will believe they have been griefed. This paper explores the meaning of grief play, and suggests that some forms of unintentional grief play be called greed play. The paper suggests that greed play be treated as griefing, but a more subtle form. It also investigates the different types of griefing and establishes a taxonomy of terms in grief play.", "title": "" }, { "docid": "31eadbc2b0548132ec45ba869bd3ab83", "text": "Electrostatic discharge (ESD) protection design for mixed-voltage I/O interfaces has been one of the key challenges of system-on-achip (SOC) implementation in nanoscale CMOS processes. The on-chip ESD protection circuit for mixed-voltage I/O interfaces should meet the gate-oxide reliability constraints and prevent the undesired leakage current paths. This paper presents an overview on the design concept and circuit implementations of ESD protection designs for mixed-voltage I/O interfaces with only low-voltage thin-oxide CMOS transistors. Especially, the ESD protection designs for mixed-voltage I/O interfaces with ESD bus and high-voltage-tolerant power-rail ESD clamp circuits are presented and discussed. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6610f89ba1776501d6c0d789703deb4e", "text": "REVIEW QUESTION/OBJECTIVE\nThe objective of this review is to identify the effectiveness of mindfulness based programs in reducing stress experienced by nurses in adult hospitalized patient care settings.\n\n\nBACKGROUND\nNursing professionals face extraordinary stressors in the medical environment. Many of these stressors have always been inherent to the profession: long work hours, dealing with pain, loss and emotional suffering, caring for dying patients and providing support to families. Recently nurses have been experiencing increased stress related to other factors such as staffing shortages, increasingly complex patients, corporate financial constraints and the increased need for knowledge of ever-changing technology. Stress affects high-level cognitive functions, specifically attention and memory, and this increases the already high stakes for nurses. Nurses are required to cope with very difficult situations that require accurate, timely decisions that affect human lives on a daily basis.Lapses in attention increase the risk of serious consequences such as medication errors, failure to recognize life-threatening signs and symptoms, and other essential patient safety issues. Research has also shown that the stress inherent to health care occupations can lead to depression, reduced job satisfaction, psychological distress and disruptions to personal relationships. These outcomes of stress are factors that create scenarios for risk of patient harm.There are three main effects of stress on nurses: burnout, depression and lateral violence. 
Burnout has been defined as a syndrome of depersonalization, emotional exhaustion, and a sense of low personal accomplishment, and the occurrence of burnout has been closely linked to perceived stress. Shimizu, Mizoue, Mishima and Nagata state that nurses experience considerable job stress which has been a major factor in the high rates of burnout that has been recorded among nurses. Zangaro and Soeken share this opinion and state that work related stress is largely contributing to the current nursing shortage. They report that work stress leads to a much higher turnover, especially during the first year after graduation, lowering retention rates in general.In a study conducted in Pennsylvania, researchers found that while 43% of the nurses who reported high levels of burnout indicated their intent to leave their current position, only 11% of nurses who were not burned out intended to leave in the following 12 months. In the same study patient-to-nurse ratios were significantly associated with emotional exhaustion and burnout. An increase of one patient per nurse assignment to a hospital's staffing level increased burnout by 23%.Depression can be defined as a mood disorder that causes a persistent feeling of sadness and loss of interest. Wang found that high levels of work stress were associated with higher risk of mood and anxiety disorders. In Canada one out of every 10 nurses have shown depressive symptoms; compared to the average of 5.1% of the nurses' counterparts who do not work in healthcare. High incidences of depression and depressive symptoms were also reported in studies among Chinese nurses (38%) and Taiwanese nurses (27.7%). In the Taiwanese study the occurrence of depression was significantly and positively correlated to job stress experienced by the nurses (p<0.001).In a multivariate logistic regression, Ohler, Kerr and Forbes also found that job stress was significantly correlated to depression in nurses. The researchers reported that nurses who experienced a higher degree of job stress were 80% more likely to have suffered a major depressive episode in the previous year. A further finding in this study revealed that 75% of the participants also suffered from at least one chronic disease revealing a strong association between depression and other major health issues.A stressful working environment, such as a hospital, could potentially lead to lateral violence among nurses. Lateral violence is a serious occupational health concern among nurses as evidenced by extensive research and literature available on the topic. The impact of lateral violence has been well studied and documented over the past three decades. Griffin and Clark state that lateral violence is a form of bullying grounded in the theoretical framework of the oppression theory. The bullying behaviors occur among members of an oppressed group as a result of feeling powerless and having a perceived lack of control in their workplace. Griffin identified the ten most common forms of lateral violence among nurses as \"non-verbal innuendo, verbal affront, undermining activities, withholding information, sabotage, infighting, scape-goating, backstabbing, failure to respect privacy, and broken confidences\". Nurse-to-nurse lateral violence leads to negative workplace relationships and disrupts team performance, creating an environment where poor patient outcomes, burnout and high staff turnover rates are prevalent.Work-related stressors have been indicated as a potential cause of lateral violence. 
According to the Effort Reward Imbalance model (ERI) developed by Siegrist, work stress develops when an imbalance exists between the effort individuals put into their jobs and the rewards they receive in return. The ERI model has been widely used in occupational health settings based on its predictive power for adverse health and well-being outcomes. The model claims that both high efforts with low rewards could lead to negative emotions in the exposed employees. Vegchel, van Jonge, de Bosma & Schaufeli state that, according to the ERI model, occupational rewards mostly consist of money, esteem and job security or career opportunities. A survey conducted by Reineck & Furino indicated that registered nurses had a very high regard for the intrinsic rewards of their profession but that they identified workplace relationships and stress issues as some of the most important contributors to their frustration and exhaustion. Hauge, Skogstad & Einarsen state that work-related stress further increases the potential for lateral violence as it creates a negative environment for both the target and the perpetrator.Mindfulness based programs have proven to be a promising intervention in reducing stress experienced by nurses. Mindfulness was originally defined by Jon Kabat-Zinn in 1979 as \"paying attention on purpose, in the present moment, and nonjudgmentally, to the unfolding of experience moment to moment\". The Mindfulness Based Stress Reduction (MBSR) program is an educationally based program that focuses on training in the contemplative practice of mindfulness. It is an eight-week program where participants meet weekly for two-and-a-half hours and join a one-day long retreat for six hours. The program incorporates a combination of mindfulness meditation, body awareness and yoga to help increase mindfulness in participants. The practice is meant to facilitate relaxation in the body and calming of the mind by focusing on present-moment awareness. The program has proven to be effective in reducing stress, improving quality of life and increasing self-compassion in healthcare professionals.Researchers have demonstrated that mindfulness interventions can effectively reduce stress, anxiety and depression in both clinical and non-clinical populations. In a meta-analysis of seven studies conducted with healthy participants from the general public, the reviewers reported a significant reduction in stress when the treatment and control groups were compared. However, there have been limited studies to date that focused specifically on the effectiveness of mindfulness programs to reduce stress experienced by nurses.In addition to stress reduction, mindfulness based interventions can also enhance nurses' capacity for focused attention and concentration by increasing present moment awareness. Mindfulness techniques can be applied in everyday situations as well as stressful situations. According to Kabat-Zinn, work-related stress influences people differently based on their viewpoint and their interpretation of the situation. He states that individuals need to be able to see the whole picture, have perspective on the connectivity of all things and not operate on automatic pilot to effectively cope with stress. The goal of mindfulness meditation is to empower individuals to respond to situations consciously rather than automatically.Prior to the commencement of this systematic review, the Cochrane Library and JBI Database of Systematic Reviews and Implementation Reports were searched. 
No previous systematic reviews on the topic of reducing stress experienced by nurses through mindfulness programs were identified. Hence, the objective of this systematic review is to evaluate the best research evidence available pertaining to mindfulness-based programs and their effectiveness in reducing perceived stress among nurses.", "title": "" }, { "docid": "1ca19851a7e013d59d139b6b5fe7177c", "text": "A novel technique has been developed at DRDC Ottawa for fusing electronic warfare (EW) sensor data by numerically combining the probability density function representing the measured value and error estimate provided by each sensor. Multiple measurements are sampled at common discrete intervals to form a probability density grid and combined to produce the fused estimate of the measured parameter. This technique, called the discrete probability density (DPD) method, is used to combine sensor measurements taken from different locations for the EW function of emitter geolocation. Results are presented using simulated line of bearing measurements and are shown to approach the theoretical location accuracy limit predicted by the Cramer-Rao lower bound. The DPD method is proposed for fusing other geolocation sensor data including time of arrival, time difference of arrival, and a priori information.", "title": "" }, { "docid": "d89f10b6df65f5a40bc33cac064e3cdd", "text": "In this paper we provide empirical evidence that using humanlike gaze cues during human-robot handovers can improve the timing and perceived quality of the handover event. Handovers serve as the foundation of many human-robot tasks. Fluent, legible handover interactions require appropriate nonverbal cues to signal handover intent, location and timing. Inspired by observations of human-human handovers, we implemented gaze behaviors on a PR2 humanoid robot. The robot handed over water bottles to a total of 102 naïve subjects while varying its gaze behaviour: no gaze, gaze designed to elicit shared attention at the handover location, and the shared attention gaze complemented with a turn-taking cue. We compared subject perception of and reaction time to the robot-initiated handovers across the three gaze conditions. Results indicate that subjects reach for the offered object significantly earlier when a robot provides a shared attention gaze cue during a handover. We also observed a statistical trend of subjects preferring handovers with turn-taking gaze cues over the other conditions. Our work demonstrates that gaze can play a key role in improving user experience of human-robot handovers, and help make handovers fast and fluent.", "title": "" }, { "docid": "921b4ecaed69d7396285909bd53a3790", "text": "Brain mapping transforms the brain cortical surface to canonical planar domains, which plays a fundamental role in morphological study. Most existing brain mapping methods are based on angle preserving maps, which may introduce large area distortions. This work proposes an area preserving brain mapping method based on Monge-Brenier theory. The brain mapping is intrinsic to the Riemannian metric, unique, and diffeomorphic. The computation is equivalent to convex energy minimization and power Voronoi diagram construction. Comparing to the existing approaches based on Monge-Kantorovich theory, the proposed one greatly reduces the complexity (from n2 unknowns to n ), and improves the simplicity and efficiency. 
Experimental results on caudate nucleus surface mapping and cortical surface mapping demonstrate the efficacy and efficiency of the proposed method. Conventional methods for caudate nucleus surface mapping may suffer from numerical instability; in contrast, the current method produces diffeomorphic mappings stably. In the study of cortical surface classification for recognition of Alzheimer's Disease, the proposed method outperforms some other morphometry features.", "title": "" }, { "docid": "3d3a4cd96a349a7ebbaf168a1685e0d8", "text": "We consider influence maximization (IM) in social networks, which is the problem of maximizing the number of users that become aware of a product by selecting a set of “seed” users to expose the product to. While prior work assumes a known model of information diffusion, we propose a parametrization in terms of pairwise reachability which makes our framework agnostic to the underlying diffusion model. We give a corresponding monotone, submodular surrogate function, and show that it is a good approximation to the original IM objective. We also consider the case of a new marketer looking to exploit an existing social network, while simultaneously learning the factors governing information propagation. For this, we propose a pairwise-influence semi-bandit feedback model and develop a LinUCB-based bandit algorithm. Our model-independent regret analysis shows that our bound on the cumulative regret has a better (as compared to previous work) dependence on the size of the network. By using the graph Laplacian eigenbasis to construct features, we describe a practical LinUCB implementation. Experimental evaluation suggests that our framework is robust to the underlying diffusion model and can efficiently learn a near-optimal solution.", "title": "" }, { "docid": "c2fee2767395b1e9d6490956c7a23268", "text": "In this paper, we elaborate on the advantages of combining two neural network methodologies, convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent neural networks, with the framework of hybrid hidden Markov models (HMM) for recognizing offline handwriting text. CNNs employ shift-invariant filters to generate discriminative features within neural networks. We show that CNNs are powerful tools to extract general purpose features that even work well for unknown classes. We evaluate our system on a Chinese handwritten text database and provide a GPU-based implementation that can be used to reproduce the experiments. All experiments were conducted with RWTH OCR, an open-source system developed at our institute.", "title": "" }, { "docid": "207e90cebdf23fb37f10b5ed690cb4fc", "text": "In the scientific digital libraries, some papers from different research communities can be described by community-dependent keywords even if they share a semantically similar topic. Articles that are not tagged with enough keyword variations are poorly indexed in any information retrieval system, which limits potentially fruitful exchanges between scientific disciplines. In this paper, we introduce a novel experimentally designed pipeline for multi-label semantic-based tagging developed for open-access metadata digital libraries. The approach starts by learning from a standard scientific categorization and a sample of topic-tagged articles to find semantically relevant articles and enrich their metadata accordingly. Our proposed pipeline aims to enable researchers to reach articles from various disciplines that tend to use different terminologies.
It allows retrieving semantically relevant articles given a limited known variation of search terms. In addition to achieving an accuracy that is higher than an expanded query based method using a topic synonym set extracted from a semantic network, our experiments also show a higher computational scalability versus other comparable techniques. We created a new benchmark extracted from the open-access metadata of a scientific digital library and published it along with the experiment code to allow further research in the topic.", "title": "" }, { "docid": "392d6bf78f8a8a59f08d8102cec3ea91", "text": "Cancellous and cortical autografts histologically have three differences: (1) cancellous grafts are revascularized more rapidly and completely than cortical grafts; (2) creeping substitution of cancellous bone initially involves an appositional bone formation phase, followed by a resorptive phase, whereas cortical grafts undergo a reverse creeping substitution process; (3) cancellous grafts tend to repair completely with time, whereas cortical grafts remain as admixtures of necrotic and viable bone. Physiologic skeletal metabolic factors influence the rate, amount, and completeness of bone repair and graft incorporation. The mechanical strengths of cancellous and cortical grafts are correlated with their respective repair processes: cancellous grafts tend to be strengthened first, whereas cortical grafts are weakened. Bone allografts are influenced by the same immunologic factors as other tissue grafts. Fresh bone allografts may be rejected by the host's immune system. The histoincompatibility antigens of bone allografts are presumably the proteins or glycoproteins on cell surfaces. The matrix proteins may or may not elicit graft rejection. The rejection of a bone allograft is considered to be a cellular rather than a humoral response, although the humoral component may play a part. The degree of the host response to an allograft may be related to the antigen concentration and total dose. The rejection of a bone allograft is histologically expressed by the disruption of vessels, an inflammatory process including lymphocytes, fibrous encapsulation, peripheral graft resorption, callus bridging, nonunions, and fatigue fractures.", "title": "" }, { "docid": "f70f825996544350b21177246cb39803", "text": "The goal of our work is to develop an efficient, automatic algorithm for discovering point correspondences between surfaces that are approximately and/or partially isometric.\n Our approach is based on three observations. First, isometries are a subset of the Möbius group, which has low-dimensionality -- six degrees of freedom for topological spheres, and three for topological discs. Second, computing the Möbius transformation that interpolates any three points can be computed in closed-form after a mid-edge flattening to the complex plane. Third, deviations from isometry can be modeled by a transportation-type distance between corresponding points in that plane.\n Motivated by these observations, we have developed a Möbius Voting algorithm that iteratively: 1) samples a triplet of three random points from each of two point sets, 2) uses the Möbius transformations defined by those triplets to map both point sets into a canonical coordinate frame on the complex plane, and 3) produces \"votes\" for predicted correspondences between the mutually closest points with magnitude representing their estimated deviation from isometry. 
The result of this process is a fuzzy correspondence matrix, which is converted to a permutation matrix with simple matrix operations and output as a discrete set of point correspondences with confidence values.\n The main advantage of this algorithm is that it can find intrinsic point correspondences in cases of extreme deformation. During experiments with a variety of data sets, we find that it is able to find dozens of point correspondences between different object types in different poses fully automatically.", "title": "" }, { "docid": "07713323e19b00c93a21a3d121c0039b", "text": "A CMOS nested-chopper instrumentation amplifier is presented with a typical offset of 100 nV. This performance is obtained by nesting an additional low-frequency chopper pair around a conventional chopper amplifier. The inner chopper pair removes the 1/f noise, while the outer chopper pair reduces the residual offset. The test chip is free from 1/f noise and has a thermal noise of 27 nV/√Hz consuming a total supply current of 200 μA.", "title": "" }, { "docid": "4aa96113ad29f737fbbf82f97b558211", "text": "The null vector method, based on a simple linear algebraic concept, is proposed as a solution to the phase retrieval problem. In the case with complex Gaussian random measurement matrices, a non-asymptotic error bound is derived, yielding an asymptotic regime of accurate approximation comparable to that for the spectral vector method.", "title": "" }, { "docid": "bd700aba43a8a8de5615aa1b9ca595a7", "text": "Cloud computing has formed the conceptual and infrastructural basis for tomorrow's computing. The global computing infrastructure is rapidly moving towards cloud-based architecture. While it is important to take advantage of cloud-based computing by means of deploying it in diversified sectors, the security aspects in a cloud-based computing environment remain at the core of interest. Cloud-based services and service providers are evolving, which has resulted in a new business trend based on cloud technology. With the introduction of numerous cloud-based services and geographically dispersed cloud service providers, sensitive information of different entities is normally stored in remote servers and locations, with the possibility of being exposed to unwanted parties in situations where the cloud servers storing that information are compromised. If security is not robust and consistent, the flexibility and advantages that cloud computing has to offer will have little credibility. This paper presents a review on the cloud computing concepts as well as security issues inherent within the context of cloud computing and cloud", "title": "" }, { "docid": "07837d90a558efbbf859c9f77db90e46", "text": "In this paper, we propose a single image super-resolution and enhancement algorithm using local fractal analysis. If we treat the pixels of a natural image as a fractal set, the image gradient can then be regarded as a measure of the fractal set. According to the scale invariance (a special case of bi-Lipschitz invariance) feature of fractal dimension, we will be able to estimate the gradient of a high-resolution image from that of a low-resolution one. Moreover, the high-resolution image can be further enhanced by preserving the local fractal length of gradient during the up-sampling process. We show that a regularization term based on the scale invariance of fractal dimension and length can be effective in recovering details of the high-resolution image.
Analysis is provided on the relation and difference among the proposed approach and some other state of the art interpolation methods. Experimental results show that the proposed method has superior super-resolution and enhancement results as compared to other competitors.", "title": "" }, { "docid": "b8a5d42e3ca09ac236414cd0081f5d48", "text": "Convolution Neural Networks on Graphs are important generalization and extension of classical CNNs. While previous works generally assumed that the graph structures of samples are regular with unified dimensions, in many applications, they are highly diverse or even not well defined. Under some circumstances, e.g. chemical molecular data, clustering or coarsening for simplifying the graphs is hard to be justified chemically. In this paper, we propose a more general and flexible graph convolution network (EGCN) fed by batch of arbitrarily shaped data together with their evolving graph Laplacians trained in supervised fashion. Extensive experiments have been conducted to demonstrate the superior performance in terms of both the acceleration of parameter fitting and the significantly improved prediction accuracy on multiple graph-structured datasets.", "title": "" } ]
scidocsrr
70bed5386ad65387292643119b08ee2e
Fast content-based image retrieval using convolutional neural network and hash function
[ { "docid": "5116079b69aeb1858177429fabd10f80", "text": "Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations at present lack geometric invariance, which limits their robustness for tasks such as classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (or MOP-CNN for short). This approach works by extracting CNN activations for local patches at multiple scales, followed by orderless VLAD pooling of these activations at each scale level and concatenating the result. This feature representation decisively outperforms global CNN activations and achieves state-of-the-art performance for scene classification on such challenging benchmarks as SUN397, MIT Indoor Scenes, and ILSVRC2012, as well as for instance-level retrieval on the Holidays dataset.", "title": "" }, { "docid": "1c11472572758b6f831349ebf6443ad5", "text": "In this paper, we propose a Switchable Deep Network (SDN) for pedestrian detection. The SDN automatically learns hierarchical features, salience maps, and mixture representations of different body parts. Pedestrian detection faces the challenges of background clutter and large variations of pedestrian appearance due to pose and viewpoint changes and other factors. One of our key contributions is to propose a Switchable Restricted Boltzmann Machine (SRBM) to explicitly model the complex mixture of visual variations at multiple levels. At the feature levels, it automatically estimates saliency maps for each test sample in order to separate background clutters from discriminative regions for pedestrian detection. At the part and body levels, it is able to infer the most appropriate template for the mixture models of each part and the whole body. We have devised a new generative algorithm to effectively pretrain the SDN and then fine-tune it with back-propagation. Our approach is evaluated on the Caltech and ETH datasets and achieves the state-of-the-art detection performance.", "title": "" } ]
[ { "docid": "682ac189fe3fdcb602e1a361f957220a", "text": "Event-based distributed systems are programmed to operate in response to events. An event notification service is an application-independent infrastructure that supports the construction of event-based systems. While numerous technologies have been developed for supporting event-based interactions over local-area networks, these technologies do not scale well to wide-area networks such as the Internet. Wide-area networks pose new challenges that have to be attacked with solutions that specifically address issues of scalability. This paper presents Siena, a scalable event notification service that is based on a distributed architecture of event servers. We first present a formally defined interface that is based on an extension to the publish/subscribe protocol. We then describe and compare several different server topologies and routing algorithms. We conclude by briefly discussing related work, our experience with an initial implementation of Siena, and a framework for evaluating the scalability of event notification services such as Siena.", "title": "" }, { "docid": "8cb7b217d74e16b9442b5024cb5ea083", "text": "Users and uses of internet is growing tremendously these days which causing an extreme trouble and efforts at user side to get web pages searched which are as per concern and relevant to user's requirement Generally users approach to search web pages from a large available hierarchy of concepts or use a query to browse web pages from available search engine and receive results based on search pattern where few of the results are relevant to search and most of them are not. Web crawler plays an important role in search engine and act as a key element when performance is considered. This paper includes domain engineering concept and keyword driven crawling with relevancy decision mechanism and uses Ontology concepts which ensures the best path for improving crawler's performance. This paper introduces extraction of URLs based on keyword or search criteria. It extracts URLs for web pages which contains searched keyword in their content and considers such pages only as important and doesn't download web pages irrelevant to search. It offers high optimality comparing with traditional web crawler and can enhance search efficiency with more accuracy.", "title": "" }, { "docid": "908e2a94523743a90a57f9419fef8d28", "text": "Heart rate variability (HRV) is generated by the interaction of multiple regulatory mechanisms that operate on different time scales. This article examines the regulation of the heart, the meaning of HRV, Thayer and Lane’s neurovisceral integration model, the sources of HRV, HRV frequency and time domain measurements, Porges’s polyvagal theory, and resonance frequency breathing. The medical implications of HRV biofeedback for cardiovascular rehabilitation and inflammatory disorders are considered.", "title": "" }, { "docid": "df8c06db4d135c1b28b6445688e7e51d", "text": "Activity detection is a fundamental problem in computer vision. Detecting activities of different temporal scales is particularly challenging. In this paper, we propose the contextual multi-scale region convolutional 3D network (CMSRC3D) for activity detection. To deal with the inherent temporal scale variability of activity instances, the temporal feature pyramid is used to represent activities of different temporal scales. 
On each level of the temporal feature pyramid, an activity proposal detector and an activity classifier are learned to detect activities of specific temporal scales. Temporal contextual information is fused into activity classifiers for better recognition. More importantly, the entire model at all levels can be trained end-to-end. Our CMS-RC3D detector can deal with activities at all temporal scale ranges with only a single pass through the backbone network. We test our detector on two public activity detection benchmarks, THUMOS14 and ActivityNet. Extensive experiments show that the proposed CMS-RC3D detector outperforms state-of-the-art methods on THUMOS14 by a substantial margin and achieves comparable results on ActivityNet despite using a shallow feature extractor.", "title": "" }, { "docid": "4d8f38413169a572c0087fd180a97e44", "text": "As continued scaling of silicon FETs grows increasingly challenging, alternative paths for improving digital system energy efficiency are being pursued. These paths include replacing the transistor channel with emerging nanomaterials (such as carbon nanotubes), as well as utilizing negative capacitance effects in ferroelectric materials in the FET gate stack, e.g., to improve sub-threshold slope beyond the 60 mV/decade limit. However, which path provides the largest energy efficiency benefits—and whether these multiple paths can be combined to achieve additional energy efficiency benefits—is still unclear. Here, we experimentally demonstrate the first negative capacitance carbon nanotube FETs (CNFETs), combining the benefits of both carbon nanotube channels and negative capacitance effects. We demonstrate negative capacitance CNFETs, achieving sub-60 mV/decade sub-threshold slope with an average sub-threshold slope of 55 mV/decade at room temperature. The average ON-current ($I_{\mathrm{ON}}$) of these negative capacitance CNFETs improves by $2.1\times$ versus baseline CNFETs (i.e., without negative capacitance) for the same OFF-current ($I_{\mathrm{OFF}}$). This work demonstrates a promising path forward for future generations of energy-efficient electronic systems.", "title": "" }, { "docid": "ca1aeb2730eb11844d0dde46cf15de4e", "text": "Knowledge of the bio-impedance and its equivalent circuit model at the electrode-electrolyte/tissue interface is important in the application of functional electrical stimulation. Impedance can be used as a merit to evaluate the proximity between electrodes and targeted tissues. Understanding the equivalent circuit parameters of the electrode can further be leveraged to set a safe boundary for stimulus parameters in order not to exceed the water window of electrodes. In this paper, we present an impedance characterization technique and implement a proof-of-concept system using an implantable neural stimulator and an off-the-shelf microcontroller. The proposed technique yields the parameters of the equivalent circuit of an electrode through large signal analysis by injecting a single low-intensity biphasic current stimulus with deliberately inserted inter-pulse delay and by acquiring the transient electrode voltage at three well-specified timings.
Using low-intensity stimulus allows the derivation of electrode double layer capacitance since capacitive charge-injection dominates when electrode overpotential is small. Insertion of the inter-pulse delay creates a controlled discharge time to estimate the Faradic resistance. The proposed method has been validated by measuring the impedance of a) an emulated Randles cells made of discrete circuit components and b) a custom-made platinum electrode array in-vitro, and comparing estimated parameters with the results derived from an impedance analyzer. The proposed technique can be integrated into implantable or commercial neural stimulator system at low extra power consumption, low extra-hardware cost, and light computation.", "title": "" }, { "docid": "9f15297a7eab4084fa7d17b618d82a02", "text": "Purpose – The purpose of this study is to update a global ranking of knowledge management and intellectual capital (KM/IC) academic journals. Design/methodology/approach – Two different approaches were utilized: a survey of 379 active KM/IC researchers; and the journal citation impact method. Scores produced by the application of these methods were combined to develop the final ranking. Findings – Twenty-five KM/IC-centric journals were identified and ranked. The top six journals are: Journal of Knowledge Management, Journal of Intellectual Capital, The Learning Organization, Knowledge Management Research & Practice, Knowledge and Process Management and International Journal of Knowledge Management. Knowledge Management Research & Practice has substantially improved its reputation. The Learning Organization and Journal of Intellectual Capital retained their previous positions due to their strong citation impact. The number of KM/IC-centric and KM/IC-relevant journals has been growing at the pace of one new journal launch per year. This demonstrates that KM/IC is not a scientific fad; instead, the discipline is progressing towards academic maturity and recognition. Practical implications – The developed ranking may be used by various stakeholders, including journal editors, publishers, reviewers, researchers, new scholars, students, policymakers, university administrators, librarians and practitioners. It is a useful tool to further promote the KM/IC discipline and develop its unique identity. It is important for all KM/IC journals to become included in Thomson Reuters’ Journal Citation Reports. Originality/value – This is the most up-to-date ranking of KM/IC journals.", "title": "" }, { "docid": "da28960f4a5daeb80aa5c344db326c8d", "text": "Adaptive traffic signal control, which adjusts traffic signal timing according to real-time traffic, has been shown to be an effective method to reduce traffic congestion. Available works on adaptive traffic signal control make responsive traffic signal control decisions based on human-crafted features (e.g. vehicle queue length). However, human-crafted features are abstractions of raw traffic data (e.g., position and speed of vehicles), which ignore some useful traffic information and lead to suboptimal traffic signal controls. In this paper, we propose a deep reinforcement learning algorithm that automatically extracts all useful features (machine-crafted features) from raw real-time traffic data and learns the optimal policy for adaptive traffic signal control. To improve algorithm stability, we adopt experience replay and target network mechanisms. 
Simulation results show that our algorithm reduces vehicle delay by up to 47% and 86% when compared to another two popular traffic signal control algorithms, longest queue first algorithm and fixed time control algorithm, respectively.", "title": "" }, { "docid": "e5edb616b5d0664cf8108127b0f8684c", "text": "Night vision systems have become an important research area in recent years. Due to variations in weather conditions such as snow, fog, and rain, night images captured by camera may contain high level of noise. These conditions, in real life situations, may vary from no noise to extreme amount of noise corrupting images. Thus, ideal image restoration systems at night must consider various levels of noise and should have a technique to deal with wide range of noisy situations. In this paper, we have presented a new method that works well with different signal to noise ratios ranging from -1.58 dB to 20 dB. For moderate noise, Wigner distribution based algorithm gives good results, whereas for extreme amount of noise 2nd order Wigner distribution is used. The performance of our restoration technique is evaluated using MSE criteria. The results show that our method is capable of dealing with the wide range of Gaussian noise and gives consistent performance throughout.", "title": "" }, { "docid": "08fee0a21076c8a1d65eb7fc0f88610f", "text": "We propose Smells Phishy?, a board game that contributes to raising users' awareness of online phishing scams. We designed and developed the board game and conducted user testing with 21 participants. The results showed that after playing the game, participants had better understanding of phishing scams and learnt how to better protect themselves. Participants enjoyed playing the game and said that it was a fun and exciting experience. The game increased knowledge and awareness, and encouraged discussion.", "title": "" }, { "docid": "cf17e85e27a333b1c724385d92e227e5", "text": "Previous research has shown that light tactile contact increases compliance to a wide variety of requests. However, the effect of touch on compliance to a courtship request has never been studied. In this paper, three experiments were conducted in a courtship context. In the first experiment, a young male confederate in a nightclub asked young women to dance with him during the period when slow songs were played. When formulating his request, the confederate touched (or not) the young woman on her forearm for 1 or 2 seconds. In the second experiment, a 20-year-old confederate approached a young woman in the street and asked her for her phone number. The request was again accompanied by a light touch (or not) on the young woman’s forearm. In both experiments, it was found that touch increased compliance to the man’s request. A replication of the second experiment accompanied with a survey administered to the female showed that high score of dominance was associated with tactile contact. The link between touch and the dominant position of the male was used to explain these results theoretically.", "title": "" }, { "docid": "489015cc236bd20f9b2b40142e4b5859", "text": "We present an experimental study which demonstrates that model checking techniques can be effective in finding synchronization errors in safety critical software when they are combined with a design for verification approach. We apply the concurrency controller design pattern to the implementation of the synchronization operations in Java programs. 
This pattern enables a modular verification strategy by decoupling the behaviors of the concurrency controllers from the behaviors of the threads that use them using interfaces specified as finite state machines. The behavior of a concurrency controller can be verified with respect to arbitrary numbers of threads using infinite state model checking techniques, and the threads which use the controller classes can be checked for interface violations using finite state model checking techniques. We present techniques for thread isolation which enables us to analyze each thread in the program separately during interface verification. We conducted an experimental study investigating the effectiveness of the presented design for verification approach on safety critical air traffic control software. In this study, we first reengineered the Tactical Separation Assisted Flight Environment (TSAFE) software using the concurrency controller design pattern. Then, using fault seeding, we created 40 faulty versions of TSAFE and used both infinite and finite state verification techniques for finding the seeded faults. The experimental study demonstrated the effectiveness of the presented modular verification approach and resulted in a classification of faults that can be found using the presented approach.", "title": "" }, { "docid": "c8daa2571cd7808664d3dbe775cf60ab", "text": "OBJECTIVE\nTo review the research addressing the relationship of childhood trauma to psychosis and schizophrenia, and to discuss the theoretical and clinical implications.\n\n\nMETHOD\nRelevant studies and previous review papers were identified via computer literature searches.\n\n\nRESULTS\nSymptoms considered indicative of psychosis and schizophrenia, particularly hallucinations, are at least as strongly related to childhood abuse and neglect as many other mental health problems. Recent large-scale general population studies indicate the relationship is a causal one, with a dose-effect.\n\n\nCONCLUSION\nSeveral psychological and biological mechanisms by which childhood trauma increases risk for psychosis merit attention. Integration of these different levels of analysis may stimulate a more genuinely integrated bio-psycho-social model of psychosis than currently prevails. Clinical implications include the need for staff training in asking about abuse and the need to offer appropriate psychosocial treatments to patients who have been abused or neglected as children. Prevention issues are also identified.", "title": "" }, { "docid": "b420be5b34185e4604f22b038a605c92", "text": "Computer networks are inherently social networks, linking people, organizations, and knowledge. They are social institutions that should not be studied in isolation but as integrated into everyday lives. The proliferation of computer networks has facilitated a deemphasis on group solidarities at work and in the community and afforded a turn to networked societies that are loosely bounded and sparsely knit. The Internet increases people's social capital, increasing contact with friends and relatives who live nearby and far away. New tools must be developed to help people navigate and find knowledge in complex, fragmented, networked societies.", "title": "" }, { "docid": "bf7cd2303c325968879da72966054427", "text": "Object detection methods fall into two categories, i.e., two-stage and single-stage detectors. The former is characterized by high detection accuracy while the latter usually has considerable inference speed. 
Hence, it is imperative to fuse their metrics for a better accuracy vs. speed trade-off. To this end, we propose a dual refinement network (DRN) to boost the performance of the single-stage detector. Inheriting from the advantages of two-stage approaches (i.e., two-step regression and accurate features for detection), anchor refinement and feature offset refinement are conducted in anchor-offset detection, where the detection head is comprised of deformable convolutions. Moreover, to leverage contextual information for describing objects, we design a multi-deformable head, in which multiple detection paths with different receptive field sizes devote themselves to detecting objects. Extensive experiments on PASCAL VOC and ImageNet VID datasets are conducted, and we achieve the state-of-the-art results and a better accuracy vs. speed trade-off, i.e., 81.4% mAP vs. 42.3 FPS on VOC2007 test set. Codes will be publicly available.", "title": "" }, { "docid": "be4f91a03afd3a90523366403254aeff", "text": "Today, it is generally accepted that sprint performance, like endurance performance, can improve considerably with training. Strength training, especially, plays a key role in this process. Sprint performance will be viewed multidimensionally as an initial acceleration phase (0 to 10 m), a phase of maximum running speed (36 to 100 m) and a transition phase in between. Immediately following the start action, the powerful extensions of the hip, knee and ankle joints are the main accelerators of body mass. However, the hamstrings, the m. adductor magnus and the m. gluteus maximus are considered to make the most important contribution in producing the highest levels of speed. Different training methods are proposed to improve the power output of these muscles. Some of them aim for hypertrophy and others for specific adaptations of the nervous system. This includes general (hypertrophy and neuronal activation), velocity specific (speed-strength) and movement specific (sprint associated exercises) strength training. In developing training strategies, the coach has to keep in mind that strength, power and speed are inherently related to one another, because they are all the output of the same functional systems. As heavy resistance training results in a fibre type IIb into fibre type IIa conversion, the coach has to aim for an optimal balance between sprint specific and nonspecific training components. To achieve this they must take into consideration the specific strength training demands of each individual, based on performance capacity in each specific phase of the sprint.", "title": "" }, { "docid": "73b76fa13443a4c285dc9a97cfaa22dd", "text": "As mobile ad hoc network applications are deployed, security emerges as a central requirement. In this paper, we introduce the wormhole attack, a severe attack in ad hoc networks that is particularly challenging to defend against. The wormhole attack is possible even if the attacker has not compromised any hosts, and even if all communication provides authenticity and confidentiality. In the wormhole attack, an attacker records packets (or bits) at one location in the network, tunnels them (possibly selectively) to another location, and retransmits them there into the network. The wormhole attack can form a serious threat in wireless networks, especially against many ad hoc network routing protocols and location-based wireless security systems. 
For example, most existing ad hoc network routing protocols, without some mechanism to defend against the wormhole attack, would be unable to find routes longer than one or two hops, severely disrupting communication. We present a general mechanism, called packet leashes, for detecting, and thus defending against, wormhole attacks, and we present a specific protocol, called TIK, that implements leashes. We also discuss topology-based wormhole detection, and show that it is impossible for these approaches to detect some wormhole topologies.", "title": "" }, { "docid": "5f120ae2429d7b3c8085f96a63eae817", "text": "Background: Antenatal mothers with anemia are at high risk of a variety of health implications, as are their offspring. Many studies show a high mortality and morbidity related to anemia in pregnancy. Methods: This cross-sectional study was designed to determine factors associated with anemia amongst forty-seven antenatal mothers attending the Antenatal Clinic at Klinik Kesihatan Kuala Besut, Terengganu in November 2009. Systematic random sampling was applied and information was gathered based on patients' medical records and through face-to-face interviews using a structured questionnaire. Results: The mean age of respondents was 28.3 years. More than half of the mothers were multigravidas. Of 47 respondents, 57.4% (95% CI: 43.0, 72.0) were anemic. The proportion of anemia was high for grand multigravida mothers (66.7%), those at third trimester of pregnancy (70.4%), those who did antenatal booking at first trimester (65.4%), those with poor haematinic compliance (76.5%), those not taking any medication (60.5%), those with no co-morbid illnesses (60.0%), mothers with high education level (71.4%) and those with satisfactory monthly income (61.5%). The proportion of anemia was 58.3% and 57.1% for mothers with last child birth spacing of two years or less and more than two years respectively. There was a significant association of haematinic compliance with anemia (OR: 4.571; 95% CI: 1.068, 19.573). Conclusions: Antenatal mothers in this area have a substantial proportion of anemia despite haematinics being freely and routinely prescribed at primary health care centers. Poor haematinic compliance was a significant risk factor. Health education programs regarding haematinic compliance and adequate intake of an iron-rich diet during pregnancy need to be strengthened to curb this problem. *Corresponding author: NH Nik Rosmawati, Environmental Health Unit, Department of Community Medicine, School of Medical Science, Health Campus, Universiti Sains Malaysia, 16150 Kubang Kerian, Kelantan, Malaysia, E-mail: rosmawati@kk.usm.my Received May 03, 2012; Accepted May 24, 2012; Published May 26, 2012 Citation: Nik Rosmawati NH, Mohd Nazri S, Mohd Ismail I (2012) The Rate and Risk Factors for Anemia among Pregnant Mothers in Jerteh Terengganu, Malaysia. J Community Med Health Educ 2:150. doi:10.4172/2161-0711.1000150 Copyright: © 2012 Nik Rosmawati NH, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.", "title": "" }, { "docid": "c11ac0c3e873e13a411ccfd7e271be7c", "text": "Recommender systems show increasing importance with the development of E-commerce, news and multimedia applications.
Traditional recommendation algorithms, such as collaborative-filtering-based and graph-based methods, mainly use items’ original attributes and the relationships between items and users, ignoring items’ chronological order in browsing sessions. In recent years, RNN-based methods have shown their superiority in dealing with sequential data, and several modified RNN models have been proposed. However, these RNN models only use the sequence order of items and neglect items’ browsing time information. It is widely accepted that users tend to spend more time on items they are interested in, and these items are closely related to users’ current target. From this viewpoint, items’ browsing time is an important feature for recommendation. In this paper, we propose a modified RNN-based recommender system called TA4Rec, which recommends the item most likely to be clicked at the next moment. Our main contribution is to introduce a method that calculates time-attention factors from the time spent browsing each item and adds these factors to the RNN-based model. We conduct experiments on the RecSys Challenge 2015 dataset, and the results show that the TA4Rec model achieves a clear improvement in session-based recommendation over the classic session-based recommender method.", "title": "" } ]
scidocsrr
a75da31aead4843c4b2e68c4f0908c4f
Anticipatory Functions, Digital-Analog Forms and Biosemiotics: Integrating the Tools to Model Information and Normativity in Autonomous Biological Agents
[ { "docid": "3b59cba24060a857d3f99f2e7c6e8198", "text": "Terms loaded with informational connotations are often employed to refer to genes and their dynamics. Indeed, genes are usually perceived by biologists as basically ‘the carriers of hereditary information.’ Nevertheless, a number of researchers consider such talk as inadequate and ‘just metaphorical,’ thus expressing a skepticism about the use of the term ‘information’ and its derivatives in biology as a natural science. First, because the meaning of that term in biology is not as precise as it is, for instance, in the mathematical theory of communication. Second, because it seems to refer to a purported semantic property of genes without theoretically clarifying if any genuinely intrinsic semantics is involved. Biosemiotics, a field that attempts to analyze biological systems as semiotic systems, makes it possible to advance in the understanding of the concept of information in biology. From the perspective of Peircean biosemiotics, we develop here an account of genes as signs, including a detailed analysis of two fundamental processes in the genetic information system (transcription and protein synthesis) that have not been made so far in this field of research. Furthermore, we propose here an account of information based on Peircean semiotics and apply it to our analysis of transcription and protein synthesis.", "title": "" } ]
[ { "docid": "aee8080bb0a1c9de2eec907de095f1f9", "text": "PURPOSE OF REVIEW\nCranioplasty has been long practiced, and the reconstructive techniques continue to evolve. With a variety of options available for filling cranial defects, a review of the current practices in cranioplasty allows for reporting the most advanced techniques and specific indications.\n\n\nRECENT FINDINGS\nOverwhelming support remains for the use of autologous bone grafts in filling the cranial defects. Alloplastic alternatives have relative advantages and disadvantages depending on the patient population and specific indications. Application of imaging technology has allowed for the utilization of custom-made alloplastic implants when autologous bone grafts are not feasible.\n\n\nSUMMARY\nAutologous bone grafts remain the best option for adult and pediatric patients with viable donor sites and small-to-medium defects. Large defects in the adult population can be reconstructed with titanium mesh and polymethylmethacrylate overlay with or without the use of computer-assisted design and manufacturing customization. In pediatric patients, exchange cranioplasty offers a viable technique for using an autologous bone graft, while simultaneously filling the donor site with particulate bone graft. Advances in alloplastic materials and custom manufacturing of implants will have an important influence on cranioplasty techniques in the years to come.", "title": "" }, { "docid": "65385d7aee49806476dc913f6768fc43", "text": "Software developers spend a significant portion of their resources handling user-submitted bug reports. For software that is widely deployed, the number of bug reports typically outstrips the resources available to triage them. As a result, some reports may be dealt with too slowly or not at all. \n We present a descriptive model of bug report quality based on a statistical analysis of surface features of over 27,000 publicly available bug reports for the Mozilla Firefox project. The model predicts whether a bug report is triaged within a given amount of time. Our analysis of this model has implications for bug reporting systems and suggests features that should be emphasized when composing bug reports. \n We evaluate our model empirically based on its hypothetical performance as an automatic filter of incoming bug reports. Our results show that our model performs significantly better than chance in terms of precision and recall. In addition, we show that our modelcan reduce the overall cost of software maintenance in a setting where the average cost of addressing a bug report is more than 2% of the cost of ignoring an important bug report.", "title": "" }, { "docid": "b61cbc2f453494a2d7c32d64726d0946", "text": "There is a gap between the theory and practice of distributed systems in terms of the use of time. The theory of distributed systems shunned the notion of time, and introduced “causality tracking” as a clean abstraction to reason about concurrency. The practical systems employed physical time (NTP) information but in a best effort manner due to the difficulty of achieving tight clock synchronization. In an effort to bridge this gap and reconcile the theory and practice of distributed systems on the topic of time, we propose a hybrid logical clock, HLC, that combines the best of logical clocks and physical clocks. HLC captures the causality relationship like logical clocks, and enables easy identification of consistent snapshots in distributed systems. 
Dually, HLC can be used in lieu of physical/NTP clocks since it maintains its logical clock to be always close to the NTP clock. Moreover HLC fits in to 64 bits NTP timestamp format, and is masking tolerant to NTP kinks and uncertainties. We show that HLC has many benefits for wait-free transaction ordering and performing snapshot reads in multiversion globally distributed databases.", "title": "" }, { "docid": "92291c0d91224ec330d5dba9be118f16", "text": "Constraint-based refactoring tools as currently implemented generate their required constraint sets from the programs to be refactored, before any changes are performed. Constraint generation is thus unable to see — and regard — the changed structure of the refactored program, although this new structure may give rise to new constraints that need to be satisfied for the program to maintain its original behaviour. To address this problem, we present a framework allowing the constraint-generation process to foresee all changes a refactoring might perform, generating — at the outset of the refactoring — all constraints necessary to constrain these changes. As we are able to demonstrate, the computational overhead imposed by our framework, although threatening viability in theory, can be reduced to tractable sizes.", "title": "" }, { "docid": "d229250779ccef56e0e61cfacdf6f199", "text": "Much of the research in facility layout has focused on static layouts where the material handling flow is assumed to be constant during the planning horizon. But in today’s market-based, dynamic environment, layout rearrangement may be required during the planning horizon to maintain layout effectiveness. A few algorithms have been proposed to solve this problem. They include dynamic programming and pair-wise exchange. In this paper we propose an improved dynamic pair-wise exchange heuristic based on a previous method published in this journal. Tests show that the proposed method is effective and efficient.", "title": "" }, { "docid": "6573162f8feacae5f121f69780534527", "text": "Larger fields in the Middle-size league as well as the effort to build mixed teams from different universities require a simulation environment which is capable to physically correctly simulate the robots and the environment. A standardized simulation environment has not yet been proposed for this league. In this paper we present our simulation environment, which is based on the Gazebo system. We show how typical Middle-size robots with features like omni-drives and omni-directional cameras can be modeled with relative ease. In particular, the control software for the real robots can be used with few changes, thus facilitating the transfer of results obtained in simulation back to the robots. We address some technical issues such as adapting time-triggered events in the robot control software to the simulation, and we introduce the concept of multi-level abstractions. The latter allows switching between faithful but computionally expensive sensor models and abstract but cheap approximations. These abstractions are needed especially when simulating whole teams of robots.", "title": "" }, { "docid": "4a3042f91b9779bf0cfa92386bb06044", "text": "Attributes are semantically meaningful characteristics whose applicability widely crosses category boundaries. They are particularly important in describing and recognizing concepts where no explicit training example is given, e.g., zero-shot learning. 
Additionally, since attributes are human describable, they can be used for efficient human-computer interaction. In this paper, we propose to employ semantic segmentation to improve facial attribute prediction. The core idea lies in the fact that many facial attributes describe local properties. In other words, the probability of an attribute to appear in a face image is far from being uniform in the spatial domain. We build our facial attribute prediction model jointly with a deep semantic segmentation network. This harnesses the localization cues learned by the semantic segmentation to guide the attention of the attribute prediction to the regions where different attributes naturally show up. As a result of this approach, in addition to recognition, we are able to localize the attributes, despite merely having access to image level labels (weak supervision) during training. We evaluate our proposed method on CelebA and LFWA datasets and achieve superior results to the prior arts. Furthermore, we show that in the reverse problem, semantic face parsing improves when facial attributes are available. That reaffirms the need to jointly model these two interconnected tasks.", "title": "" }, { "docid": "073486fe6bcd756af5f5325b27c57912", "text": "This paper describes the case of a unilateral agraphic patient (GG) who makes letter substitutions only when writing letters and words with his dominant left hand. Accuracy is significantly greater when he is writing with his right hand and when he is asked to spell words orally. GG also makes case errors when writing letters, and will sometimes write words in mixed case. However, these allograph errors occur regardless of which hand he is using to write. In terms of cognitive models of peripheral dysgraphia (e.g., Ellis, 1988), it appears that he has an allograph level impairment that affects writing with both hands, and a separate problem in accessing graphic motor patterns that disrupts writing with the left hand only. In previous studies of left-handed patients with unilateral agraphia (Zesiger & Mayer, 1992; Zesiger, Pegna, & Rilliet, 1994), it has been suggested that allographic knowledge used for writing with both hands is stored exclusively in the left hemisphere, but that graphic motor patterns are represented separately in each hemisphere. The pattern of performance demonstrated by GG strongly supports such a conclusion.", "title": "" }, { "docid": "691f33118b6b03aaa0f7dc0de18b9b3d", "text": "Clinical Decision Support (CDS) can be regarded as an information retrieval (IR) task, where medical records are used to retrieve the full-text biomedical articles to satisfy the information needs from physicians, aiming at better medical solutions. Recent attempts have introduced the advances of deep learning by employing neural IR methods for CDS, where, however, only the document-query relationship is modeled, resulting in non-optimal results in that a medial record can barely reflect the information included in a relevant biomedical article which is usually much longer. Therefore, in addition to the document-query relationship, we propose a document-based neural relevance model (DNRM), addressing the mismatch by utilizing the content of relevant articles to complement the medical records. Specifically, our DNRM model evaluates a document relative to a query and to several pseudo relevant documents for the query at the same time, capturing the interactions from both parts with a feed forward network. 
Experimental results on the standard Text REtrieval Conference (TREC) CDS track dataset confirm the superior performance of the proposed DNRM model.", "title": "" }, { "docid": "bd8f4d5181d0b0bcaacfccd6fb0edd8b", "text": "Mass deployment of RF identification (RFID) is hindered by its cost per tag. The main cost comes from the application-specific integrated circuit (ASIC) chip set in a tag. A chipless tag costs less than a cent, and these have the potential for mass deployment for low-cost, item-level tagging as the replacement technology for optical barcodes. Chipless RFID tags can be directly printed on paper or plastic packets just like barcodes. They are highly useful for automatic identification and authentication, supply-chain automation, and medical applications. Among their potential industrial applications are authenticating of polymer bank notes; scanning of credit cards, library cards, and the like; tracking of inventory in retail settings; and identification of pathology and other medical test samples.", "title": "" }, { "docid": "3dbd27e460fd9d3d80967c8215e7cb29", "text": "Transmission line sag, tension and conductor length varies with the variation of temperature due to thermal expansion and elastic elongation. Beside thermal effect, wind pressure and ice accumulation creates a horizontal and vertical loading on the conductor respectively. Such changes make the calculated data uncertain and require an uncertainty model. A novel affine arithmetic (AA) based transmission line sag, tension and conductor length calculation for parabolic curve is proposed and the proposed method is tested for different test cases. The results are compared with Monte Carlo (MC) and interval arithmetic (IA) methods. The AA based result gives a more conservative bound than MC and IA method in all the cases.", "title": "" }, { "docid": "faa1a49f949d5ba997f4285ef2e708b2", "text": "Appendiceal mucinous neoplasms sometimes present with peritoneal dissemination, which was previously a lethal condition with a median survival of about 3 years. Traditionally, surgical treatment consisted of debulking that was repeated until no further benefit could be achieved; systemic chemotherapy was sometimes used as a palliative option. Now, visible disease tends to be removed through visceral resections and peritonectomy. To avoid entrapment of tumour cells at operative sites and to destroy small residual mucinous tumour nodules, cytoreductive surgery is combined with intraperitoneal chemotherapy with mitomycin at 42 degrees C. Fluorouracil is then given postoperatively for 5 days. If the mucinous neoplasm is minimally invasive and cytoreduction complete, these treatments result in a 20-year survival of 70%. In the absence of a phase III study, this new combined treatment should be regarded as the standard of care for epithelial appendiceal neoplasms and pseudomyxoma peritonei syndrome.", "title": "" }, { "docid": "c7e22f53b86959c1bad9cbf405f6bd01", "text": "The use of an electromechanical valve actuator (EMVA) formed by two magnets and two balanced springs is a promising tool to implement innovative engine management strategies. This actuator needs to be properly controlled to reduce impact velocities during engine valve operations, but the use of a position sensor for each valve is not possible for cost reasons. It is therefore essential to find sensorless solutions based on increasingly predictive models of such a mechatronic actuator. 
To address this task, in this paper, we present an in-depth lumped parameter model of an EMVA based on a hybrid analytical-finite-element method (FEM) approach. The idea is to develop a model of EMVA embedding the well-known predictive behavior of FEM models. All FEM data are then fitted to a smooth curve that renders unknown magnetic quantities in analytical form. In this regard, we select a single-wise function that is able to describe global magnetic quantities as the flux linkage and force both for linear and saturation working regions of the materials. The model intrinsically describes all mutual effects between two magnets. The goodness of the dynamic behavior of the model is finally tested on a series of transient FEM simulations of the actuator in different working conditions.", "title": "" }, { "docid": "d5b070db8330db88c4b6ecfb8e370e09", "text": "The pit organs of elasmobranchs (sharks, skates and rays) are free neuromasts of the mechanosensory lateral line system. Pit organs, however, appear to have some structural differences from the free neuromasts of bony fishes and amphibians. In this study, the morphology of pit organs was investigated by scanning electron microscopy in six shark and three ray species. In each species, pit organs contained typical lateral line hair cells with apical stereovilli of different lengths arranged in an \"organ-pipe\" configuration. Supporting cells also bore numerous apical microvilli taller than those observed in other vertebrate lateral line organs. Pit organs were either covered by overlapping denticles, located in open grooves bordered by denticles, or in grooves without associated denticles. The possible functional implications of these morphological features, including modification of water flow and sensory filtering properties, are discussed.", "title": "" }, { "docid": "8e3eec62b02a9cf7a56803775757925f", "text": "Emotional states of individuals, also known as moods, are central to the expression of thoughts, ideas and opinions, and in turn impact attitudes and behavior. As social media tools are increasingly used by individuals to broadcast their day-to-day happenings, or to report on an external event of interest, understanding the rich ‘landscape’ of moods will help us better interpret and make sense of the behavior of millions of individuals. Motivated by literature in psychology, we study a popular representation of human mood landscape, known as the ‘circumplex model’ that characterizes affective experience through two dimensions: valence and activation. We identify more than 200 moods frequent on Twitter, through mechanical turk studies and psychology literature sources, and report on four aspects of mood expression: the relationship between (1) moods and usage levels, including linguistic diversity of shared content (2) moods and the social ties individuals form, (3) moods and amount of network activity of individuals, and (4) moods and participatory patterns of individuals such as link sharing and conversational engagement. Our results provide at-scale naturalistic assessments and extensions of existing conceptualizations of human mood in social media contexts.", "title": "" }, { "docid": "81fc9abd3e2ad86feff7bd713cff5915", "text": "With the advance of the Internet, e-commerce systems have become extremely important and convenient to human being. More and more products are sold on the web, and more and more people are purchasing products online. 
As a result, an increasing number of customers post product reviews at merchant websites and express their opinions and experiences in online spaces such as Internet forums, discussion groups, and blogs. Consequently, a large number of product-related records are available on the Web, which are useful for both manufacturers and customers. Mining product reviews has become a hot research topic, and prior research has mostly relied on product features to analyze the opinions. Mining product features is therefore the first step toward further review processing. In this paper, we present how to mine product features. The proposed extraction approach differs from previous methods because we mine product features only in opinion sentences in which customers have expressed their positive or negative experiences. In order to find opinion sentences, a SentiWordNet-based algorithm is proposed. There are three steps to perform our task: (1) identifying opinion sentences in each review as positive or negative via SentiWordNet; (2) mining product features that have been commented on by customers from the opinion sentences; (3) pruning features to remove incorrect ones. Compared to previous work, our experimental results achieve higher precision and recall.", "title": "" }, { "docid": "53b1ac64f63cab0d99092764eed4f829", "text": "We present a new unsupervised topic discovery model for a collection of text documents. In contrast to the majority of the state-of-the-art topic models, our model does not break the document's structure, such as paragraphs and sentences. In addition, it preserves word order in the document. As a result, it can generate two levels of topics of different granularity, namely, segment-topics and word-topics. In addition, it can generate n-gram words in each topic. We also develop an approximate inference scheme using the Gibbs sampling method. We conduct extensive experiments using publicly available data from different collections and show that our model improves the quality of several text mining tasks, such as the ability to support fine-grained topics with n-gram words in the correlation graph, the ability to segment a document into topically coherent sections, document classification, and document likelihood estimation.", "title": "" }, { "docid": "eb7582d78766ce274ba899ad2219931f", "text": "BACKGROUND\nPrecise determination of breast volume facilitates reconstructive procedures and helps in the planning of tissue removal for breast reduction surgery. Various methods currently used to measure breast size are limited by technical drawbacks and unreliable volume determinations. The purpose of this study was to develop a formula to predict breast volume based on straightforward anthropomorphic measurements.\n\n\nMETHODS\nOne hundred one women participated in this study. Eleven anthropomorphic measurements were obtained on 202 breasts. Breast volumes were determined using a water displacement technique. Multiple stepwise linear regression was used to determine predictive variables and a unifying formula.\n\n\nRESULTS\nMean patient age was 37.7 years, with a mean body mass index of 31.8. Mean breast volumes on the right and left sides were 1328 and 1305 cc, respectively (range, 330 to 2600 cc). The final regression model incorporated the variables of breast base circumference in a standing position and a vertical measurement from the inframammary fold to a point representing the projection of the fold onto the anterior surface of the breast.
The derived formula showed an adjusted R2 of 0.89, indicating that almost 90 percent of the variation in breast size was explained by the model.\n\n\nCONCLUSION\nSurgeons may find this formula a practical and relatively accurate method of determining breast volume.", "title": "" } ]
scidocsrr
a2cfab8ceb503f61d6704a953b580b88
Document Summarization Based on Data Reconstruction
[ { "docid": "ef444570c043be67453317e26600972f", "text": "In multiple regression it is shown that parameter estimates based on minimum residual sum of squares have a high probability of being unsatisfactory, if not incorrect, if the prediction vectors are not orthogonal. Proposed is an estimation procedure based on adding small positive quantities to the diagonal of X’X. Introduced is the ridge trace, a method for showing in two dimensions the effects of nonorthogonality. It is then shown how to augment X’X to obtain biased estimates with smaller mean square error.", "title": "" } ]
[ { "docid": "d30cdd113970fa8570a795af6b5193e1", "text": "Alignment of time series is an important problem to solve in many scientific disciplines. In particular, temporal alignment of two or more subjects performing similar activities is a challenging problem due to the large temporal scale difference between human actions as well as the inter/intra subject variability. In this paper we present canonical time warping (CTW), an extension of canonical correlation analysis (CCA) for spatio-temporal alignment of human motion between two subjects. CTW extends previous work on CCA in two ways: (i) it combines CCA with dynamic time warping (DTW), and (ii) it extends CCA by allowing local spatial deformations. We show CTW’s effectiveness in three experiments: alignment of synthetic data, alignment of motion capture data of two subjects performing similar actions, and alignment of similar facial expressions made by two people. Our results demonstrate that CTW provides both visually and qualitatively better alignment than state-of-the-art techniques based on DTW.", "title": "" }, { "docid": "ee07cf061a1a3b7283c22434dcabd4eb", "text": "Over the past decade, machine learning techniques and in particular predictive modeling and pattern recognition in biomedical sciences, from drug delivery systems to medical imaging, have become one of the most important methods of assisting researchers in gaining a deeper understanding of issues in their entirety and solving complex medical problems. Deep learning is a powerful machine learning algorithm in classification that extracts low-to high-level features. In this paper, we employ a convolutional neural network to distinguish an Alzheimers brain from a normal, healthy brain. The importance of classifying this type of medical data lies in its potential to develop a predictive model or system in order to recognize the symptoms of Alzheimers disease when compared with normal subjects and to estimate the stages of the disease. Classification of clinical data for medical conditions such as Alzheimers disease has always been challenging, and the most problematic aspect has always been selecting the strongest discriminative features. Using the Convolutional Neural Network (CNN) and the famous architecture LeNet-5, we successfully classified functional MRI data of Alzheimers subjects from normal controls, where the accuracy of testing data reached 96.85%. This experiment suggests that the shift and scale invariant features extracted by CNN followed by deep learning classification represents the most powerful method of distinguishing clinical data from healthy data in fMRI. This approach also allows for expansion of the methodology to predict more complicated systems.", "title": "" }, { "docid": "e6472334285844f89a6174fc4833ae59", "text": "The data collection is an important phase in the internet of things (IoT). The efficient way to collect data from the IoT environment is one of the challenges for the future of IoT. In this paper, we first design a data collection system for IoT. This system is based Bluetooth Low Energy (BLE) technology to forward the collected data from the data collector to the data gateways and proposes the use of smart phones as data collectors. We also propose a first prototype of this system. This prototype was developed with the help of two well-known open technologies, Arduino (Bluno) and Android. Finally, we evaluate the performances of the proposed system through performing a set of experiments on the developed prototype. 
The experiments show the feasibility and applicability of the system, especially where the distance between the data collector and the data gateway is below 6 meters. They also demonstrate the energy efficiency of the system.", "title": "" }, { "docid": "0837c9af9b69367a5a6e32b2f72cef0a", "text": "Machine learning techniques are increasingly being used in making relevant predictions and inferences on individual subjects' neuroimaging scan data. Previous studies have mostly focused on categorical discrimination of patients and matched healthy controls and, more recently, on prediction of individual continuous variables such as clinical scores or age. However, these studies are greatly hampered by the large number of predictor variables (voxels) and the low number of observations (subjects), also known as the curse-of-dimensionality or small-n-large-p problem. As a result, feature reduction techniques such as feature subset selection and dimensionality reduction are used to remove redundant predictor variables and experimental noise, a process which mitigates the curse-of-dimensionality and small-n-large-p effects. Feature reduction is an essential step before training a machine learning model to avoid overfitting and thereby improve model prediction accuracy and generalization ability. In this review, we discuss feature reduction techniques used with machine learning in neuroimaging studies.", "title": "" }, { "docid": "5f5116c8e7324d55b180a65ba45fa1da", "text": "Some aspects of W3C's RDF Model and Syntax Specification require careful reading and interpretation to produce a conformant implementation. Issues have arisen around anonymous resources, reification and RDF Graphs. These and other issues are identified, discussed and an interpretation of each is proposed. Jena, an RDF API in Java based on this interpretation, is described.", "title": "" }, { "docid": "07457116fbecf8e5182459961b8a87d0", "text": "Modeling temporal sequences plays a fundamental role in various modern applications and has drawn more and more attention in the machine learning community. Among those efforts on improving the capability to represent temporal data, the Long Short-Term Memory (LSTM) has achieved great success in many areas. Although the LSTM can capture long-range dependency in the time domain, it does not explicitly model the pattern occurrences in the frequency domain that play an important role in tracking and predicting data points over various time cycles. We propose the State-Frequency Memory (SFM), a novel recurrent architecture that allows separating dynamic patterns across different frequency components and their impacts on modeling the temporal contexts of input sequences. By jointly decomposing memorized dynamics into state-frequency components, the SFM is able to offer a fine-grained analysis of temporal sequences by capturing the dependency of uncovered patterns in both time and frequency domains. Evaluations on several temporal modeling tasks demonstrate that the SFM can yield competitive performance, in particular as compared with the state-of-the-art LSTM models.", "title": "" }, { "docid": "7cf2c2ce9edff28880bc399e642cee44", "text": "This paper provides new results and insights for tracking an extended target object modeled with an Elliptic Random Hypersurface Model (RHM). An Elliptic RHM specifies the relative squared Mahalanobis distance of a measurement source to the center of the target object by means of a one-dimensional random scaling factor.
It is shown that uniformly distributed measurement sources on an ellipse lead to a uniformly distributed squared scaling factor. Furthermore, a Bayesian inference mechanisms tailored to elliptic shapes is introduced, which is also suitable for scenarios with high measurement noise. Closed-form expressions for the measurement update in case of Gaussian and uniformly distributed squared scaling factors are derived.", "title": "" }, { "docid": "3797ca0ca77e51b2e77a1f46665edeb8", "text": "This paper proposes a new method for the Karmed dueling bandit problem, a variation on the regular K-armed bandit problem that offers only relative feedback about pairs of arms. Our approach extends the Upper Confidence Bound algorithm to the relative setting by using estimates of the pairwise probabilities to select a promising arm and applying Upper Confidence Bound with the winner as a benchmark. We prove a sharp finite-time regret bound of order O(K log T ) on a very general class of dueling bandit problems that matches a lower bound proven in (Yue et al., 2012). In addition, our empirical results using real data from an information retrieval application show that it greatly outperforms the state of the art.", "title": "" }, { "docid": "ed769b97bea6d4bbe7e282ad6dbb1c67", "text": "Three basic switching structures are defined: one is formed by two capacitors and three diodes; the other two are formed by two inductors and two diodes. They are inserted in either a Cuk converter, or a Sepic, or a Zeta converter. The SC/SL structures are built in such a way as when the active switch of the converter is on, the two inductors are charged in series or the two capacitors are discharged in parallel. When the active switch is off, the two inductors are discharged in parallel or the two capacitors are charged in series. As a result, the line voltage is reduced more times than in classical Cuk/Sepic/Zeta converters. The steady-state analysis of the new converters, a comparison of the DC voltage gain and of the voltage and current stresses of the new hybrid converters with those of the available quadratic converters, and experimental results are given", "title": "" }, { "docid": "5c7bae1ad8c055449fbdeca6f8a828a8", "text": "In this paper, we address the problem of discovering topically meaningful, yet compact (densely connected) communities in a social network. Assuming the social network to be an integer-weighted graph (where the weights can be intuitively defined as the number of common friends, followers, documents exchanged, etc.), we transform the social network to a more efficient representation. In this new representation, each user is a bag of her one-hop neighbors. We propose a mixed-membership model to identify compact communities using this transformation. Next, we augment the representation and the model to incorporate user-content information imposing topical consistency in the communities. In our model a user can belong to multiple communities and a community can participate in multiple topics. This allows us to discover community memberships as well as community and user interests. Our method outperforms other well known baselines on two real-world social networks. Finally, we also provide a fast, parallel approximation of the same.", "title": "" }, { "docid": "8cfeb661397d6716ca7fa9954de81330", "text": "There has been a great amount of work on query-independent summarization of documents. 
However, due to the success of Web search engines query-specific document summarization (query result snippets) has become an important problem, which has received little attention. We present a method to create query-specific summaries by identifying the most query-relevant fragments and combining them using the semantic associations within the document. In particular, we first add structure to the documents in the preprocessing stage and convert them to document graphs. Then, the best summaries are computed by calculating the top spanning trees on the document graphs. We present and experimentally evaluate efficient algorithms that support computing summaries in interactive time. Furthermore, the quality of our summarization method is compared to current approaches using a user survey.", "title": "" }, { "docid": "2f516ad4e861983730d75cd649fb49c3", "text": "Enhanced flexibility in optical transport networks is a key requirement to support dynamic traffic load in packet-based networks. Today, flexibility is achieved by packet switches linked by static point-to-point transport connections. Wide-stretched synchronization patterns, line coding schemes, and forward error correction (FEC) frames prohibit flexibility right at the transport layer. We introduce a new optical transport concept that combines packet aggregation with a multipoint-to-point line coding and FEC processing. This concept avoids the quadratic full mesh scalability problem of other aggregated switching technologies such as, e.g., wavelength switching. It combines the flexibility of a distributed Ethernet switch and the performance of a leading edge optical transport system.", "title": "" }, { "docid": "6f18824025174e4fcfe0abb96d0a2779", "text": "The efficiency of phosphatases produced by clover, barley, oats and wheat was investigated in soils treated with sodium glycerophosphate, lecithin and phytin. Root exudates of aseptically grown clover were also examined for the breakdown of different organic P compounds in order to test the efficiency of plant-produced phosphatases. In general, the plants were able to use P from all the organic sources used in the study almost as efficiently as inorganic sources. Dry-matter yield, P uptake, acid and alkaline phosphatase activity and microbial population were increased in all the P treatments. Organic P enhanced alkaline phosphatase activity. Lecithin increased fungal, and phytin bacterial growth. There was no alkaline phosphatase activity in the asepticallly grown clover root exudates. Phosphatase released in aseptic culture after 4 weeks of clover growth was able to efficiently hydrolyse sodium glycerophosphate, lecithin and phytin. The amount of organic P hydrolysed in this and in the soil experiment surpassed plant uptake by a factor of 20. This suggests that the limiting factor on plant utilization of organic P is the availability of hydrolysable organic P sources.", "title": "" }, { "docid": "0ee23e7086c287bd52fbb0bb6be2039d", "text": "Mathematics is a ubiquitous foundation of science, technology, and engineering. Specific areas of mathematics, such as numeric and symbolic computation or logics, enjoy considerable software support. Working mathematicians have recently started to adopt Web 2.0 environments, such as blogs and wikis, but these systems lack machine support for knowledge organization and reuse, and they are disconnected from tools such as computer algebra systems or interactive proof assistants. We argue that such scenarios will benefit from Semantic Web technology. 
Conversely, mathematics is still underrepresented on the Web of [Linked] Data. There are mathematics-related Linked Data, for example statistical government data or scientific publication databases, but their mathematical semantics has not yet been modeled. We argue that the services for the Web of Data will benefit from a deeper representation of mathematical knowledge. Mathematical knowledge comprises structures given in a logical language – formulae, statements (e.g. axioms), and theories –, a mixture of rigorous natural language and symbolic notation in documents, application-specific metadata, and discussions about conceptualizations, formalizations, proofs, and (counter-)examples. Our review of vocabularies for representing these structures covers ontologies for mathematical problems, proofs, interlinked scientific publications, scientific discourse, as well as mathematical metadata vocabularies and domain knowledge from pure and applied mathematics. Many fields of mathematics have not yet been implemented as proper Semantic Web ontologies; however, we show that MathML and OpenMath, the standard XML-based exchange languages for mathematical knowledge, can be fully integrated with RDF representations in order to contribute existing mathematical knowledge to the Web of Data. We conclude with a roadmap for getting the mathematical Web of Data started: what datasets to publish, how to interlink them, and how to take advantage of these new connections.", "title": "" }, { "docid": "1272563e64ca327aba1be96f2e045c30", "text": "Current Web search engines are built to serve all users, independent of the special needs of any individual user. Personalization of Web search is to carry out retrieval for each user incorporating his/her interests. We propose a novel technique to learn user profiles from users' search histories. The user profiles are then used to improve retrieval effectiveness in Web search. A user profile and a general profile are learned from the user's search history and a category hierarchy, respectively. These two profiles are combined to map a user query into a set of categories which represent the user's search intention and serve as a context to disambiguate the words in the user's query. Web search is conducted based on both the user query and the set of categories. Several profile learning and category mapping algorithms and a fusion algorithm are provided and evaluated. Experimental results indicate that our technique to personalize Web search is both effective and efficient.", "title": "" }, { "docid": "360a2da8e6dcc35e3c68773f4278c084", "text": "Though dialectal language is increasingly abundant on social media, few resources exist for developing NLP tools to handle such language. We conduct a case study of dialectal language in online conversational text by investigating African-American English (AAE) on Twitter. We propose a distantly supervised model to identify AAE-like language from demographics associated with geo-located messages, and we verify that this language follows well-known AAE linguistic phenomena. In addition, we analyze the quality of existing language identification and dependency parsing tools on AAE-like text, demonstrating that they perform poorly on such text compared to text associated with white speakers. We also provide an ensemble classifier for language identification which eliminates this disparity and release a new corpus of tweets containing AAE-like language. 
Data and software resources are available at: http://slanglab.cs.umass.edu/TwitterAAE (This is an expanded version of our EMNLP 2016 paper, including the appendix at end.)", "title": "" }, { "docid": "37ccaaf82bd001e48ef1d4a2651a5700", "text": "In a wireless network with a single source and a single destination and an arbitrary number of relay nodes, what is the maximum rate of information flow achievable? We make progress on this long standing problem through a two-step approach. First, we propose a deterministic channel model which captures the key wireless properties of signal strength, broadcast and superposition. We obtain an exact characterization of the capacity of a network with nodes connected by such deterministic channels. This result is a natural generalization of the celebrated max-flow min-cut theorem for wired networks. Second, we use the insights obtained from the deterministic analysis to design a new quantize-map-and-forward scheme for Gaussian networks. In this scheme, each relay quantizes the received signal at the noise level and maps it to a random Gaussian codeword for forwarding, and the final destination decodes the source's message based on the received signal. We show that, in contrast to existing schemes, this scheme can achieve the cut-set upper bound to within a gap which is independent of the channel parameters. In the case of the relay channel with a single relay as well as the two-relay Gaussian diamond network, the gap is 1 bit/s/Hz. Moreover, the scheme is universal in the sense that the relays need no knowledge of the values of the channel parameters to (approximately) achieve the rate supportable by the network. We also present extensions of the results to multicast networks, half-duplex networks, and ergodic networks.", "title": "" }, { "docid": "c50955a729e4320d550dffe9422f689d", "text": "A unity-gain buffer has been fabricated in 0.35-mum CMOS technology. The circuit uses feed forward and local feedback in a cascaded source follower circuit as well as two global feedback loops: one to reduce the output resistance, gain error, and offset and a second loop to further reduce gain error. The buffer consumes 3.7 mW at 3.3 V and has a bandwidth of 92 MHz when driving a 13-pF capacitive load", "title": "" }, { "docid": "16b5c5d176f2c9292d9c9238769bab31", "text": "We abstract out the core search problem of active learning schemes, to better understand the extent to which adaptive labeling can improve sample complexity. We give various upper and lower bounds on the number of labels which need to be queried, and we prove that a popular greedy active learning rule is approximately as good as any other strategy for minimizing this number of labels.", "title": "" }, { "docid": "a1907a3772a2754343c84b0e4479e4a4", "text": "The paper presents a genetic algorithm based design approach of the robotic arm trajectory control with the optimization of various criterions. The described methodology is based on the inverse kinematics problem and it additionally considers the minimization of the operating-time, and/or the minimization of energy consumption as well as the minimization of the sum of all rotation changes during the operation cycle. Each criterion evaluation includes the computationally demanding simulation of the arm movement. The proposed approach was verified and all the proposed criterions have been compared on the trajectory optimization of the industrial robot ABB IRB 6400FHD, which has six degrees of freedom.", "title": "" } ]
scidocsrr
095a502816fc15d16fdb577f7ef01acc
Social Networks Under Stress
[ { "docid": "b14da8072f10692ccc325976681b09fd", "text": "Researchers increasingly use electronic communication data to construct and study large social networks, effectively inferring unobserved ties (e.g. i is connected to j) from observed communication events (e.g. i emails j). Often overlooked, however, is the impact of tie definition on the corresponding network, and in turn the relevance of the inferred network to the research question of interest. Here we study the problem of network inference and relevance for two email data sets of different size and origin. In each case, we generate a family of networks parameterized by a threshold condition on the frequency of emails exchanged between pairs of individuals. After demonstrating that different choices of the threshold correspond to dramatically different network structures, we then formulate the relevance of these networks in terms of a series of prediction tasks that depend on various network features. In general, we find: a) that prediction accuracy is maximized over a non-trivial range of thresholds corresponding to 5-10 reciprocated emails per year; b) that for any prediction task, choosing the optimal value of the threshold yields a sizable (~30%) boost in accuracy over naive choices; and c) that the optimal threshold value appears to be (somewhat surprisingly) consistent across data sets and prediction tasks. We emphasize the practical utility in defining ties via their relevance to the prediction task(s) at hand and discuss implications of our empirical results.", "title": "" }, { "docid": "bb999acceac5f0bc1f21879529746546", "text": "How do real graphs evolve over time? What are normal growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network or in a very small number of snapshots; these include heavy tails for in- and out-degree distributions, communities, small-world phenomena, and others. However, given the lack of information about network evolution over long periods, it has been hard to convert these findings into statements about trends over time.\n Here we study a wide range of real graphs, and we observe some surprising phenomena. First, most of these graphs densify over time with the number of edges growing superlinearly in the number of nodes. Second, the average distance between nodes often shrinks over time in contrast to the conventional wisdom that such distance parameters should increase slowly as a function of the number of nodes (like O(log n) or O(log(log n)).\n Existing graph generation models do not exhibit these types of behavior even at a qualitative level. We provide a new graph generator, based on a forest fire spreading process that has a simple, intuitive justification, requires very few parameters (like the flammability of nodes), and produces graphs exhibiting the full range of properties observed both in prior work and in the present study.\n We also notice that the forest fire model exhibits a sharp transition between sparse graphs and graphs that are densifying. Graphs with decreasing distance between the nodes are generated around this transition point.\n Last, we analyze the connection between the temporal evolution of the degree distribution and densification of a graph. We find that the two are fundamentally related. We also observe that real networks exhibit this type of relation between densification and the degree distribution.", "title": "" } ]
[ { "docid": "e7a260bfb238d8b4f147ac9c2a029d1d", "text": "The full-text may be used and/or reproduced, and given to third parties in any format or medium, without prior permission or charge, for personal research or study, educational, or not-for-pro t purposes provided that: • a full bibliographic reference is made to the original source • a link is made to the metadata record in DRO • the full-text is not changed in any way The full-text must not be sold in any format or medium without the formal permission of the copyright holders. Please consult the full DRO policy for further details.", "title": "" }, { "docid": "f2a6a741e1807e1d2d3e7686536a7cde", "text": "We present a pole inspection system for outdoor environments comprising a high-speed camera on a vertical take-off and landing (VTOL) aerial platform. The pole inspection task requires a vehicle to fly close to a structure while maintaining a fixed stand-off distance from it. Typical GPS errors make GPS-based navigation unsuitable for this task however. When flying outdoors a vehicle is also affected by aerodynamics disturbances such as wind gusts, so the onboard controller must be robust to these disturbances in order to maintain the stand-off distance. Two problems must therefor be addressed: fast and accurate state estimation without GPS, and the design of a robust controller. We resolve these problems by a) performing visual + inertial relative state estimation and b) using a robust line tracker and a nested controller design. Our state estimation exploits high-speed camera images (100Hz ) and 70Hz IMU data fused in an Extended Kalman Filter (EKF). We demonstrate results from outdoor experiments for pole-relative hovering, and pole circumnavigation where the operator provides only yaw commands. Lastly, we show results for image-based 3D reconstruction and texture mapping of a pole to demonstrate the usefulness for inspection tasks.", "title": "" }, { "docid": "4c2b13b00ce3c92762fa9bfbd34dd0a0", "text": "Technology advances in the areas of Image processing IP and Information Retrieval IR have evolved separately for a long time However successful content based image retrieval systems require the integration of the two There is an urgent need to develop integration mechanisms to link the image retrieval model to text retrieval model such that the well established text retrieval techniques can be utilized Approaches of converting image feature vectors IP do main to weighted term vectors IR domain are proposed in this paper Furthermore the relevance feedback technique from the IR domain is used in content based image retrieval to demonstrate the e ectiveness of this conversion Exper imental results show that the image retrieval precision in creases considerably by using the proposed integration ap proach", "title": "" }, { "docid": "2bd8a66a3e3cfafc9b13fd7ec47e86fc", "text": "Psidium guajava Linn. (Guava) is used not only as food but also as folk medicine in subtropical areas around the world because of its pharmacologic activities. In particular, the leaf extract of guava has traditionally been used for the treatment of diabetes in East Asia and other countries. Many pharmacological studies have demonstrated the ability of this plant to exhibit antioxidant, hepatoprotective, anti-allergy, antimicrobial, antigenotoxic, antiplasmodial, cytotoxic, antispasmodic, cardioactive, anticough, antidiabetic, antiinflamatory and antinociceptive activities, supporting its traditional uses. 
Suggesting a wide range of clinical applications for the treatment of infantile rotaviral enteritis, diarrhoea and diabetes.", "title": "" }, { "docid": "5ebdda11fbba5d0633a86f2f52c7a242", "text": "What is index modulation (IM)? This is an interesting question that we have started to hear more and more frequently over the past few years. The aim of this paper is to answer this question in a comprehensive manner by covering not only the basic principles and emerging variants of IM, but also reviewing the most recent as well as promising advances in this field toward the application scenarios foreseen in next-generation wireless networks. More specifically, we investigate three forms of IM: spatial modulation, channel modulation and orthogonal frequency division multiplexing (OFDM) with IM, which consider the transmit antennas of a multiple-input multiple-output system, the radio frequency mirrors (parasitic elements) mounted at a transmit antenna and the subcarriers of an OFDM system for IM techniques, respectively. We present the up-to-date advances in these three promising frontiers and discuss possible future research directions for IM-based schemes toward low-complexity, spectrum- and energy-efficient next-generation wireless networks.", "title": "" }, { "docid": "98669391168e56c407b1dc3756348a00", "text": "This study assessed the relation between non-native subjects' age of learning (AOL) English and the overall degree of perceived foreign accent in their production of English sentences. The 240 native Italian (NI) subjects examined had begun learning English in Canada between the ages of 2 and 23 yr, and had lived in Canada for an average of 32 yr. Native English-speaking listeners used a continuous scale to rate sentences spoken by the NI subjects and by subjects in a native English comparison group. Estimates of the AOL of onset of foreign accents varied across the ten listeners who rated the sentences, ranging from 3.1 to 11.6 yr (M = 7.4). Foreign accents were evident in sentences spoken by many NI subjects who had begun learning English long before what is traditionally considered to be the end of a critical period. Very few NI subjects who began learning English after the age of 15 yr received ratings that fell within the native English range. Principal components analyses of the NI subjects' responses to a language background questionnaire were followed by multiple-regression analyses. AOL accounted for an average of 59% of variance in the foreign accent ratings. Language use factors accounted for an additional 15% of variance. Gender was also found to influence degree of foreign accent.", "title": "" }, { "docid": "6db6819627305ab61c2e5d8de70e9c2e", "text": "Purpose – The purpose of this paper is to critically assess current developments in the theory and practice of supply management and through such an assessment to identify barriers, possibilities and key trends. Design/methodology/approach – The paper is based on a three-year detailed study of six supply chains which encompassed 72 companies in Europe. The focal firms in each instance were sophisticated, blue-chip corporations operating on an international scale. Managers across at least four echelons of the supply chain were interviewed and the supply chains were traced and observed. Findings – The paper reveals that supply management is, at best, still emergent in terms of both theory and practice. 
Few practitioners were able – or even seriously aspired – to extend their reach across the supply chain in the manner prescribed in much modern theory. The paper identifies the range of key barriers and enablers to supply management and it concludes with an assessment of the main trends. Research limitations/implications – The research presents a number of challenges to existing thinking about supply strategy and supply chain management. It reveals the substantial gaps between theory and practice. A number of trends are identified which it is argued may work in favour of better prospects for SCM in the future and for the future of supply management as a discipline. Practical implications – A central challenge concerns who could or should manage the supply chain. Barriers to effective supply management are identified and some practical steps to surmount them are suggested. Originality/value – The paper is original in the way in which it draws on an extensive systematic study to critically assess current theory and current developments. The paper points the way for theorists and practitioners to meet future challenges.", "title": "" }, { "docid": "273bf17fa1e6ad901a1bf7dbb540ba76", "text": "BAHARAV, AND ARIEH BORUT. Running in cheetahs, gazelles, and goats: energy cost and limb configuration. Am. J. Physiol. 227(4): 848-850. 1974. Functional anatomists have argued that an animal can be built to run cheaply by lightening the distal parts of the limbs and/or by concentrating the muscle mass of the limbs around their pivot points. These arguments assume that much of the energy expended as animals run at a constant speed goes into alternately accelerating and decelerating the limbs. Gazelles, goats, and cheetahs offer a nice gradation of limb configurations in animals of similar total mass and limb length and, therefore, provide the opportunity to quantify the effect of limb design on the energy cost of running. We found that, despite large differences in limb configuration, the energetic cost of running in cheetahs, gazelles, and goats of about the same mass was nearly identical over a wide range of speeds. Also, the observed energetic cost of running was almost the same as that predicted on the basis of body weight for all three species: cheetah, 0.14 ml O2 (g·km)-1 observed vs. 0.13 ml O2 (g·km)-1 predicted; gazelle, 0.16 ml O2 (g·km)-1 observed vs. 0.15 ml O2 (g·km)-1 predicted; and goat, 0.18 ml O2 (g·km)-1 observed vs. 0.14 ml O2 (g·km)-1 predicted. Thus the relationship between body weight and energetic cost of running apparently applies to animals with very different limb configurations and is more general than anticipated. This suggests that most of the energy expended in running at a constant speed is not used to accelerate and decelerate the limbs.", "title": "" }, { "docid": "1451d5d8729c2e78c8c97e53c44f71a0", "text": "Inflammation plays a key role in the progression of cardiovascular disease, the leading cause of mortality in ESRD (end-stage renal disease). Over recent years, inflammation has been greatly reduced with treatment, but mortality remains high. The aim of the present study was to assess whether low (<2 pg/ml) circulating levels of IL-6 (interleukin-6) are necessary and sufficient to activate the transcription factor STAT3 (signal transducer and activator of transcription 3) in human hepatocytes, and if this micro-inflammatory state was associated with changes in gene expression of some acute-phase proteins involved in cardiovascular mortality in ESRD. 
Human hepatocytes were treated for 24 h in the presence and absence of serum fractions from ESRD patients and healthy subjects with different concentrations of IL-6. The specific role of the cytokine was also evaluated by cell experiments with serum containing blocked IL-6. Furthermore, a comparison of the effects of IL-6 from patient serum and rIL-6 (recombinant IL-6) at increasing concentrations was performed. Confocal microscopy and Western blotting demonstrated that STAT3 activation was associated with IL-6 cell-membrane-bound receptor overexpression only in hepatocytes cultured with 1.8 pg/ml serum IL-6. A linear activation of STAT3 and IL-6 receptor expression was also observed after incubation with rIL-6. Treatment of hepatocytes with 1.8 pg/ml serum IL-6 was also associated with a 31.6-fold up-regulation of hepcidin gene expression and a 8.9-fold down-regulation of fetuin-A gene expression. In conclusion, these results demonstrated that low (<2 pg/ml) circulating levels of IL-6, as present in non-inflamed ESRD patients, are sufficient to activate some inflammatory pathways and can differentially regulate hepcidin and fetuin-A gene expression.", "title": "" }, { "docid": "ce1f67735cfa0e68246e92c53072155f", "text": "Event and relation extraction are central tasks in biomedical text mining. Where relation extraction concerns the detection of semantic connections between pairs of entities, event extraction expands this concept with the addition of trigger words, multiple arguments and nested events, in order to more accurately model the diversity of natural language. In this work we develop a convolutional neural network that can be used for both event and relation extraction. We use a linear representation of the input text, where information is encoded with various vector space embeddings. Most notably, we encode the parse graph into this linear space using dependency path embeddings. We integrate our neural network into the open source Turku Event Extraction System (TEES) framework. Using this system, our machine learning model can be easily applied to a large set of corpora from e.g. the BioNLP, DDI Extraction and BioCreative shared tasks. We evaluate our system on 12 different event, relation and NER corpora, showing good generalizability to many tasks and achieving improved performance on several corpora.", "title": "" }, { "docid": "521bab3f363637e0b8d8d8a830816c9b", "text": "We address the task of Named Entity Disambiguation (NED) for noisy text. We present WikilinksNED, a large-scale NED dataset of text fragments from the web, which is significantly noisier and more challenging than existing newsbased datasets. To capture the limited and noisy local context surrounding each mention, we design a neural model and train it with a novel method for sampling informative negative examples. We also describe a new way of initializing word and entity embeddings that significantly improves performance. Our model significantly outperforms existing state-ofthe-art methods on WikilinksNED while achieving comparable performance on a smaller newswire dataset.", "title": "" }, { "docid": "d37ae8e8fe4d1b5fe67021b354d757c9", "text": "A method for determining the aesthetically proportioned nasal length, tip projection, and radix projection in any given face is described. The proportioned nasal length is two-thirds (0.67) the midfacial height and exactly equal to chin vertical. Tip projection is two-thirds (0.67) the surgically planned or ideal nasal length. 
Radix projection, measured from the junction of the nasal bones with the orbit, is one-third (0.33) the ideal nasal length. The preferred clinical reference for measuring radix projection is the plane of the corneal surface; the radix projects 0.28 times the ideal nasal length from this surface (range: 9-14 mm). These dimensional relationships were confirmed from direct clinical measurements taken from 87 models and subsequently applied in 126 consecutive rhinoplasties. The significance of this dimensional approach to rhinoplasty lies in the fact that planned nasal dimensions are based on facial measurements that allow the nose to vary in size directly with the face. Furthermore, it removes the dorsum as the primary focus in dimensional assessment. Rather, the dorsal prominence may be consistently described relative to a plane connecting the \"ideal\" radix and tip.", "title": "" }, { "docid": "1e4d9d451b3713c9a06a7b0b8cb4e471", "text": "The Web 3.0 is approaching fast and the Online Social Networks (OSNs) are becoming more and more pervasive in today's daily activities. A subsequent consequence is that criminals are running at the same speed as technology and most of the time highly sophisticated technological machineries are used by them. Images are often involved in illicit or illegal activities, with it now being fundamental to try to ascertain as much information on a given image as possible. Today, most of the images coming from the Internet flow through OSNs. The paper analyzes the characteristics of images published on some OSNs. The analysis mainly focuses on how the OSN processes the uploaded images and what changes are made to some of the characteristics, such as JPEG quantization table, pixel resolution and related metadata. The experimental analysis was carried out in June-July 2011 on Facebook, Badoo and Google+. It also has a forensic value: it can be used to establish whether an image has been downloaded from an OSN or not.", "title": "" }, { "docid": "36bdb668b97c77496cdb66c045c58495", "text": "OBJECTIVE\nThe purpose of the present study was to examine the prevalence of fast-food purchases for family meals and the associations with sociodemographic variables, dietary intake, home food environment, and weight status in adolescents and their parents.\n\n\nDESIGN\nThis study is a cross-sectional evaluation of parent interviews and adolescent surveys from Project EAT (Eating Among Teens).\n\n\nSUBJECTS\nSubjects included 902 middle-school and high-school adolescents (53% female, 47% male) and their parents (89% female, 11% male). The adolescent population was ethnically diverse: 29% white, 24% black, 21% Asian American, 14% Hispanic and 12% other.\n\n\nRESULTS\nResults showed that parents who reported purchasing fast food for family meals at least 3 times per week were significantly more likely than parents who reported purchasing fewer fast-food family meals to report the availability of soda pop and chips in the home. Adolescents in homes with fewer than 3 fast-food family meals per week were significantly more likely than adolescents in homes with more fast-food family meals to report having vegetables and milk served with meals at home. Fast-food purchases for family meals were positively associated with the intake of fast foods and salty snack foods for both parents and adolescents; and weight status among parents. 
Fast-food purchases for family meals were negatively associated with parental vegetable intake.\n\n\nCONCLUSIONS\nFast-food purchases may be helpful for busy families, but families need to be educated on the effects of fast food for family meals and how to choose healthier, convenient family meals.", "title": "" }, { "docid": "fd27a21d2eaf5fc5b37d4cba6bd4dbef", "text": "RICHARD M. FELDER and JONI SPURLIN North Carolina State University, Raleigh, North Carolina 27695-7905, USA. E-mail: rmfelder@mindspring.com The Index of Learning Styles (ILS) is an instrument designed to assess preferences on the four dimensions of the Felder-Silverman learning style model. The Web-based version of the ILS is taken hundreds of thousands of times per year and has been used in a number of published studies, some of which include data reflecting on the reliability and validity of the instrument. This paper seeks to provide the first comprehensive examination of the ILS, including answers to several questions: (1) What are the dimensions and underlying assumptions of the model upon which the ILS is based? (2) How should the ILS be used and what misuses should be avoided? (3) What research studies have been conducted using the ILS and what conclusions regarding its reliability and validity may be inferred from the data?", "title": "" }, { "docid": "7593c8e9eb1520f65d7780efbbcedd7d", "text": "We show how to achieve better illumination estimates for color constancy by combining the results of several existing algorithms. We consider committee methods based on both linear and non-linear ways of combining the illumination estimates from the original set of color constancy algorithms. Committees of grayworld, white patch and neural net methods are tested. The committee results are always more accurate than the estimates of any of the other algorithms taken in isolation.", "title": "" }, { "docid": "154e25caf9eb954bb7658304dd37a8a2", "text": "RFID is an automatic identification technology that enables tracking of people and objects. Both identity and location are generally key information for indoor services. An obvious and interesting method to obtain these two types of data is to localize RFID tags attached to devices or objects or carried by people. However, signals in indoor environments are generally harshly impaired and tags have very limited capabilities which pose many challenges for positioning them. In this work, we propose a classification and survey the current state-of-the-art of RFID localization by first presenting this technology and positioning principles. Then, we explain and classify RFID localization techniques. Finally, we discuss future trends in this domain.", "title": "" }, { "docid": "92dca681aa54142d24e3b7bf1854a2d2", "text": "Holographic Recurrent Networks (HRNs) are recurrent networks which incorporate associative memory techniques for storing sequential structure. HRNs can be easily and quickly trained using gradient descent techniques to generate sequences of discrete outputs and trajectories through continuous space. The performance of HRNs is found to be superior to that of ordinary recurrent networks on these sequence generation tasks.", "title": "" }, { "docid": "2a6ccc94e3f3a9beb2992da1e225b720", "text": "This paper proposes a new design of SPOKE-type PM brushless direct current (BLDC) motor without using neodymium PM (Nd-PM). 
The proposed model has an improved output characteristic as it uses the properties of the magnetic flux effect of the SPOKE-type motor with an additional pushing assistant magnet and subassistant magnet in the shape of spoke. In this paper, ferrite PM (Fe-PM) is used instead of Nd-PM. First, the air-gap flux density and backelectromotive force (BEMF) are obtained based on the field model. Second, the analytical expressions of magnet field strength and magnet flux density are obtained in the air gap produced by Fe-PM. The developed analytical model is obtained by solving the magnetic scalar potential. Finally, the air-gap field distribution and BEMF of SPOKE-type motor are analyzed. The analysis works for internal rotor motor topologies. This paper validates results of the analytical model by finite-element analysis for wing-shaped SPOKE-type BLDC motors.", "title": "" }, { "docid": "160d488f12fa1db16756df36c649a76a", "text": "Cutaneous metastases are a rare event, representing 0.7% to 2.0% of all cutaneous malignant neoplasms. They may be the first sign of a previously undiagnosed visceral malignancy or the initial presentation of a recurrent neoplasm. The frequency of cutaneous metastases according to the type of underlying malignancies varies with sex. In men, the most common internal malignancies leading to cutaneous metastases are lung cancer, colon cancer, melanoma, squamous cell carcinoma of the oral cavity, and renal cell carcinoma. In women, breast cancer, colon cancer, melanoma, lung cancer, and ovarian cancer are the most common malignancies leading to cutaneous metastases.", "title": "" } ]
scidocsrr
88dd387422b83e6765347f92f1ddf59a
L0 sparse graphical modeling
[ { "docid": "2871de581ee0efe242438567ca3a57dd", "text": "The sparsity which is implicit in MR images is exploited to significantly undersample k-space. Some MR images such as angiograms are already sparse in the pixel representation; other, more complicated images have a sparse representation in some transform domain-for example, in terms of spatial finite-differences or their wavelet coefficients. According to the recently developed mathematical theory of compressed-sensing, images with a sparse representation can be recovered from randomly undersampled k-space data, provided an appropriate nonlinear recovery scheme is used. Intuitively, artifacts due to random undersampling add as noise-like interference. In the sparse transform domain the significant coefficients stand out above the interference. A nonlinear thresholding scheme can recover the sparse coefficients, effectively recovering the image itself. In this article, practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference. Incoherence is introduced by pseudo-random variable-density undersampling of phase-encodes. The reconstruction is performed by minimizing the l(1) norm of a transformed image, subject to data fidelity constraints. Examples demonstrate improved spatial resolution and accelerated acquisition for multislice fast spin-echo brain imaging and 3D contrast enhanced angiography.", "title": "" } ]
[ { "docid": "3fc3ea7bb6c5342bcbc9d046b0a2537f", "text": "We need to make substantial changes to how we conduct research. First, in response to heightened concern that our published research literature is incomplete and untrustworthy, we need new requirements to ensure research integrity. These include prespecification of studies whenever possible, avoidance of selection and other inappropriate data-analytic practices, complete reporting, and encouragement of replication. Second, in response to renewed recognition of the severe flaws of null-hypothesis significance testing (NHST), we need to shift from reliance on NHST to estimation and other preferred techniques. The new statistics refers to recommended practices, including estimation based on effect sizes, confidence intervals, and meta-analysis. The techniques are not new, but adopting them widely would be new for many researchers, as well as highly beneficial. This article explains why the new statistics are important and offers guidance for their use. It describes an eight-step new-statistics strategy for research with integrity, which starts with formulation of research questions in estimation terms, has no place for NHST, and is aimed at building a cumulative quantitative discipline.", "title": "" }, { "docid": "afa8b1315f051fa6f683f63d58fcc3d4", "text": "Our opinions and judgments are increasingly shaped by what we read on social media -- whether they be tweets and posts in social networks, blog posts, or review boards. These opinions could be about topics such as consumer products, politics, life style, or celebrities. Understanding how users in a network update opinions based on their neighbor's opinions, as well as what global opinion structure is implied when users iteratively update opinions, is important in the context of viral marketing and information dissemination, as well as targeting messages to users in the network.\n In this paper, we consider the problem of modeling how users update opinions based on their neighbors' opinions. We perform a set of online user studies based on the celebrated conformity experiments of Asch [1]. Our experiments are carefully crafted to derive quantitative insights into developing a model for opinion updates (as opposed to deriving psychological insights). We show that existing and widely studied theoretical models do not explain the entire gamut of experimental observations we make. This leads us to posit a new, nuanced model that we term the BVM. We present preliminary theoretical and simulation results on the convergence and structure of opinions in the entire network when users iteratively update their respective opinions according to the BVM. We show that consensus and polarization of opinions arise naturally in this model under easy to interpret initial conditions on the network.", "title": "" }, { "docid": "74b1a39f88ccce2c2f865f36e5117b51", "text": "Recently, the rapid development of word embedding and neural networks has brought new inspiration to various NLP and IR tasks. In this paper, we describe a staged hybrid model combining Recurrent Convolutional Neural Networks (RCNN) with highway layers. The highway network module, incorporated in the middle stage, takes the output of the bidirectional Recurrent Neural Network (Bi-RNN) module in the first stage and provides the Convolutional Neural Network (CNN) module in the last stage with its input. The experiment shows that our model outperforms common neural network models (CNN, RNN, Bi-RNN) on a sentiment analysis task. 
Besides, the analysis of how sequence length influences the RCNN with highway layers shows that our model could learn good representation for the long text.", "title": "" }, { "docid": "c53021193518ebdd7006609463bafbcc", "text": "BACKGROUND AND OBJECTIVES\nSleep is important to child development, but there is limited understanding of individual developmental patterns of sleep, their underlying determinants, and how these influence health and well-being. This article explores the presence of various sleep patterns in children and their implications for health-related quality of life.\n\n\nMETHODS\nData were collected from the Longitudinal Study of Australian Children. Participants included 2926 young children followed from age 0 to 1 years to age 6 to 7 years. Data on sleep duration were collected every 2 years, and covariates (eg, child sleep problems, maternal education) were assessed at baseline. Growth mixture modeling was used to identify distinct longitudinal patterns of sleep duration and significant covariates. Linear regression examined whether the distinct sleep patterns were significantly associated with health-related quality of life.\n\n\nRESULTS\nThe results identified 4 distinct sleep duration patterns: typical sleepers (40.6%), initially short sleepers (45.2%), poor sleepers (2.5%), and persistent short sleepers (11.6%). Factors such as child sleep problems, child irritability, maternal employment, household financial hardship, and household size distinguished between the trajectories. The results demonstrated that the trajectories had different implications for health-related quality of life. For instance, persistent short sleepers had poorer physical, emotional, and social health than typical sleepers.\n\n\nCONCLUSIONS\nThe results provide a novel insight into the nature of child sleep and the implications of differing sleep patterns for health-related quality of life. The findings could inform the development of effective interventions to promote healthful sleep patterns in children.", "title": "" }, { "docid": "14ba4e49e1f773c8f7bfadf8f08a967e", "text": "Mounting evidence suggests that acute and chronic stress, especially the stress-induced release of glucocorticoids, induces changes in glutamate neurotransmission in the prefrontal cortex and the hippocampus, thereby influencing some aspects of cognitive processing. In addition, dysfunction of glutamatergic neurotransmission is increasingly considered to be a core feature of stress-related mental illnesses. Recent studies have shed light on the mechanisms by which stress and glucocorticoids affect glutamate transmission, including effects on glutamate release, glutamate receptors and glutamate clearance and metabolism. This new understanding provides insights into normal brain functioning, as well as the pathophysiology and potential new treatments of stress-related neuropsychiatric disorders.", "title": "" }, { "docid": "6e666fdd26ea00a6eebf7359bdf82329", "text": "Kernel-level attacks or rootkits can compromise the security of an operating system by executing with the privilege of the kernel. Current approaches use virtualization to gain higher privilege over these attacks, and isolate security tools from the untrusted guest VM by moving them out and placing them in a separate trusted VM. 
Although out-of-VM isolation can help ensure security, the added overhead of world-switches between the guest VMs for each invocation of the monitor makes this approach unsuitable for many applications, especially fine-grained monitoring. In this paper, we present Secure In-VM Monitoring (SIM), a general-purpose framework that enables security monitoring applications to be placed back in the untrusted guest VM for efficiency without sacrificing the security guarantees provided by running them outside of the VM. We utilize contemporary hardware memory protection and hardware virtualization features available in recent processors to create a hypervisor protected address space where a monitor can execute and access data in native speeds and to which execution is transferred in a controlled manner that does not require hypervisor involvement. We have developed a prototype into KVM utilizing Intel VT hardware virtualization technology. We have also developed two representative applications for the Windows OS that monitor system calls and process creations. Our microbenchmarks show at least 10 times performance improvement in invocation of a monitor inside SIM over a monitor residing in another trusted VM. With a systematic security analysis of SIM against a number of possible threats, we show that SIM provides at least the same security guarantees as what can be achieved by out-of-VM monitors.", "title": "" }, { "docid": "a2346bc58039ef6f5eb710804e87359d", "text": "This work presents a deep object co-segmentation (DOCS) approach for segmenting common objects of the same class within a pair of images. This means that the method learns to ignore common, or uncommon, background stuff and focuses on common objects. If multiple object classes are presented in the image pair, they are jointly extracted as foreground. To address this task, we propose a CNN-based Siamese encoder-decoder architecture. The encoder extracts high-level semantic features of the foreground objects, a mutual correlation layer detects the common objects, and finally, the decoder generates the output foreground masks for each image. To train our model, we compile a large object co-segmentation dataset consisting of image pairs from the PASCAL dataset with common objects masks. We evaluate our approach on commonly used datasets for co-segmentation tasks and observe that our approach consistently outperforms competing methods, for both seen and unseen object classes.", "title": "" }, { "docid": "69c8584255b16e6bc05fdfc6510d0dc4", "text": "OBJECTIVE\nThis study assesses the psychometric properties of Ward's seven-subtest short form (SF) for WAIS-IV in a sample of adults with schizophrenia (SZ) and schizoaffective disorder.\n\n\nMETHOD\nSeventy patients diagnosed with schizophrenia or schizoaffective disorder were administered the full version of the WAIS-IV. Four different versions of the Ward's SF were then calculated. The subtests used were: Similarities, Digit Span, Arithmetic, Information, Coding, Picture Completion, and Block Design (BD version) or Matrix Reasoning (MR version). Prorated and regression-based formulae were assessed for each version.\n\n\nRESULTS\nThe actual and estimated factorial indexes reflected the typical pattern observed in schizophrenia. The four SFs correlated significantly with their full-version counterparts, but the Perceptual Reasoning Index (PRI) correlated below the acceptance threshold for all four versions. The regression-derived estimates showed larger differences compared to the full form. 
The four forms revealed comparable but generally low clinical category agreement rates for factor indexes. All SFs showed an acceptable reliability, but they were not correlated with clinical outcomes.\n\n\nCONCLUSIONS\nThe WAIS-IV SF offers a good estimate of WAIS-IV intelligence quotient, which is consistent with previous results. Although the overall scores are comparable between the four versions, the prorated forms provided a better estimation of almost all indexes. MR can be used as an alternative for BD without substantially changing the psychometric properties of the SF. However, we recommend a cautious use of these abbreviated forms when it is necessary to estimate the factor index scores, especially PRI, and Processing Speed Index.", "title": "" }, { "docid": "13d1b0637c12d617702b4f80fd7874ef", "text": "Linear-time algorithms for testing the planarity of a graph are well known for over 35 years. However, these algorithms are quite involved and recent publications still try to give simpler linear-time tests. We give a conceptually simple reduction from planarity testing to the problem of computing a certain construction of a 3-connected graph. This implies a linear-time planarity test. Our approach is radically different from all previous linear-time planarity tests; as key concept, we maintain a planar embedding that is 3-connected at each point in time. The algorithm computes a planar embedding if the input graph is planar and a Kuratowski-subdivision otherwise.", "title": "" }, { "docid": "36b232e486ee4c9885a51a1aefc8f12b", "text": "Graphics processing units (GPUs) are a powerful platform for building high-speed network traffic processing applications using low-cost hardware. Existing systems tap the massively parallel architecture of GPUs to speed up certain computationally intensive tasks, such as cryptographic operations and pattern matching. However, they still suffer from significant overheads due to criticalpath operations that are still being carried out on the CPU, and redundant inter-device data transfers. In this paper we present GASPP, a programmable network traffic processing framework tailored to modern graphics processors. GASPP integrates optimized GPUbased implementations of a broad range of operations commonly used in network traffic processing applications, including the first purely GPU-based implementation of network flow tracking and TCP stream reassembly. GASPP also employs novel mechanisms for tackling control flow irregularities across SIMT threads, and sharing memory context between the network interface and the GPU. Our evaluation shows that GASPP can achieve multi-gigabit traffic forwarding rates even for computationally intensive and complex network operations such as stateful traffic classification, intrusion detection, and packet encryption. Especially when consolidating multiple network applications on the same device, GASPP achieves up to 16.2× speedup compared to standalone GPU-based implementations of the same applications.", "title": "" }, { "docid": "67265d70b2d704c0ab2898c933776dc2", "text": "The intima-media thickness (IMT) of the common carotid artery (CCA) is widely used as an early indicator of cardiovascular disease (CVD). Typically, the IMT grows with age and this is used as a sign of increased risk of CVD. Beyond thickness, there is also clinical interest in identifying how the composition and texture of the intima-media complex (IMC) changed and how these textural changes grow into atherosclerotic plaques that can cause stroke. 
Clearly though texture analysis of ultrasound images can be greatly affected by speckle noise, our goal here is to develop effective despeckle noise methods that can recover image texture associated with increased rates of atherosclerosis disease. In this study, we perform a comparative evaluation of several despeckle filtering methods, on 100 ultrasound images of the CCA, based on the extracted multiscale Amplitude-Modulation Frequency-Modulation (AM-FM) texture features and visual image quality assessment by two clinical experts. Texture features were extracted from the automatically segmented IMC for three different age groups. The despeckle filters hybrid median and the homogeneous mask area filter showed the best performance by improving the class separation between the three age groups and also yielded significantly improved image quality.", "title": "" }, { "docid": "a6f3be6fa5459a927fdbc455a4a081e2", "text": "Crowdsourcing, simply referring to the act of outsourcing a task to the crowd, is one of the most important trends revolutionizing the internet and the mobile market at present. This paper is an attempt to understand the dynamic and innovative discipline of crowdsourcing by developing a critical success factor model for it. The critical success factor model is based on the case study analysis of the mobile phone based crowdsourcing initiatives in Africa and the available literature on outsourcing, crowdsourcing and technology adoption. The model is used to analyze and hint at some of the critical attributes of a successful crowdsourcing initiative focused on socio-economic development of societies. The broader aim of the paper is to provide academicians, social entrepreneurs, policy makers and other practitioners with a set of recommended actions and an overview of the important considerations to be kept in mind while implementing a crowdsourcing initiative.", "title": "" }, { "docid": "3ef36b8675faf131da6cbc4d94f0067e", "text": "The staggering amount of streaming time series coming from the real world calls for more efficient and effective online modeling solution. For time series modeling, most existing works make some unrealistic assumptions such as the input data is of fixed length or well aligned, which requires extra effort on segmentation or normalization of the raw streaming data. Although some literature claim their approaches to be invariant to data length and misalignment, they are too time-consuming to model a streaming time series in an online manner. We propose a novel and more practical online modeling and classification scheme, DDE-MGM, which does not make any assumptions on the time series while maintaining high efficiency and state-of-the-art performance. The derivative delay embedding (DDE) is developed to incrementally transform time series to the embedding space, where the intrinsic characteristics of data is preserved as recursive patterns regardless of the stream length and misalignment. Then, a non-parametric Markov geographic model (MGM) is proposed to both model and classify the pattern in an online manner. Experimental results demonstrate the effectiveness and superior classification accuracy of the proposed DDE-MGM in an online setting as compared to the state-of-the-art.", "title": "" }, { "docid": "dbcfb877dae759f9ad1e451998d8df38", "text": "Detection and tracking of humans in video streams is important for many applications. 
We present an approach to automatically detect and track multiple, possibly partially occluded humans in a walking or standing pose from a single camera, which may be stationary or moving. A human body is represented as an assembly of body parts. Part detectors are learned by boosting a number of weak classifiers which are based on edgelet features. Responses of part detectors are combined to form a joint likelihood model that includes an analysis of possible occlusions. The combined detection responses and the part detection responses provide the observations used for tracking. Trajectory initialization and termination are both automatic and rely on the confidences computed from the detection responses. An object is tracked by data association and meanshift methods. Our system can track humans with both inter-object and scene occlusions with static or non-static backgrounds. Evaluation results on a number of images and videos and comparisons with some previous methods are given.", "title": "" }, { "docid": "9bf99d48bc201147a9a9ad5af547a002", "text": "Consider a biped evolving in the sagittal plane. The unexpected rotation of the supporting foot can be avoided by controlling the zero moment point (ZMP). The objective of this study is to propose and analyze a control strategy for simultaneously regulating the position of the ZMP and the joints of the robot. If the tracking requirements were posed in the time domain, the problem would be underactuated in the sense that the number of inputs would be less than the number of outputs. To get around this issue, the proposed controller is based on a path-following control strategy, previously developed for dealing with the underactuation present in planar robots without actuated ankles. In particular, the control law is defined in such a way that only the kinematic evolution of the robot's state is regulated, but not its temporal evolution. The asymptotic temporal evolution of the robot is completely defined through a one degree-of-freedom subsystem of the closed-loop model. Since the ZMP is controlled, bipedal walking that includes a prescribed rotation of the foot about the toe can also be considered. Simple analytical conditions are deduced that guarantee the existence of a periodic motion and the convergence toward this motion.", "title": "" }, { "docid": "e982954841e753aa0dd4f66fe2eb4f7a", "text": "Background. Observational studies suggest that people who consume more fruits and vegetables containing beta carotene have somewhat lower risks of cancer and cardiovascular disease, and earlier basic research suggested plausible mechanisms. Because large randomized trials of long duration were necessary to test this hypothesis directly, we conducted a trial of beta carotene supplementation. Methods. In a randomized, double-blind, placebo-controlled trial of beta carotene (50 mg on alternate days), we enrolled 22,071 male physicians, 40 to 84 years of age, in the United States; 11 percent were current smokers and 39 percent were former smokers at the beginning of the study in 1982. By December 31, 1995, the scheduled end of the study, fewer than 1 percent had been lost to followup, and compliance was 78 percent in the group that received beta carotene. Results. Among 11,036 physicians randomly assigned to receive beta carotene and 11,035 assigned to receive placebo, there were virtually no early or late differences in the overall incidence of malignant neoplasms or cardiovascular disease, or in overall mortality. 
In the beta carotene group, 1273 men had any malignant neoplasm (except nonmelanoma skin cancer), as compared with 1293 in the placebo group (relative risk, 0.98; 95 percent confidence interval, 0.91 to 1.06). There were also no significant differences in the number of cases of lung cancer (82 in the beta carotene group vs. 88 in the placebo group); the number of deaths from cancer (386 vs. 380), deaths from any cause (979 vs. 968), or deaths from cardiovascular disease (338 vs. 313); the number of men with myocardial infarction (468 vs. 489); the number with stroke (367 vs. 382); or the number with any one of the previous three end points (967 vs. 972). Among current and former smokers, there were also no significant early or late differences in any of these end points. Conclusions. In this trial among healthy men, 12 years of supplementation with beta carotene produced neither benefit nor harm in terms of the incidence of malignant neoplasms, cardiovascular disease, or death from all causes. (N Engl J Med 1996;334:1145-9.) © 1996, Massachusetts Medical Society. From the Divisions of Preventive Medicine (C.H.H., J.E.B., J.E.M., N.R.C., C.B., F.L., J.M.G., P.M.R.) and Cardiovascular Medicine (J.M.G., P.M.R.) and the Channing Laboratory (M.S., B.R., W.W.), Department of Medicine, Brigham and Women’s Hospital; the Department of Ambulatory Care and Prevention, Harvard Medical School (C.H.H., J.E.B., N.R.C.); and the Departments of Epidemiology (C.H.H., J.E.B., M.S., W.W.), Biostatistics (B.R.), and Nutrition (M.S., W.W.), Harvard School of Public Health — all in Boston; and the Imperial Cancer Research Fund Clinical Trial Service Unit, University of Oxford, Oxford, England (R.P.). Address reprint requests to Dr. Hennekens at 900 Commonwealth Ave. E., Boston, MA 02215. Supported by grants (CA-34944, CA-40360, HL-26490, and HL-34595) from the National Institutes of Health. OBSERVATIONAL epidemiologic studies suggest that people who consume higher dietary levels of fruits and vegetables containing beta carotene have a lower risk of certain types of cancer [1,2] and cardiovascular disease [3], and basic research suggests plausible mechanisms [4-6]. It is difficult to determine from observational studies, however, whether the apparent benefits are due to beta carotene itself, other nutrients in beta carotene-rich foods, other dietary habits, or other, nondietary lifestyle characteristics [7]. Long-term, large, randomized trials can provide a direct test of the efficacy of beta carotene in the prevention of cancer or cardiovascular disease [8]. For cancer, such trials should ideally last longer than the latency period (at least 5 to 10 years) of many types of cancer. A trial lasting 10 or more years could allow a sufficient period of latency and an adequate number of cancers for the detection of even a small reduction in overall risk due to supplementation with beta carotene. Two large, randomized, placebo-controlled trials in well-nourished populations (primarily cigarette smokers) have been reported. The Alpha-Tocopherol, Beta Carotene (ATBC) Cancer Prevention Study, a placebo-controlled trial, assigned 29,000 Finnish male smokers to receive beta carotene, vitamin E, both active agents, or neither, for an average of six years [9]. 
The Beta-Carotene and Retinol Efficacy Trial (CARET) enrolled 18,000 men and women at high risk for lung cancer because of a history of cigarette smoking or occupational exposure to asbestos; this trial evaluated combined treatment with beta carotene and retinol for an average of less than four years [10]. Both studies found no benefits of such supplementation in terms of the incidence of cancer or cardiovascular disease; indeed, both found somewhat higher rates of lung cancer and cardiovascular disease among subjects given beta carotene. The estimated excess risks were small, and it remains unclear whether beta carotene was truly harmful. Moreover, since the duration of these studies was relatively short, they leave open the possibility that benefit, especially in terms of cancer, would become evident with longer treatment and follow-up [11]. In this report, we describe the findings of the beta carotene component of the Physicians’ Health Study, a randomized trial in which 22,071 U.S. male physicians were treated and followed for an average of 12 years.", "title": "" }, { "docid": "1d44e13375e1b647fed4dbf661d80ec4", "text": "Designing and implementing efficient, provably correct parallel neural network processing is challenging. Existing high-level parallel abstractions like MapReduce are insufficiently expressive while low-level tools like MPI and Pthreads leave ML experts repeatedly solving the same design challenges. However, the diversity and large-scale data size have posed a significant challenge to construct a flexible and high-performance implementation of deep learning neural networks. To improve the performance and maintain the scalability, we present CNNLab, a novel deep learning framework using GPU and FPGA-based accelerators. CNNLab provides a uniform programming model to users so that the hardware implementation and the scheduling are invisible to the programmers. At runtime, CNNLab leverages the trade-offs between GPU and FPGA before offloading the tasks to the accelerators. Experimental results on the state-of-the-art Nvidia K40 GPU and Altera DE5 FPGA board demonstrate that the CNNLab can provide a universal framework with efficient support for diverse applications without increasing the burden of the programmers. Moreover, we analyze the detailed quantitative performance, throughput, power, energy, and performance density for both approaches. Experimental results leverage the trade-offs between GPU and FPGA and provide useful practical experiences for the deep learning research community.", "title": "" }, { "docid": "2710b8b13436aae826f89f9b48fc02bd", "text": "The Winograd Schema Challenge is an alternative to the Turing Test that may provide a more meaningful measure of machine intelligence. It poses a set of coreference resolution problems that cannot be solved without human-like reasoning. In this paper, we take the view that the solution to such problems lies in establishing discourse coherence. Specifically, we examine two types of rhetorical relations that can be used to establish discourse coherence: positive and negative correlation. 
We introduce a framework for reasoning about correlation between sentences, and show how this framework can be used to justify solutions to some Winograd Schema problems.", "title": "" }, { "docid": "db1c084ddbe345fe3c8e400e295830c8", "text": "This article is a single-source introduction to the emerging concept of smart cities. It can be used for familiarizing researchers with the vast scope of research possible in this application domain. The smart city is primarily a concept, and there is still not a clear and consistent definition among practitioners and academia. As a simplistic explanation, a smart city is a place where traditional networks and services are made more flexible, efficient, and sustainable with the use of information, digital, and telecommunication technologies to improve the city's operations for the benefit of its inhabitants. Smart cities are greener, safer, faster, and friendlier. The different components of a smart city include smart infrastructure, smart transportation, smart energy, smart health care, and smart technology. These components are what make the cities smart and efficient. Information and communication technologies (ICT) are enabling keys for transforming traditional cities into smart cities. Two closely related emerging technology frameworks, the Internet of Things (IoT) and big data (BD), make smart cities efficient and responsive. The technology has matured enough to allow smart cities to emerge. However, much is still needed in terms of physical infrastructure; in a smart city, the digital technologies translate into better public services for inhabitants and better use of resources while reducing environmental impacts. One of the formal definitions of the smart city is the following: a city \"connecting the physical infrastructure, the information-technology infrastructure, the social infrastructure, and the business infrastructure to leverage the collective intelligence of the city\". Another formal and comprehensive definition is \"a smart sustainable city is an innovative city that uses information and communication technologies (ICTs) and other means to improve quality of life, efficiency of urban operations and services, and competitiveness, while ensuring that it meets the needs of present and future generations with respect to economic, social and environmental aspects\". Any combination of various smart components can make cities smart. A city need not have all the components to be labeled as smart. The number of smart components depends on the cost and available technology.", "title": "" }, { "docid": "f7e5c139bc044683bd28840434212cf7", "text": "Across many scientific domains, there is a common need to automatically extract a simplified view or coarse-graining of how a complex system's components interact. This general task is called community detection in networks and is analogous to searching for clusters in independent vector data. It is common to evaluate the performance of community detection algorithms by their ability to find so-called ground truth communities. This works well in synthetic networks with planted communities because these networks' links are formed explicitly based on those known communities. However, there are no planted communities in real-world networks. Instead, it is standard practice to treat some observed discrete-valued node attributes, or metadata, as ground truth. We show that metadata are not the same as ground truth and that treating them as such induces severe theoretical and practical problems. 
We prove that no algorithm can uniquely solve community detection, and we prove a general No Free Lunch theorem for community detection, which implies that there can be no algorithm that is optimal for all possible community detection tasks. However, community detection remains a powerful tool and node metadata still have value, so a careful exploration of their relationship with network structure can yield insights of genuine worth. We illustrate this point by introducing two statistical techniques that can quantify the relationship between metadata and community structure for a broad class of models. We demonstrate these techniques using both synthetic and real-world networks, and for multiple types of metadata and community structures.", "title": "" } ]
scidocsrr
c50a33a06e870f6576d66f07f9e91b8b
FBI-Pose: Towards Bridging the Gap between 2D Images and 3D Human Poses using Forward-or-Backward Information
[ { "docid": "d7d0fa6279b356d37c2f64197b3d721d", "text": "Estimating the pose of a human in 3D given an image or a video has recently received significant attention from the scientific community. The main reasons for this trend are the ever increasing new range of applications (e.g., humanrobot interaction, gaming, sports performance analysis) which are driven by current technological advances. Although recent approaches have dealt with several challenges and have reported remarkable results, 3D pose estimation remains a largely unsolved problem because real-life applications impose several challenges which are not fully addressed by existing methods. For example, estimating the 3D pose of multiple people in an outdoor environment remains a largely unsolved problem. In this paper, we review the recent advances in 3D human pose estimation from RGB images or image sequences. We propose a taxonomy of the approaches based on the input (e.g., single image or video, monocular or multi-view) and in each case we categorize the methods according to their key characteristics. To provide an overview of the current capabilities, we conducted an extensive experimental evaluation of state-of-the-art approaches in a synthetic dataset created specifically for this task, which along with its ground truth is made publicly available for research purposes. Finally, we provide an in-depth discussion of the insights obtained from reviewing the literature and the results of our experiments. Future directions and challenges are identified.", "title": "" }, { "docid": "a214ed60c288762210189f14a8cf8256", "text": "We propose a CNN-based approach for 3D human body pose estimation from single RGB images that addresses the issue of limited generalizability of models trained solely on the starkly limited publicly available 3D pose data. Using only the existing 3D pose data and 2D pose data, we show state-of-the-art performance on established benchmarks through transfer of learned features, while also generalizing to in-the-wild scenes. We further introduce a new training set for human body pose estimation from monocular images of real humans that has the ground truth captured with a multi-camera marker-less motion capture system. It complements existing corpora with greater diversity in pose, human appearance, clothing, occlusion, and viewpoints, and enables an increased scope of augmentation. We also contribute a new benchmark that covers outdoor and indoor scenes, and demonstrate that our 3D pose dataset shows better in-the-wild performance than existing annotated data, which is further improved in conjunction with transfer learning from 2D pose data. All in all, we argue that the use of transfer learning of representations in tandem with algorithmic and data contributions is crucial for general 3D body pose estimation.", "title": "" }, { "docid": "bf8cc3fc591758f4928568413a507530", "text": "We propose a unified formulation for the problem of 3D human pose estimation from a single raw RGB image that reasons jointly about 2D joint estimation and 3D pose reconstruction to improve both tasks. We take an integrated approach that fuses probabilistic knowledge of 3D human pose with a multi-stage CNN architecture and uses the knowledge of plausible 3D landmark locations to refine the search for better 2D locations. 
The entire process is trained end-to-end, is extremely efficient and obtains state-of-the-art results on Human3.6M outperforming previous approaches both on 2D and 3D errors.", "title": "" }, { "docid": "f1deb9134639fb8407d27a350be5b154", "text": "This work introduces a novel Convolutional Network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a ‘stacked hourglass’ network based on the successive steps of pooling and upsampling that are done to produce a final set of estimates. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.", "title": "" } ]
[ { "docid": "a4f942680ec22233b88b993927c0a4ac", "text": "A complex-valued convolutional network (convnet) implements the repeated application of the following composition of three operations, recursively applying the composition to an input vector of nonnegative real numbers: (1) convolution with several complex-valued vectors followed by (2) taking the absolute value of every entry of the resulting vectors followed by (3) local averaging. For processing real-valued random vectors, complex-valued convnets can be viewed as “data-driven multiscale windowed power spectra,” “data-driven multiscale windowed absolute spectra,” “data-driven multiwavelet absolute values,” or (in their most general configuration) “data-driven nonlinear multiwavelet packets.” Indeed, complex-valued convnets can calculate multiscale windowed spectra when the convnet filters are windowed complex-valued exponentials. Standard real-valued convnets, using rectified linear units (ReLUs), sigmoidal (for example, logistic or tanh) nonlinearities, max. pooling, etc., do not obviously exhibit the same exact correspondence with data-driven wavelets (whereas for complex-valued convnets, the correspondence is much more than just a vague analogy). This note develops “data-driven multiscale windowed spectra” for certain stochastic processes that are common in the modeling of time series (such as audio) and natural images (including patterns and textures). We motivate the construction of such multiscale spectra in the form of “local averages of multiwavelet absolute values” or (in the most general configuration) “nonlinear multiwavelet packets” and connect these to certain “complex-valued convolutional networks.” A textbook treatment of all concepts and terms used above and below is given by [12]. Further information is available in the original work of [7], [15], [5], [4], [19], [16], [9], [20], and [18], for example. The work of [8], [13], [17], [2], and [3] also develops complex-valued convolutional networks (convnets). Renormalization group theory and its connection to convnets is discussed by [14]; this connection is incredibly insightful, though we leave further discussion to the cited work. Our exposition relies on nothing but the basic signal processing treated by [12]. For simplicity, we first limit consideration to the special case of a doubly infinite sequence of nonnegative random variables Xk, where k ranges over the integers. This input data will be the result of convolving an unmeasured independent and identically distributed (i.i.d.) sequence Zk, where k ranges over the integers, with an unknown sequence of real numbers fk, where k ranges over the integers (this latter sequence is known as a “filter,” whereas the i.i.d. sequence is known as “white noise”): Xj = ∞", "title": "" }, { "docid": "104148028f4d0e2775274ef7d2e8b2ed", "text": "Funneling and saltation are two major illusory feedback techniques for vibration-based tactile feedback. They are often put into practice e.g. to reduce the number of vibrators to be worn on the body and thereby build a less cumbersome feedback device. Recently, these techniques have been found to be applicable to eliciting \"out of the body\" experiences as well (e.g. through user-held external objects). This paper examines the possibility of applying this phenomenon to interacting with virtual objects. Two usability experiments were run to test the effects of funneling and saltation respectively for perceiving tactile sensation from a virtual object in an augmented reality setting. 
Experimental results have shown solid evidence for phantom sensations from virtual objects with funneling, but mixed results for saltation.", "title": "" }, { "docid": "d7711dac4c6c3f1aaed7f77228a2d99d", "text": "In today's teaching and learning approaches for first-semester students, practical courses more and more often complement traditional theoretical lectures. This practical element allows an early insight into the real world of engineering, augments student motivation, and enables students to acquire soft skills early. This paper describes a new freshman introduction course into practical engineering, which has been established within the Bachelor of Science curriculum of Electrical Engineering and Information Technology of RWTH Aachen University, Germany. The course is organized as an eight-day, full-time block laboratory for over 300 freshman students, who were supervised by more than 60 tutors from 23 institutes of the Electrical Engineering Department. Based on a threefold learning concept comprising mathematical methods, MATLAB programming, and practical engineering, the students were required to transfer mathematical basics to algorithms in MATLAB in order to control LEGO Mindstorms robots. Toward this end, a new toolbox, called the \"RWTH-Mindstorms NXT Toolbox,\" was developed, which enables the robots to be controlled remotely via MATLAB from a host computer. This paper describes how the laboratory course is organized and how it induces students to think as actual engineers would in solving real-world tasks with limited resources. Evaluation results show that the project improves the students' MATLAB programming skills, enhances motivation, and enables a peer learning process.", "title": "" }, { "docid": "74ea477258944c9da5a75dad5d7d9ccf", "text": "Usability is a main quality attribute for any interactive product. Usability in touch screen-based mobile devices is essential and should be considered when launching a new product; it could be a distinguishing feature in a rushing market, as the market of mobile devices is nowadays. Traditional methods for usability measurement do not really fit the nature of these devices. There is a need for new usability evaluation methods or at least for the use of traditional evaluations in novel ways. A set of specific usability heuristics for touch screen-based mobile devices is proposed and (preliminarily) validated.", "title": "" }, { "docid": "344be59c5bb605dec77e4d7bd105d899", "text": "Recently, style transfer has received a lot of attention. While much of this research has aimed at speeding up processing, the approaches are still lacking from a principled, art historical standpoint: a style is more than just a single image or an artist, but previous work is limited to only a single instance of a style or shows no benefit from more images. Moreover, previous work has relied on a direct comparison of art in the domain of RGB images or on CNNs pre-trained on ImageNet, which requires millions of labeled object bounding boxes and can introduce an extra bias, since it has been assembled without artistic consideration. To circumvent these issues, we propose a style-aware content loss, which is trained jointly with a deep encoder-decoder network for real-time, high-resolution stylization of images and videos. We propose a quantitative measure for evaluating the quality of a stylized image and also have art historians rank patches from our approach against those from previous work. 
These and our qualitative results ranging from small image patches to megapixel stylistic images and videos show that our approach better captures the subtle nature in which a style affects content.", "title": "" }, { "docid": "878b5ea8bce77b0bcc07eb9cc5ee312f", "text": "This study aims to facilitate communication of deaf and dumb people by means of a data glove. The Turkish Sign Language translator glove is designed as a portable, low-cost and user-friendly system. The glove is equipped with flexible sensors to detect finger movements and a gyroscope to detect hand motion. The data from the sensors is analyzed with a microcontroller and 18 letters of the Turkish alphabet are successfully obtained. The Turkish Sign Language requires the use of both hands, however, this work aims to detect the entire alphabet with a single glove.", "title": "" }, { "docid": "20832ede6851f36d6a249e044c28892a", "text": "Mobile learning highly prioritizes the successful acquisition of context-aware contents from a learning server. A variant of 2D barcodes, the quick response (QR) code, which can be rapidly read using a PDA equipped with a camera and QR code reading software, is considered promising for context-aware applications. This work presents a novel QR code and handheld augmented reality (AR) supported mobile learning (m-learning) system: the handheld English language learning organization (HELLO). In the proposed English learning system, the linked information between context-aware materials and learning zones is defined in the QR codes. Each student follows the guide map displayed on the phone screen to visit learning zones and decrypt QR codes. The detected information is then sent to the learning server to request and receive context-aware learning material wirelessly. Additionally, a 3D animated virtual learning partner is embedded in the learning device based on AR technology, enabling students to complete their context-aware immersive learning. A case study and a survey conducted in a university demonstrate the effectiveness of the proposed m-learning system.", "title": "" }, { "docid": "ef7e973a5c6f9e722917a283a1f0fe52", "text": "We live in a digital society that provides a range of opportunities for virtual interaction. Consequently, emojis have become popular for clarifying online communication. This presents an exciting opportunity for psychologists, as these prolific online behaviours can be used to help reveal something unique about contemporary human behaviour.", "title": "" }, { "docid": "51a750fcc6cff4e51095aa80ce25c7d2", "text": "We present an information-theoretic framework for understanding trade-offs in unsupervised learning of deep latent-variables models using variational inference. This framework emphasizes the need to consider latent-variable models along two dimensions: the ability to reconstruct inputs (distortion) and the communication cost (rate). We derive the optimal frontier of generative models in the two-dimensional rate-distortion plane, and show how the standard evidence lower bound objective is insufficient to select between points along this frontier. However, by performing targeted optimization to learn generative models with different rates, we are able to learn many models that can achieve similar generative performance but make vastly different trade-offs in terms of the usage of the latent variable. 
Through experiments on MNIST and Omniglot with a variety of architectures, we show how our framework sheds light on many recent proposed extensions to the variational autoencoder family.", "title": "" }, { "docid": "00ec1bd8c0a3d4a5b56e83bd7c7edd51", "text": "The fresh water polyp Hydra belongs to the phylum Cnidaria, which diverged from the metazoan lineage before the appearance of bilaterians. In order to understand the evolution of apoptosis in metazoans, we have begun to elucidate the molecular cell death machinery in this model organism. Based on ESTs and the whole Hydra genome assembly, we have identified 15 caspases. We show that one is activated during apoptosis, four have characteristics of initiator caspases with N-terminal DED, CARD or DD domain and two undergo autoprocessing in vitro. In addition, we describe seven Bcl-2-like and two Bak-like proteins. For most of the Bcl-2 family proteins, we have observed mitochondrial localization. When expressed in mammalian cells, HyBak-like 1 and 2 strongly induced apoptosis. Six of the Bcl-2 family members inhibited apoptosis induced by camptothecin in mammalian cells with HyBcl-2-like 4 showing an especially strong protective effect. This protein also interacted with HyBak-like 1 in a yeast two-hybrid assay. Mutation of the conserved leucine in its BH3 domain abolished both the interaction with HyBak-like 1 and the anti-apoptotic effect. Moreover, we describe novel Hydra BH-3-only proteins. One of these interacted with Bcl-2-like 4 and induced apoptosis in mammalian cells. Our data indicate that the evolution of a complex network for cell death regulation arose at the earliest and simplest level of multicellular organization, where it exhibited a substantially higher level of complexity than in the protostome model organisms Caenorhabditis and Drosophila.", "title": "" }, { "docid": "2b9b7b218e112447fa4cdd72085d3916", "text": "A 48-year-old female patient presented with gigantomastia. The sternal notch-nipple distance was 55 cm for the right breast and 50 cm for the left. Vertical mammaplasty based on the superior pedicle was performed. The resected tissue weighed 3400 g for the right breast and 2800 g for the left breast. The outcome was excellent with respect to symmetry, shape, size, residual scars, and sensitivity of the nipple-areola complex. Longer pedicles or larger resections were not found in the literature on vertical mammaplasty applications. In our opinion, by using the vertical mammaplasty technique in gigantomastia it is possible to achieve a well-projecting shape and preserve NAC sensitivity.", "title": "" }, { "docid": "ed78f7e674427e340152e372f7592af5", "text": "Mitochondria are the main source of ultra-weak chemiluminescence generated by reactive oxygen species, which are continuously formed during the mitochondrial oxidative metabolism. Vertebrate cells show typically filamentous mitochondria associated with the microtubules of the cytoskeleton, forming together a continuous network (mitochondrial reticulum). The refractive index of both mitochondria and microtubules is higher than the surrounding cytoplasm, which results that the mitochondrial reticulum can act as an optical waveguide, i.e. electromagnetic radiation can propagate within the network. A detailed analysis of the inner structure of mitochondria shows, that they can be optically modelled as a multi-layer system with alternating indices of refraction. The parameters of this multi-layer system are dependent on the physiologic state of the mitochondria. 
The effect of the multi-layer system on electromagnetic radiation propagating along the mitochondrial reticulum is analysed by the transfer-matrix method. If induced light emission could take place in mitochondria, the multi-layer system could lead to lasing action like it has been realized in technical distributed feedback laser. Based on former reports about the influence of external illumination on the physiology of mitochondria it is speculated whether there exists some kind of long-range interaction between individual mitochondria mediated by electromagnetic radiation.", "title": "" }, { "docid": "d48e3a276417392ae14c06f4fc7927ac", "text": "A recently discovered Early Cretaceous (early late Albian) dinosaur tracksite at Parede beach (Cascais, Portugal) reveals evidence of dinoturbation and at least two sauropod trackways. One of these trackways can be classified as narrow-gauge, which represents unique evidence in the Albian of the Iberian Peninsula and provides for the improvement of knowledge of this kind of trackway and its probable trackmaker, in an age when the sauropod record is scarce. These dinosaur tracks are preserved on the upper surface of a marly limestone bed that belongs to the Galé Formation (Água Doce Member, middle to lower upper Albian). The study of thin-sections of the beds C22/24 and C26 in the Parede section has revealed a microfacies composed of foraminifers, radiolarians, ostracods, corals, bivalves, gastropods, and echinoids in a mainly wackestone texture with biomicritic matrix. These assemblages match with the lithofacies, marine molluscs, echinids, and ichnofossils sampled from the section and indicate a shallow marine, inner shelf palaeoenvironment with a shallowing-upward trend. The biofacies and the sequence analysis are compatible with the early late Albian age attributed to the tracksite. These tracks and the moderate dinoturbation index indicate sauropod activity in this palaeoenvironment. Titanosaurs can be dismissed as possible trackmakers on the basis of the narrow-gauge trackway, and probably by the kidney-shaped manus morphology and the pes-dominated configuration of the trackway. Narrow-gauge sauropod trackways have been positively associated with coastal palaeoenvironments, and the Parede tracksite supports this interpretation. In addition, this tracksite adds new data about the presence of sauropod pes-dominated trackways in cohesive substrates. As the Portuguese Cretaceous sauropod osteological remains are very scarce, the Parede tracksite yields new and relevant evidence of these dinosaurs. Furthermore, the Parede tracksite is the youngest evidence of sauropods in the Portuguese record and some of the rare evidence of sauropods in Europe during the Albian. This discovery enhances the palaeobiological data for the Early Cretaceous Sauropoda of the Iberian Peninsula, where the osteological remains of these dinosaurs are relatively scarce in this region of southwestern Europe. Therefore, this occurrence is also of overall interest due to its impact on Cretaceous Sauropoda palaeobiogeography.", "title": "" }, { "docid": "9b085f5cd0a080560d7ae17b7d4d6878", "text": "The commercial roll-type corona-electrostatic separators, which are currently employed for the recovery of metals and plastics from mm-size granular mixtures, are inappropriate for the processing of finely-grinded wastes. 
The aim of the present work is to demonstrate that a belt-type corona-electrostatic separator could be an appropriate solution for the selective sorting of conductive and non-conductive products contained in micronized wastes. The experiments are carried out on a laboratory-scale multi-functional electrostatic separator designed by the authors. The corona discharge is generated between a wire-type dual electrode and the surface of the metal belt conveyor. The distance between the wire and the belt and the applied voltage are adjusted to values that permit particles charging without having an electric wind that puts them into motion on the surface of the belt. The separation is performed in the electric field generated between a high-voltage roll-type electrode (diameter 30 mm) and the grounded belt electrode. The study is conducted according to experimental design methodology, to enable the evaluation of the effects of the various factors that affect the efficiency of the separation: position of the roll-type electrode and applied high-voltage. The conclusions of this study will serve at the optimum design of an industrial belt-type corona-electrostatic separator for the recycling of metals and plastics from waste electric and electronic equipment.", "title": "" }, { "docid": "6a5bda7936b657480713705273df1537", "text": "Predicting students’ academic performance is very crucial especially for higher educational institutions. This paper designed an application to assist higher education institutions to predict their students’ academic performance at an early stage before graduation and decrease students’ dropout. The performance of the students was measured based on cumulative grade point average (CGPA) at semester eight. The students’ course scores for core and non-core courses from the first semester to the sixth semester are used as predictor variables for predicting the final CGPA8 upon graduation using Neural Network (NN), Support Vector Regression(SVR), and Linear Regression (LR). The study has verified that data mining techniques can be used in predicting students’ academic performance in higher educational institutions. All the experiments gave valid results and can be used to predict graduation CGPA. However, comparisons of the experiments were done to determine which approaches perform better than others. Generally, SVR and LR methods performed better than NN. Therefore, we recommend the adoption of SVR and LR methods to predict final CGPA8, and the models can also be used to implement Student Performance Prediction System(SPPS) in a university.", "title": "" }, { "docid": "855c5c714100a38ec9702e3c8132bc23", "text": "With the development of mobile Internet and the information technology, mobile APP is closely linked with people' s life. However, it seriously hinders the user to find a required APP that quantity overload and homogenization of APP in mobile application store. Based on the Elaboration Likelihood Model, this article has explored the effects of online reviews on APP discoverability under different degree of involvement. It is concluded that the quality of online reviews has a great influence on APP discoverability when the degree of user involvement is high, while the degree of user involvement is low, the quantity of online reviews has a great influence. This study fills up the shortcomings of the research on online reviews to APP discoverability. 
At the same time, this paper makes recommendations to the application store and the developers for improving APP discoverability, and to promote the continuous development of the application market.", "title": "" }, { "docid": "a7456ecf7af7e447cdde61f371128965", "text": "For most deep learning practitioners, sequence modeling is synonymous with recurrent networks. Yet recent results indicate that convolutional architectures can outperform recurrent networks on tasks such as audio synthesis and machine translation. Given a new sequence modeling task or dataset, which architecture should one use? We conduct a systematic evaluation of generic convolutional and recurrent architectures for sequence modeling. The models are evaluated across a broad range of standard tasks that are commonly used to benchmark recurrent networks. Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory. We conclude that the common association between sequence modeling and recurrent networks should be reconsidered, and convolutional networks should be regarded as a natural starting point for sequence modeling tasks. To assist related work, we have made code available at http://github.com/locuslab/TCN.", "title": "" }, { "docid": "44816a4274b275be9cd7ab6a4e14a966", "text": "t-distributed Stochastic Neighborhood Embedding (t-SNE), a clustering and visualization method proposed by van der Maaten&Hinton in 2008, has rapidly become a standard tool in a number of natural sciences. Despite its overwhelming success, there is a distinct lack of mathematical foundations and the inner workings of the algorithm are not well understood. The purpose of this paper is to prove that t-SNE is able to recover well-separated clusters; more precisely, we prove that t-SNE in the `early exaggeration' phase, an optimization technique proposed by van der Maaten&Hinton (2008) and van der Maaten (2014), can be rigorously analyzed. As a byproduct, the proof suggests novel ways for setting the exaggeration parameter $\\alpha$ and step size $h$. Numerical examples illustrate the effectiveness of these rules: in particular, the quality of embedding of topological structures (e.g. the swiss roll) improves. We also discuss a connection to spectral clustering methods.", "title": "" }, { "docid": "0c6c5fe1e81451ee5a7b4c7c4a37d423", "text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.03.028 ⇑ Corresponding author. Tel./fax: +98 2182883637. E-mail addresses: ar_hassanzadeh@modares.ac.ir com (A. Hassanzadeh), ftmh_kanani@yahoo.com (F. K (S. Elahi). 1 Measuring e-learning systems success. In the era of internet, universities and higher education institutions are increasingly tend to provide e-learning. For suitable planning and more enjoying the benefits of this educational approach, a model for measuring success of e-learning systems is essential. So in this paper, we try to survey and present a model for measuring success of e-learning systems in universities. For this purpose, at first, according to literature review, a conceptual model was designed. Then, based on opinions of 33 experts, and assessing their suggestions, research indicators were finalized. 
After that, to examine the relationships between components and finalize the proposed model, a case study was done in 5 universities: Amir Kabir University, Tehran University, Shahid Beheshti University, Iran University of Science & Technology and Khaje Nasir Toosi University of Technology. Finally, by analyzing questionnaires completed by 369 instructors, students and alumni, which were e-learning systems user, the final model (MELSS Model). 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "3510bcd9d52729766e2abe2111f8be95", "text": "Metaphors are common elements of language that allow us to creatively stretch the limits of word meaning. However, metaphors vary in their degree of novelty, which determines whether people must create new meanings on-line or retrieve previously known metaphorical meanings from memory. Such variations affect the degree to which general cognitive capacities such as executive control are required for successful comprehension. We investigated whether individual differences in executive control relate to metaphor processing using eye movement measures of reading. Thirty-nine participants read sentences including metaphors or idioms, another form of figurative language that is more likely to rely on meaning retrieval. They also completed the AX-CPT, a domain-general executive control task. In Experiment 1, we examined sentences containing metaphorical or literal uses of verbs, presented with or without prior context. In Experiment 2, we examined sentences containing idioms or literal phrases for the same participants to determine whether the link to executive control was qualitatively similar or different to Experiment 1. When metaphors were low familiar, all people read verbs used as metaphors more slowly than verbs used literally (this difference was smaller for high familiar metaphors). Executive control capacity modulated this pattern in that high executive control readers spent more time reading verbs when a prior context forced a particular interpretation (metaphorical or literal), and they had faster total metaphor reading times when there was a prior context. Interestingly, executive control did not relate to idiom processing for the same readers. Here, all readers had faster total reading times for high familiar idioms than literal phrases. Thus, executive control relates to metaphor but not idiom processing for these readers, and for the particular metaphor and idiom reading manipulations presented.", "title": "" } ]
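The sequence-modeling passage above (docid a7456ecf7af7e447cdde61f371128965) argues that convolutional architectures built from causal, dilated 1-D convolutions can replace recurrent networks for sequence tasks. A minimal Python/NumPy sketch of that core operation is given below, assuming simple zero left-padding; the models released at github.com/locuslab/TCN additionally use residual blocks, weight normalization, and dropout, none of which is reproduced here.

    import numpy as np

    def causal_dilated_conv1d(x, w, dilation=1):
        # Causal 1-D convolution: y[t] depends only on x[t], x[t-d], x[t-2d], ...
        k = len(w)
        pad = (k - 1) * dilation
        xp = np.concatenate([np.zeros(pad), x])  # left-pad so no future samples leak in
        y = np.zeros(len(x))
        for t in range(len(x)):
            for i in range(k):
                y[t] += w[i] * xp[pad + t - i * dilation]
        return y

    # Stacking layers with doubling dilations grows the receptive field exponentially.
    x = np.random.randn(32)
    h = x
    for d in (1, 2, 4, 8):
        h = np.maximum(causal_dilated_conv1d(h, np.array([0.5, 0.3, 0.2]), dilation=d), 0.0)  # ReLU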
scidocsrr
efe6890e7d308875c177be396c3753e2
Motivation to learn: an overview of contemporary theories
[ { "docid": "f1c00253a57236ead67b013e7ce94a5e", "text": "A meta-analysis of 128 studies examined the effects of extrinsic rewards on intrinsic motivation. As predicted, engagement-contingent, completion-contingent, and performance-contingent rewards significantly undermined free-choice intrinsic motivation (d = -0.40, -0.36, and -0.28, respectively), as did all rewards, all tangible rewards, and all expected rewards. Engagement-contingent and completion-contingent rewards also significantly undermined self-reported interest (d = -0.15, and -0.17), as did all tangible rewards and all expected rewards. Positive feedback enhanced both free-choice behavior (d = 0.33) and self-reported interest (d = 0.31). Tangible rewards tended to be more detrimental for children than college students, and verbal rewards tended to be less enhancing for children than college students. The authors review 4 previous meta-analyses of this literature and detail how this study's methods, analyses, and results differed from the previous ones.", "title": "" } ]
[ { "docid": "c296244ea4283a43623d3a3aabd4d672", "text": "With growing interest in Chinese Language Processing, numerous NLP tools (e.g., word segmenters, part-of-speech taggers, and parsers) for Chinese have been developed all over the world. However, since no large-scale bracketed corpora are available to the public, these tools are trained on corpora with different segmentation criteria, part-of-speech tagsets and bracketing guidelines, and therefore, comparisons are difficult. As a first step towards addressing this issue, we have been preparing a large bracketed corpus since late 1998. The first two installments of the corpus, 250 thousand words of data, fully segmented, POS-tagged and syntactically bracketed, have been released to the public via LDC (www.ldc.upenn.edu). In this paper, we discuss several Chinese linguistic issues and their implications for our treebanking efforts and how we address these issues when developing our annotation guidelines. We also describe our engineering strategies to improve speed while ensuring annotation quality.", "title": "" }, { "docid": "1af6549bfd46ab084143e91078a04151", "text": "The advances in 3D data acquisition techniques, graphics hardware, and 3D data modeling and visualizing techniques have led to the proliferation of 3D models. This has made the searching for specific 3D models a vital issue. Techniques for effective and efficient content-based retrieval of 3D models have therefore become an essential research topic. In this paper, a novel feature, called elevation descriptor, is proposed for 3D model retrieval. The elevation descriptor is invariant to translation and scaling of 3D models and it is robust for rotation. First, six elevations are obtained to describe the altitude information of a 3D model from six different views. Each elevation is represented by a gray-level image which is decomposed into several concentric circles. The elevation descriptor is obtained by taking the difference between the altitude sums of two successive concentric circles. An efficient similarity matching method is used to find the best match for an input model. Experimental results show that the proposed method is superior to other descriptors, including spherical harmonics, the MPEG-7 3D shape spectrum descriptor, and D2. 2006 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "f9232e4a2d18a4cf6858b5739434273f", "text": "Face spoofing detection (i.e. face anti-spoofing) is emerging as a new research area and has already attracted a good number of works during the past five years. This paper addresses for the first time the key problem of the variation in the input image quality and resolution in face anti-spoofing. In contrast to most existing works aiming at extracting multiscale descriptors from the original face images, we derive a new multiscale space to represent the face images before texture feature extraction. The new multiscale space representation is derived through multiscale filtering. Three multiscale filtering methods are considered including Gaussian scale space, Difference of Gaussian scale space and Multiscale Retinex. 
Extensive experiments on three challenging and publicly available face anti-spoofing databases demonstrate the effectiveness of our proposed multiscale space representation in improving the performance of face spoofing detection based on gray-scale and color texture descriptors.", "title": "" }, { "docid": "e2cd9538192d717a9eaef6344cf0371e", "text": "Device-to-device (D2D) communication commonly refers to a type of technology that enable devices to communicate directly with each other without communication infrastructures such as access points (APs) or base stations (BSs). Bluetooth and WiFi-Direct are the two most popular D2D techniques, both working in the unlicensed industrial, scientific and medical (ISM) bands. Cellular networks, on the other hand, do not support direct over-the-air communications between users and devices. However, with the emergence of context-aware applications and the accelerating growth of Machine-to-Machine (M2M) applications, D2D communication plays an increasingly important role. It facilitates the discovery of geographically close devices, and enables direct communications between these proximate devices, which improves communication capability and reduces communication delay and power consumption. To embrace the emerging market that requires D2D communications, mobile operators and vendors are accepting D2D as a part of the fourth generation (4G) Long Term Evolution (LTE)-Advanced standard in 3rd Generation Partnership Project (3GPP) Release 12.", "title": "" }, { "docid": "f1b1dc51cf7a6d8cb3b644931724cad6", "text": "OBJECTIVE\nTo evaluate the curing profile of bulk-fill resin-based composites (RBC) using micro-Raman spectroscopy (μRaman).\n\n\nMETHODS\nFour bulk-fill RBCs were compared to a conventional RBC. RBC blocks were light-cured using a polywave LED light-curing unit. The 24-h degree of conversion (DC) was mapped along a longitudinal cross-section using μRaman. Curing profiles were constructed and 'effective' (>90% of maximum DC) curing parameters were calculated. A statistical linear mixed effects model was constructed to analyze the relative effect of the different curing parameters.\n\n\nRESULTS\nCuring efficiency differed widely with the flowable bulk-fill RBCs presenting a significantly larger 'effective' curing area than the fibre-reinforced RBC, which on its turn revealed a significantly larger 'effective' curing area than the full-depth bulk-fill and conventional (control) RBC. A decrease in 'effective' curing depth within the light beam was found in the same order. Only the flowable bulk-fill RBCs were able to cure 'effectively' at a 4-mm depth for the whole specimen width (up to 4mm outside the light beam). All curing parameters were found to statistically influence the statistical model and thus the curing profile, except for the beam inhomogeneity (regarding the position of the 410-nm versus that of 470-nm LEDs) that did not significantly affect the model for all RBCs tested.\n\n\nCONCLUSIONS\nMost of the bulk-fill RBCs could be cured up to at least a 4-mm depth, thereby validating the respective manufacturer's recommendations.\n\n\nCLINICAL SIGNIFICANCE\nAccording to the curing profiles, the orientation and position of the light guide is less critical for the bulk-fill RBCs than for the conventional RBC.", "title": "" }, { "docid": "cb9a54b8eeb6ca14bdbdf8ee3faa8bdb", "text": "The problem of auto-focusing has been studied for long, but most techniques found in literature do not always work well for low-contrast images. 
In this paper, a robust focus measure based on the energy of the image is proposed. It performs equally well on ordinary and low-contrast images. In addition, it is computationally efficient.", "title": "" }, { "docid": "fd62cb306e6e39e7ead79696591746b2", "text": "Many data mining techniques have been proposed for mining useful patterns in text documents. However, how to effectively use and update discovered patterns is still an open research issue, especially in the domain of text mining. Since most existing text mining methods adopted term-based approaches, they all suffer from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern (or phrase)-based approaches should perform better than the term-based ones, but many experiments do not support this hypothesis. This paper presents an innovative and effective pattern discovery technique which includes the processes of pattern deploying and pattern evolving, to improve the effectiveness of using and updating discovered patterns for finding relevant and interesting information. Substantial experiments on RCV1 data collection and TREC topics demonstrate that the proposed solution achieves encouraging performance.", "title": "" }, { "docid": "83da776714bf49c3bbb64976d20e26a2", "text": "Orthogonal frequency division multiplexing (OFDM) has been widely adopted in modern wireless communication systems due to its robustness against the frequency selectivity of wireless channels. For coherent detection, channel estimation is essential for receiver design. Channel estimation is also necessary for diversity combining or interference suppression where there are multiple receive antennas. In this paper, we will present a survey on channel estimation for OFDM. This survey will first review traditional channel estimation approaches based on channel frequency response (CFR). Parametric model (PM)-based channel estimation, which is particularly suitable for sparse channels, will be also investigated in this survey. Following the success of turbo codes and low-density parity check (LDPC) codes, iterative processing has been widely adopted in the design of receivers, and iterative channel estimation has received a lot of attention since that time. Iterative channel estimation will be emphasized in this survey as the emerging iterative receiver improves system performance significantly. The combination of multiple-input multiple-output (MIMO) and OFDM has been widely accepted in modern communication systems, and channel estimation in MIMO-OFDM systems will also be addressed in this survey. Open issues and future work are discussed at the end of this paper.", "title": "" }, { "docid": "5701585d5692b4b28da3132f4094fc9f", "text": "We propose a novel neural method to extract drug-drug interactions (DDIs) from texts using external drug molecular structure information. We encode textual drug pairs with convolutional neural networks and their molecular pairs with graph convolutional networks (GCNs), and then we concatenate the outputs of these two networks. In the experiments, we show that GCNs can predict DDIs from the molecular structures of drugs in high accuracy and the molecular information can enhance text-based DDI extraction by 2.39 percent points in the F-score on the DDIExtraction 2013 shared task data set.", "title": "" }, { "docid": "956690691cffe76be26bcbb45d88071c", "text": "We analyze different strategies aimed at optimizing routing policies in the Internet. 
We first show that for a simple deterministic algorithm the local properties of the network deeply influence the time needed for packet delivery between two arbitrarily chosen nodes. We next rely on a real Internet map at the autonomous system level and introduce a score function that allows us to examine different routing protocols and their efficiency in traffic handling and packet delivery. Our results suggest that actual mechanisms are not the most efficient and that they can be integrated in a more general, though not too complex, scheme.", "title": "" }, { "docid": "0c20d1fb99a0c52535dd712125b47dd9", "text": "In this paper, we explore the problem of license plate recognition in-the-wild (in the meaning of capturing data in unconstrained conditions, taken from arbitrary viewpoints and distances). We propose a method for automatic license plate recognition in-the-wild based on a geometric alignment of license plates as a preceding step for holistic license plate recognition. The alignment is done by a Convolutional Neural Network that estimates control points for rectifying the image and the following rectification step is formulated so that the whole alignment and recognition process can be assembled into one computational graph of a contemporary neural network framework, such as Tensorflow. The experiments show that the use of the aligner helps the recognition considerably: the error rate dropped from 9.6 % to 2.1 % on real-life images of license plates. The experiments also show that the solution is fast - it is capable of real-time processing even on an embedded and low-power platform (Jetson TX2). We collected and annotated a dataset of license plates called CamCar6k, containing 6,064 images with annotated corner points and ground truth texts. We make this dataset publicly available.", "title": "" }, { "docid": "d0985c38f3441ca0d69af8afaf67c998", "text": "In this paper we discuss the importance of ambiguity, uncertainty and limited information on individuals’ decision making in situations that have an impact on their privacy. We present experimental evidence from a survey study that demonstrates the impact of framing a marketing offer on participants’ willingness to accept when the consequences of the offer are uncertain and highly ambiguous.", "title": "" }, { "docid": "2d43992a8eb6e97be676c04fc9ebd8dd", "text": "Social interactions and interpersonal communication has undergone significant changes in recent years. Increasing awareness of privacy issues and events such as the Snowden disclosures have led to the rapid growth of a new generation of anonymous social networks and messaging applications. By removing traditional concepts of strong identities and social links, these services encourage communication between strangers, and allow users to express themselves without fear of bullying or retaliation.\n Despite millions of users and billions of monthly page views, there is little empirical analysis of how services like Whisper have changed the shape and content of social interactions. In this paper, we present results of the first large-scale empirical study of an anonymous social network, using a complete 3-month trace of the Whisper network covering 24 million whispers written by more than 1 million unique users. We seek to understand how anonymity and the lack of social links affect user behavior. 
We analyze Whisper from a number of perspectives, including the structure of user interactions in the absence of persistent social links, user engagement and network stickiness over time, and content moderation in a network with minimal user accountability. Finally, we identify and test an attack that exposes Whisper users to detailed location tracking. We have notified Whisper and they have taken steps to address the problem.", "title": "" }, { "docid": "999a1fbc3830ca0453760595046edb6f", "text": "This paper introduces BoostMap, a method that can significantly reduce retrieval time in image and video database systems that employ computationally expensive distance measures, metric or non-metric. Database and query objects are embedded into a Euclidean space, in which similarities can be rapidly measured using a weighted Manhattan distance. Embedding construction is formulated as a machine learning task, where AdaBoost is used to combine many simple, ID embeddings into a multidimensional embedding that preserves a significant amount of the proximity structure in the original space. Performance is evaluated in a hand pose estimation system, and a dynamic gesture recognition system, where the proposed method is used to retrieve approximate nearest neighbors under expensive image and video similarity measures: In both systems, in quantitative experiments, BoostMap significantly increases efficiency, with minimal losses in accuracy. Moreover, the experiments indicate that BoostMap compares favorably with existing embedding methods that have been employed in computer vision and database applications, i.e., FastMap and Bourgain embeddings.", "title": "" }, { "docid": "09b35c40a65a0c2c0f58deb49555000d", "text": "There are a wide range of forensic and analysis tools to examine digital evidence in existence today. Traditional tool design examines each source of digital evidence as a BLOB (binary large object) and it is up to the examiner to identify the relevant items from evidence. In the face of rapid technological advancements we are increasingly confronted with a diverse set of digital evidence and being able to identify a particular tool for conducting a specific analysis is an essential task. In this paper, we present a systematic study of contemporary forensic and analysis tools using a hypothesis based review to identify the different functionalities supported by these tools. We highlight the limitations of the forensic tools in regards to evidence corroboration and develop a case for building evidence correlation functionalities into these tools.", "title": "" }, { "docid": "533b8bf523a1fb69d67939607814dc9c", "text": "Docker is an open platform for developers and system administrators to build, ship, and run distributed applications using Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows. The main advantage is that, Docker can get code tested and deployed into production as fast as possible. Different applications can be run over Docker containers with language independency. In this paper the performance of these Docker containers are evaluated based on their system performance. That is based on system resource utilization. Different benchmarking tools are used for this. Performance based on file system is evaluated using Bonnie++. Other system resources such as CPU utilization, memory utilization etc. are evaluated based on the benchmarking code (using psutil) developed using python. 
Detail results obtained from all these tests are also included in this paper. The results include CPU utilization, memory utilization, CPU count, CPU times, Disk partition, network I/O counter etc.", "title": "" }, { "docid": "d8634beb04329e72e462df98d31b2003", "text": "Link prediction is a key technique in many applications in social networks, where potential links between entities need to be predicted. Conventional link prediction techniques deal with either homogeneous entities, e.g., people to people, item to item links, or non-reciprocal relationships, e.g., people to item links. However, a challenging problem in link prediction is that of heterogeneous and reciprocal link prediction, such as accurate prediction of matches on an online dating site, jobs or workers on employment websites, where the links are reciprocally determined by both entities that heterogeneously belong to disjoint groups. The nature and causes of interactions in these domains makes heterogeneous and reciprocal link prediction significantly different from the conventional version of the problem. In this work, we address these issues by proposing a novel learnable framework called ReHeLP, which learns heterogeneous and reciprocal knowledge from collaborative information and demonstrate its impact on link prediction. Evaluation on a large commercial online dating dataset shows the success of the proposed method and its promise for link prediction.", "title": "" }, { "docid": "2f90f1d9ffb03e54fe5a29c17c7ebe2b", "text": "Exact matching of single patterns in DNA and amino acid sequences is studied. We performed an extensive experimental comparison of algorithms presented in the literature. In addition, we introduce new variations of earlier algorithms. The results of the comparison show that the new algorithms are efficient in practice.", "title": "" }, { "docid": "19e3338e136197d9d8ab57225f762161", "text": "We study the problem of combining multiple bandit algorithms (that is, online learning algorithms with partial feedback) with the goal of creating a master algorithm that performs almost as well as the best base algorithm if it were to be run on its own. The main challenge is that when run with a master, base algorithms unavoidably receive much less feedback and it is thus critical that the master not starve a base algorithm that might perform uncompetitively initially but would eventually outperform others if given enough feedback. We address this difficulty by devising a version of Online Mirror Descent with a special mirror map together with a sophisticated learning rate scheme. We show that this approach manages to achieve a more delicate balance between exploiting and exploring base algorithms than previous works yielding superior regret bounds. Our results are applicable to many settings, such as multi-armed bandits, contextual bandits, and convex bandits. As examples, we present two main applications. The first is to create an algorithm that enjoys worst-case robustness while at the same time performing much better when the environment is relatively easy. The second is to create an algorithm that works simultaneously under different assumptions of the environment, such as different priors or different loss structures.", "title": "" } ]
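The Docker evaluation passage above mentions benchmarking code written in Python with psutil to capture CPU utilization, memory utilization, CPU count, CPU times, disk partitions, and network I/O counters. The snippet below is an illustrative sketch of how such host metrics can be sampled with psutil; it is not the authors' actual benchmark, and the one-second sampling interval is an assumption.

    import json
    import psutil

    def sample_host_metrics(interval=1.0):
        # One sample of the metric families listed in the passage above.
        return {
            "cpu_percent": psutil.cpu_percent(interval=interval),   # averaged over the interval
            "cpu_count": psutil.cpu_count(logical=True),
            "cpu_times": psutil.cpu_times()._asdict(),
            "memory": psutil.virtual_memory()._asdict(),
            "disk_partitions": [p._asdict() for p in psutil.disk_partitions()],
            "net_io": psutil.net_io_counters()._asdict(),
        }

    if __name__ == "__main__":
        print(json.dumps(sample_host_metrics(), indent=2, default=str))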
scidocsrr
3ed4ab459d441b3f3005912ea020cd85
Cognitive Status and Form of Reference in Multimodal Human-Computer Interaction
[ { "docid": "850854aeae187ffdd74c56135d9a4d5b", "text": "Dynamic interactive maps with transparent but powerful human interface capabilities are beginning to emerge for a variety of geographical information systems, including ones situated on portables for travelers, students, business and service people, and others working in field settings. In the present research, interfaces supporting spoken, pen-based, and multimodal input were analyze for their potential effectiveness in interacting with this new generation of map systems. Input modality (speech, writing, multimodal) and map display format (highly versus minimally structured) were varied in a within-subject factorial design as people completed realistic tasks with a simulated map system. The results identified a constellation of performance difficulties associated with speech-only map interactions, including elevated performance errors, spontaneous disfluencies, and lengthier task completion t ime-problems that declined substantially when people could interact multimodally with the map. These performance advantages also mirrored a strong user preference to interact multimodally. The error-proneness and unacceptability of speech-only input to maps was attributed in large part to people's difficulty generating spoken descriptions of spatial location. Analyses also indicated that map display format can be used to minimize performance errors and disfluencies, and map interfaces that guide users' speech toward brevity can nearly eliminate disfiuencies. Implications of this research are discussed for the design of high-performance multimodal interfaces for future map", "title": "" } ]
[ { "docid": "ba67c3006c6167550bce500a144e63f1", "text": "This paper provides an overview of different methods for evaluating automatic summarization systems. The challenges in evaluating summaries are characterized. Both intrinsic and extrinsic approaches are discussed. Methods for assessing informativeness and coherence are described. The advantages and disadvantages of specific methods are assessed, along with criteria for choosing among them. The paper concludes with some suggestions for future directions.", "title": "" }, { "docid": "766b86047fd403586bd3339d46cf3036", "text": "A hybrid phase shifted full bridge (PSFB) and LLC half bridge (HB) dc-dc converter for low-voltage and high-current output applications is proposed in this paper. The PSFB shares its lagging leg with the LLC-HB and their outputs are parallel connected. When the output current is small, the energy of LLC circuit in combination with the energy stored in the leakage inductance of PSFB's transformer can help the lagging leg switches to realize ZVS turn on, which can reduce voltage stress and avoid annoying voltage spikes over switches. For the power distribution at rated load, the PSFB converter undergoes most of the power while the LLC-HB converter working as an auxiliary part converts only a small portion of the total power. To improve the conversion efficiency, synchronous rectification technique for the PSFB dc-dc converter is implemented. The design principle is given in view of ZVS for lagging leg switches and low transconductance of LLC converter. The validity of the proposed converter has been verified by experimental results of a 2.5kW prototype.", "title": "" }, { "docid": "9bbc279974aaa899d12fee26948ce029", "text": "Data-flow testing (DFT) is a family of testing strategies designed to verify the interactions between each program variable’s definition and its uses. Such a test objective of interest is referred to as a def-use pair. DFT selects test data with respect to various test adequacy criteria (i.e., data-flow coverage criteria) to exercise each pair. The original conception of DFT was introduced by Herman in 1976. Since then, a number of studies have been conducted, both theoretically and empirically, to analyze DFT’s complexity and effectiveness. In the past four decades, DFT has been continuously concerned, and various approaches from different aspects are proposed to pursue automatic and efficient data-flow testing. This survey presents a detailed overview of data-flow testing, including challenges and approaches in enforcing and automating it: (1) it introduces the data-flow analysis techniques that are used to identify def-use pairs; (2) it classifies and discusses techniques for data-flow-based test data generation, such as search-based testing, random testing, collateral-coverage-based testing, symbolic-execution-based testing, and model-checking-based testing; (3) it discusses techniques for tracking data-flow coverage; (4) it presents several DFT applications, including software fault localization, web security testing, and specification consistency checking; and (5) it summarizes recent advances and discusses future research directions toward more practical data-flow testing.", "title": "" }, { "docid": "139915d2aaf3698093b73ca81ebd7ad8", "text": "When caring for patients, it is essential that nurses are using the current best practice. To determine what this is, nurses must be able to read research critically. 
But for many qualified and student nurses, the terminology used in research can be difficult to understand, thus making critical reading even more daunting. It is imperative in nursing that care has its foundations in sound research, and it is essential that all nurses have the ability to critically appraise research to identify what is best practice. This article is a step-by-step approach to critiquing quantitative research to help nurses demystify the process and decode the terminology.", "title": "" }, { "docid": "13177a7395eed80a77571bd02a962bc9", "text": "Orexin-A and orexin-B are neuropeptides originally identified as endogenous ligands for two orphan G-protein-coupled receptors. Orexin neuropeptides (also known as hypocretins) are produced by a small group of neurons in the lateral hypothalamic and perifornical areas, a region classically implicated in the control of mammalian feeding behavior. Orexin neurons project throughout the central nervous system (CNS) to nuclei known to be important in the control of feeding, sleep-wakefulness, neuroendocrine homeostasis, and autonomic regulation. orexin mRNA expression is upregulated by fasting and insulin-induced hypoglycemia. C-fos expression in orexin neurons, an indicator of neuronal activation, is positively correlated with wakefulness and negatively correlated with rapid eye movement (REM) and non-REM sleep states. Intracerebroventricular administration of orexins has been shown to significantly increase food consumption, wakefulness, and locomotor activity in rodent models. Conversely, an orexin receptor antagonist inhibits food consumption. Targeted disruption of the orexin gene in mice produces a syndrome remarkably similar to human and canine narcolepsy, a sleep disorder characterized by excessive daytime sleepiness, cataplexy, and other pathological manifestations of the intrusion of REM sleep-related features into wakefulness. Furthermore, orexin knockout mice are hypophagic compared with weight and age-matched littermates, suggesting a role in modulating energy metabolism. These findings suggest that the orexin neuropeptide system plays a significant role in feeding and sleep-wakefulness regulation, possibly by coordinating the complex behavioral and physiologic responses of these complementary homeostatic functions.", "title": "" }, { "docid": "78007b3276e795d76b692b40c4808c51", "text": "The construct of trait emotional intelligence (trait EI or trait emotional self-efficacy) provides a comprehensive operationalization of emotion-related self-perceptions and dispositions. In the first part of the present study (N=274, 92 males), we performed two joint factor analyses to determine the location of trait EI in Eysenckian and Big Five factor space. The results showed that trait EI is a compound personality construct located at the lower levels of the two taxonomies. In the second part of the study, we performed six two-step hierarchical regressions to investigate the incremental validity of trait EI in predicting, over and above the Giant Three and Big Five personality dimensions, six distinct criteria (life satisfaction, rumination, two adaptive and two maladaptive coping styles). Trait EI incrementally predicted four criteria over the Giant Three and five criteria over the Big Five. 
The discussion addresses common questions about the operationalization of emotional intelligence as a personality trait.", "title": "" }, { "docid": "9516d06751aa51edb0b0a3e2b75e0bde", "text": "This paper presents a pilot-based compensation algorithm for mitigation of frequency-selective I/Q imbalances in direct-conversion OFDM transmitters. By deploying a feedback loop from RF to baseband, together with a properly-designed pilot signal structure, the I/Q imbalance properties of the transmitter are efficiently estimated in a subcarrier-wise manner. Based on the obtained I/Q imbalance knowledge, the imbalance effects on the actual transmit waveform are then mitigated by baseband pre-distortion acting on the mirror-subcarrier signals. The compensation performance of the proposed structure is analyzed using extensive computer simulations, indicating that very high image rejection ratios can be achieved in practical system set-ups with reasonable pilot signal lengths.", "title": "" }, { "docid": "44f0a3e73ce1da840546600fde7fbabd", "text": "Suggested Citation: Berens, Johannes; Oster, Simon; Schneider, Kerstin; Burghoff, Julian (2018) : Early Detection of Students at Risk Predicting Student Dropouts Using Administrative Student Data and Machine Learning Methods, Schumpeter Discussion Papers, No. 2018-006, University of Wuppertal, Schumpeter School of Business and Economics, Wuppertal, http://nbn-resolving.de/urn:nbn:de:hbz:468-20180719-085420-5", "title": "" }, { "docid": "c3f943da2d68ee7980972a77c685fde6", "text": "*Correspondence: pwitbooi@uwc.ac.za Department of Mathematics and Applied Mathematics, University of the Western Cape, Private Bag X17, Bellville, 7535, Republic of South Africa Abstract Antiretroviral treatment (ART) and oral pre-exposure prophylaxis (PrEP) have recently been used efficiently in management of HIV infection. Pre-exposure prophylaxis consists in the use of an antiretroviral medication to prevent the acquisition of HIV infection by uninfected individuals. We propose a new model for the transmission of HIV/AIDS including ART and PrEP. Our model can be used to test the effects of ART and of the uptake of PrEP in a given population, as we demonstrate through simulations. The model can also be used to estimate future projections of HIV prevalence. We prove global stability of the disease-free equilibrium. We also prove global stability of the endemic equilibrium for the most general case of the model, i.e., which allows for PrEP individuals to default. We include insightful simulations based on recently published South-African data.", "title": "" }, { "docid": "11bc0abc0aec11c1cf189eb23fd1be9d", "text": "Web spamming describes behavior that attempts to deceive search engine’s ranking algorithms. TrustRank is a recent algorithm that can combat web spam by propagating trust among web pages. However, TrustRank propagates trust among web pages based on the number of outgoing links, which is also how PageRank propagates authority scores among Web pages. This type of propagation may be suited for propagating authority, but it is not optimal for calculating trust scores for demoting spam sites. In this paper, we propose several alternative methods to propagate trust on the web. With experiments on a real web data set, we show that these methods can greatly decrease the number of web spam sites within the top portion of the trust ranking. In addition, we investigate the possibility of propagating distrust among web pages. 
Experiments show that combining trust and distrust values can demote more spam sites than the sole use of trust values.", "title": "" }, { "docid": "00ccf224c9188cf26f1da60ec9aa741b", "text": "In recent years, distributed representations of inputs have led to performance gains in many applications by allowing statistical information to be shared across inputs. However, the predicted outputs (labels, and more generally structures) are still treated as discrete objects even though outputs are often not discrete units of meaning. In this paper, we present a new formulation for structured prediction where we represent individual labels in a structure as dense vectors and allow semantically similar labels to share parameters. We extend this representation to larger structures by defining compositionality using tensor products to give a natural generalization of standard structured prediction approaches. We define a learning objective for jointly learning the model parameters and the label vectors and propose an alternating minimization algorithm for learning. We show that our formulation outperforms structural SVM baselines in two tasks: multiclass document classification and part-of-speech tagging.", "title": "" }, { "docid": "30bf6e5874bc893f8762dc3b59af552b", "text": "Video-based facial expression recognition has received significant attention in recent years due to its widespread applications. One key issue for video-based facial expression analysis in practice is how to extract dynamic features. In this paper, a novel approach is presented using histogram sequence of local Gabor binary patterns from three orthogonal planes (LGBP-TOP). In this approach, every facial expression sequence is firstly convolved with the multi-scale and multi-orientation Gabor filters to extract the Gabor Magnitude Sequences (GMSs). Then, we use local binary patterns from three orthogonal planes (LBP-TOP) on each GMS to further enhance the feature extraction. Finally, the facial expression sequence is modeled as a histogram sequence by concatenating the histogram pieces of all the local regions of all the LGBP-TOP maps. For recognition, Support Vector Machine (SVM) is exploited. Our experimental results on the extended Cohn-Kanade database (CK+) demonstrate that the proposed method has achieved the best results compared to other methods in recent years.", "title": "" }, { "docid": "5882660c4741c485caf7bda69958d266", "text": "GSM is the most wide spread mobile communications system in the world. However the security of the GSM voice traffic is not guaranteed especially over the core network. It is highly desirable to have end-to-end secure communications over the GSM voice channel. In order to achieve end-to-end security, speech must be encrypted before it enters the GSM network. A modulation scheme that enables the transmission of encrypted voice and data over the GSM voice channel was designed1. A real-time prototype is implemented demonstrating the end-to-end secure voice communications over the GSM voice channel. The modem technology presented facilitates the transmission of encrypted data and an encryption algorithm is not specified. The users may choose an algorithm and a hardware platform as necessary.", "title": "" }, { "docid": "a51e2a0a7fd84fee7b4f91b033c3e182", "text": "Background. We examined body image perception and its association with reported weight-control behavior among adolescents in the Seychelles. Methods. 
We conducted a school-based survey of 1432 students aging 11-17 years in the Seychelles. Perception of body image was assessed using both a closed-ended question (CEQ) and Stunkard's pictorial silhouettes (SPS). Voluntary attempts to change weight were also assessed. Results. A substantial proportion of the overweight students did not consider themselves as overweight (SPS: 24%, CEQ: 34%), and a substantial proportion of the normal-weight students considered themselves as too thin (SPS: 29%, CEQ: 15%). Logistic regression analysis showed that students with an accurate weight perception were more likely to have appropriate weight-control behavior. Conclusions. We found that substantial proportions of students had an inaccurate perception of their weight and that weight perception was associated with weight-control behavior. These findings point to forces that can drive the upwards overweight trends.", "title": "" }, { "docid": "d2e6aa2ab48cdd1907f3f373e0627fa8", "text": "We address the issue of speeding up the training of convolutional networks. Here we study a distributed method adapted to stochastic gradient descent (SGD). The parallel optimization setup uses several threads, each applying individual gradient descents on a local variable. We propose a new way to share information between different threads inspired by gossip algorithms and showing good consensus convergence properties. Our method called GoSGD has the advantage to be fully asynchronous and decentralized. We compared our method to the recent EASGD in [17] on CIFAR-10 show encouraging results.", "title": "" }, { "docid": "0ce4a0dfe5ea87fb87f5d39b13196e94", "text": "Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector QuantisedVariational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of “posterior collapse” -— where the latents are ignored when they are paired with a powerful autoregressive decoder -— typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.", "title": "" }, { "docid": "b96aff6851ca67c274ab4ef0121ca149", "text": "In this paper we consider the problem of human pose estimation in real-world videos of swimmers. Swimming channels allow filming swimmers simultaneously above and below the water surface with a single stationary camera. These recordings can be used to quantitatively assess the athletes' performance. The quantitative evaluation, so far, requires manual annotations of body parts in each video frame. We therefore apply the concept of CNNs in order to automatically infer the required pose information. Starting with an off-the-shelf architecture, we develop extensions to leverage activity information – in our case the swimming style of an athlete – and the continuous nature of the video recordings. 
Our main contributions are threefold: (a) We apply and evaluate a fine-tuned Convolutional Pose Machine architecture as a baseline in our very challenging aquatic environment and discuss its error modes, (b) we propose an extension to input swimming style information into the fully convolutional architecture and (c) modify the architecture for continuous pose estimation in videos. With these additions we achieve reliable pose estimates with up to +16% more correct body joint detections compared to the baseline architecture.", "title": "" }, { "docid": "fd18b3d4799d23735c48bff3da8fd5ff", "text": "There is a need for an Integrated Event Focused Crawling system to collect Web data about key events. When a disaster or other significant event occurs, many users try to locate the most up-to-date information about that event. Yet, there is little systematic collection and archiving of event information anywhere. We propose intelligent event focused crawling for automatic event tracking and archiving, ultimately leading to effective access. We developed an event model that can capture key event information, and incorporated that model into a focused crawling algorithm. For the focused crawler to leverage the event model in predicting webpage relevance, we developed a function that measures the similarity between two event representations. We then conducted two series of experiments to evaluate our system on two recent events: the California shooting and the Brussels attack. The first experiment series evaluated the effectiveness of our proposed event model representation when assessing the relevance of webpages. Our event model-based representation outperformed the baseline method (topic-only); it showed better results in precision, recall, and F1-score with an improvement of 20% in F1-score. The second experiment series evaluated the effectiveness of the event model-based focused crawler for collecting relevant webpages from the WWW. Our event model-based focused crawler outperformed the state-of-the-art baseline focused crawler (best-first); it showed better results in harvest ratio with an average improvement of 40%.", "title": "" }, { "docid": "a60720be4018e744d9e99c68d29f24c5", "text": "Edentulism can be a debilitating handicap. Zarb described edentulous individuals who could not function with dentures as 'denture cripples'. Most difficulty with complete denture prostheses arises from the inability to function with the mandibular prostheses. Factors that adversely affect successful use of a complete denture on the mandible include: 1) the mobility of the floor of the mouth, 2) thin mucosa lining the alveolar ridge, 3) reduced support area and 4) the motion of the mandible (Figs 1,2). These factors alone can explain the difficulty of wearing a denture on the mandibular arch compared to the maxillary arch. The maxilla exhibits much less mobility on the borders of the denture than the mandible, moreover having a stable palate with thick fibrous tissues available to support the prostheses and resist occlusal forces. These differences explain most of the reasons why patients experience difficulty with using a complete denture on the mandibular arch compared to the maxillary arch.", "title": "" }, { "docid": "517a7833e209403cb3db6f3e58c5f3e4", "text": "Nowadays, ontologies attract growing interest in Data Fusion applications. As a matter of fact, the ontologies are seen as a semantic tool for describing and reasoning about sensor data, objects, relations and general domain theories.
In addition, uncertainty is perhaps one of the most important characteristics of the data and information handled by Data Fusion. However, the fundamental nature of ontologies implies that ontologies describe only asserted and veracious facts of the world. Different probabilistic, fuzzy and evidential approaches already exist to fill this gap; this paper recaps the most popular tools. However, none of the tools exactly meets our purposes. Therefore, we constructed a Dempster-Shafer ontology that can be imported into any specific domain ontology and that enables us to instantiate it in an uncertain manner. We also developed a Java application that enables reasoning about these uncertain ontological instances.", "title": "" } ]
scidocsrr
549184a3cbb356783c77048c86fe6012
GLAD: Group Anomaly Detection in Social Media Analysis
[ { "docid": "19d4662287a5c3ce1cef85fa601b74ba", "text": "This paper compares two approaches in identifying outliers in multivariate datasets; Mahalanobis distance (MD) and robust distance (RD). MD has been known suffering from masking and swamping effects and RD is an approach that was developed to overcome problems that arise in MD. There are two purposes of this paper, first is to identify outliers using MD and RD and the second is to show that RD performs better than MD in identifying outliers. An observation is classified as an outlier if MD or RD is larger than a cut-off value. Outlier generating model is used to generate a set of data and MD and RD are computed from this set of data. The results showed that RD can identify outliers better than MD. However, in non-outliers data the performance for both approaches are similar. The results for RD also showed that RD can identify multivariate outliers much better when the number of dimension is large.", "title": "" } ]
[ { "docid": "523fae58b0da2d96c2b3b126480d8302", "text": "Many online shopping malls in which explicit rating information is not available still have difficulty in providing recommendation services using collaborative filtering (CF) techniques for their users. Applying temporal purchase patterns derived from sequential pattern analysis (SPA) for recommendation services also often makes users unhappy with the inaccurate and biased results obtained by not considering individual preferences. The objective of this research is twofold. One is to derive implicit ratings so that CF can be applied to online transaction data even when no explicit rating information is available, and the other is to integrate CF and SPA for improving recommendation quality. Based on the results of several experiments that we conducted to compare the performance between ours and others, we contend that implicit rating can successfully replace explicit rating in CF and that the hybrid approach of CF and SPA is better than the individual ones. 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "9b176a25a16b05200341ac54778a8bfc", "text": "This paper reports on a study of motivations for the use of peer-to-peer or sharing economy services. We interviewed both users and providers of these systems to obtain different perspectives and to determine if providers are matching their system designs to the most important drivers of use. We found that the motivational models implicit in providers' explanations of their systems' designs do not match well with what really seems to motivate users. Providers place great emphasis on idealistic motivations such as creating a better community and increasing sustainability. Users, on the other hand are looking for services that provide what they need whilst increasing value and convenience. We discuss the divergent models of providers and users and offer design implications for peer system providers.", "title": "" }, { "docid": "61b6021f99649010437096abc13119ed", "text": "Given electroencephalogram (EEG) data measured from several subjects under the same conditions, our goal is to estimate common task-related bases in a linear model that capture intra-subject variations as well as inter-subject variations. Such bases capture the common phenomenon in group data, which is a core of group analysis. In this paper we present a method of nonnegative matrix factorization (NMF) that is well suited to analyzing EEG data of multiple subjects. The method is referred to as group nonnegative matrix factorization (GNMF) where we seek task-related common bases reflecting both intra-subject and inter-subject variations, as well as bases involving individual characteristics. We compare GNMF with NMF and some modified NMFs, in the task of learning spectral features from EEG data. Experiments on brain computer interface (BCI) competition data indicate that GNMF improves the EEG classification performance. In addition, we also show that GNMF is useful in the task of subject-tosubject transfer where the prediction for an unseen subject is performed based on a linear model learned from different subjects in the same group.", "title": "" }, { "docid": "d8d102c3d6ac7d937bb864c69b4d3cd9", "text": "Question Answering (QA) systems are becoming the inspiring model for the future of search engines. 
Recently, the underlying datasets for QA systems have been promoted from unstructured datasets to structured datasets with highly semantic-enriched metadata, but question answering systems still involve serious challenges that leave them far below desired expectations. In this paper, we raise the challenges of building a Question Answering (QA) system, especially with a focus on employing structured data (i.e., a knowledge graph). This paper provides an exhaustive insight into the challenges known so far. Thus, it helps researchers to easily spot open questions for the future research agenda.", "title": "" }, { "docid": "dd0a1a3d6de377efc0a97004376749b6", "text": "Time series often have a temporal hierarchy, with information that is spread out over multiple time scales. Common recurrent neural networks, however, do not explicitly accommodate such a hierarchy, and most research on them has been focusing on training algorithms rather than on their basic architecture. In this paper we study the effect of a hierarchy of recurrent neural networks on processing time series. Here, each layer is a recurrent network which receives the hidden state of the previous layer as input. This architecture allows us to perform hierarchical processing on difficult temporal tasks, and more naturally capture the structure of time series. We show that they reach state-of-the-art performance for recurrent networks in character-level language modeling when trained with simple stochastic gradient descent. We also offer an analysis of the different emergent time scales.", "title": "" }, { "docid": "22ab8eb2b8eaafb2ee72ea0ed7148ca4", "text": "As travel takes a more significant part in our lives, route recommendation services have become a big business and attract many major players in the IT industry. Given a pair of user-specified origin and destination, a route recommendation service aims to provide users with the routes of best travelling experience according to criteria, such as travelling distance, travelling time, traffic condition, etc. However, previous research shows that even the routes recommended by the big-thumb service providers can deviate significantly from the routes travelled by experienced drivers. It means travellers' preferences on route selection are influenced by many latent and dynamic factors that are hard to model exactly with pre-defined formulas. In this work we approach this challenging problem from a very different perspective: leveraging crowds' knowledge to improve the recommendation quality. In this light, CrowdPlanner - a novel crowd-based route recommendation system - has been developed, which requests human workers to evaluate candidate routes recommended by different sources and methods, and determine the best route based on their feedback. In this paper, we particularly focus on two important issues that affect system performance significantly: (1) how to efficiently generate tasks which are simple to answer but possess sufficient information to derive user-preferred routes; and (2) how to quickly identify a set of appropriate domain experts to answer the questions promptly and accurately. Specifically, the task generation component in our system generates a series of informative and concise questions with optimized ordering for a given candidate route set so that workers find them comfortable and easy to answer. In addition, the worker selection component utilizes a set of selection criteria and an efficient algorithm to find the most eligible workers to answer the questions with high accuracy.
A prototype system has been deployed to many voluntary mobile clients and extensive tests on real-scenario queries have shown the superiority of CrowdPlanner in comparison with the results given by map services and popular route mining algorithms.", "title": "" }, { "docid": "4eebd4a2d5c50a2d7de7c36c5296786d", "text": "Depth information has been used in computer vision for a wide variety of tasks. Since active range sensors are currently available at low cost, high-quality depth maps can be used as relevant input for many applications. Background subtraction and video segmentation algorithms can be improved by fusing depth and color inputs, which are complementary and allow one to solve many classic color segmentation issues. In this paper, we describe one fusion method to combine color and depth based on an advanced color-based algorithm. This technique has been evaluated by means of a complete dataset recorded with Microsoft Kinect, which enables comparison with the original method. The proposed method outperforms the others in almost every test, showing more robustness to illumination changes, shadows, reflections and camouflage.", "title": "" }, { "docid": "1d949b64320fce803048b981ae32ce38", "text": "In the field of voice therapy, perceptual evaluation is widely used by expert listeners as a way to evaluate pathological and normal voice quality. This approach is understandably subjective, as it is subject to listeners’ bias, and high inter- and intra-listener variability can be found. As such, research on automatic assessment of pathological voices using a combination of subjective and objective analyses emerged. The present study aimed to develop a complementary automatic assessment system for voice quality based on the well-known GRBAS scale by using a battery of multidimensional acoustical measures through Deep Neural Networks. A total of 44 dimensionality parameters including Mel-frequency Cepstral Coefficients, Smoothed Cepstral Peak Prominence and Long-Term Average Spectrum was adopted. In addition, the state-of-the-art automatic assessment system based on Modulation Spectrum (MS) features and GMM classifiers was used as a comparison system. The classification results using the proposed method revealed a moderate correlation with subjective GRBAS scores of dysphonic severity, and yielded a better performance than the MS-GMM system, with the best accuracy around 81.53%. The findings indicate that such an assessment system can be used as an appropriate evaluation tool in determining the presence and severity of voice disorders.", "title": "" }, { "docid": "c92e7bf3b01e8beaf4d24ec2f6ae805e", "text": "In this work, we introduce a dataset of video annotated with high quality natural language phrases describing the visual content in a given segment of time. Our dataset is based on the Descriptive Video Service (DVS) that is now encoded on many digital media products such as DVDs. DVS is an audio narration describing the visual elements and actions in a movie for the visually impaired. It is temporally aligned with the movie and mixed with the original movie soundtrack. We describe an automatic DVS segmentation and alignment method for movies, that enables us to scale up the collection of a DVS-derived dataset with minimal human intervention. Using this method, we have collected the largest DVS-derived dataset for video description of which we are aware.
Our dataset currently includes over 84.6 hours of paired video/sentences from 92 DVDs and is growing.", "title": "" }, { "docid": "cec597fa08571d3ff7d8a80b9ded1745", "text": "According to the Merriam-Webster dictionary, satire is a trenchant wit, irony, or sarcasm used to expose and discredit vice or folly. Though it is an important language aspect used in everyday communication, the study of satire detection in natural text is often ignored. In this paper, we identify key value components and features for automatic satire detection. Our experiments have been carried out on three datasets, namely, tweets, product reviews and newswire articles. We examine the impact of a number of state-of-the-art features as well as new generalized textual features. By using these features, we outperform the state of the art by a significant 6% margin.", "title": "" }, { "docid": "62e445cabbb5c79375f35d7b93f9a30d", "text": "The recent outbreak of indie games has popularized volumetric terrains to a new level, although video games have used them for decades. These terrains contain geological data, such as materials or cave systems. To improve the exploration experience and due to the large amount of data needed to construct volumetric terrains, industry uses procedural methods to generate them. However, they use their own methods, which are focused on their specific problem domains, lacking customization features. Besides, the evaluation of the procedural terrain generators remains an open issue in this field since no standard metrics have been established yet. In this paper, we propose a new approach to procedural volumetric terrains. It generates completely customizable volumetric terrains with layered materials and other features (e.g., mineral veins, underground caves, material mixtures and underground material flow). The method allows the designer to specify the characteristics of the terrain using intuitive parameters. Additionally, it uses a specific representation for the terrain based on stacked material structures, reducing memory requirements. To overcome the problem in the evaluation of the generators, we propose a new set of metrics for the generated content.", "title": "" }, { "docid": "2a258c1a2e723e998a7bad6708b542a2", "text": "Contents Preface xi Acknowledgements xiii Endorsement xv About the authors xvii 1 A brief history of control and simulation 1", "title": "" }, { "docid": "2faa73eec710382a6f3d658562bf7928", "text": "We appreciate the comments provided by Thompson et al. in their Letter to the Editor, regarding our study “The myth: in vivo degradation of polypropylene-based meshes” [1]. However, we question the motives of the authors, who have notably disclosed that they provide medicolegal testimony on behalf of the plaintiffs in mesh litigation, for bringing their courtroom rhetoric into this discussion. Thompson et al. grossly erred in claiming that we only analyzed the exposed surface of the explants, and not the flaked material that had been removed when cleaning the explants (“removed material”) and ended up in the cleaning solution. As stated in our paper, however, the flaked material was analyzed using light microscopy (LM), scanning electron microscopy (SEM), and Fourier transform infrared (FTIR) microscopy before cleaning and after each of the five sequences of the overall cleaning process. Analyzing the cleaning solution would be redundant and therefore serve no purpose, i.e., the material on the surface was already analyzed and then ended up in the cleaning solution.
Based on our chemical and microscopic analyses (LM, SEM, and FTIR), we concluded that the explanted Prolene meshes that we examined did not degrade or oxidize in vivo. Thompson et al. noted that there are “well over 100 peer-reviewed articles, accepting or describing the degradation of PP [polypropylene] in variable conditions and degradation of other implantable polymers in the body.” They also claimed that they are not aware of any other peer-reviewed journal articles supporting the notion that PP does not degrade in the body. As stated in our paper, it is well documented that unstabilized PP oxidizes readily under ultraviolet (UV) light and upon exposure to high temperatures. However, as we also discuss and cite to in our paper, properly formulated PP is stable in oxidizing media, including elevated temperatures, in in vivo applications, and to a lesser extent, under UV light. Thompson et al. further claimed that our study “does not explain the multiple features of PP degradation reported in the literature.” This is an erroneous statement because they must have either failed to review or chose to ignore the discussion of the literature in our paper. For instance, the literature is replete with the chemistry of PP degradation, confirming simultaneous production of carbonyl groups and loss of molecular weight. It is well known chemistry that oxidative degradation of PP produces carbonyl groups, and if there is no carbonyl group formation, there is no oxidative degradation. To further highlight this point, Clavé et al. [2] have often been cited as supporting the notion that PP degrades in vivo, and as discussed in our manuscript, their findings and statements in the study confirmed that they were unable to prove the existence of PP degradation from any of their various tests. They further failed to include that Liebert’s investigation reported explicitly that stabilized PP, such as Prolene, did not degrade. Thompson et al. also claimed that the degradation process for PP continues until no more PP can be oxidized, with the corresponding appearance of external surface features and hardening and shrinkage of the material. The fallacy of their statement, in the context of the explanted meshes that we examined, is highlighted by the clean fibers that retained their manufacturing extrusion lines and the lack of a wide range of crack morphology (e.g., varying crack depths into the core of the PP fibers) for a given explant and across explants from different patients with different implantation durations. This reply refers to the comment available at doi:10.1007/s00192-016-3233-z.", "title": "" }, { "docid": "1d8b13738cc83d9b892ae716adf28f56", "text": "In the 21st century, organisations cannot succeed in marketing by focusing only on the marketing mix without a focus on its impact on creating customer loyalty. Customer loyalty is considered to be a key ingredient in enhancing the survival of businesses especially in the situations faced by highly competitive industries. While the antecedents of customer loyalty connected with the marketing mix factors have been well investigated, much still remains regarding some of the intermediate conditions created by the marketing mix factors and customer loyalty. This study sought to investigate the relationship between corporate image and customer loyalty in the mobile telecommunication market in Kenya. The study was guided by several hypotheses that tested the nature of the relationship between four aspects of corporate image and customer loyalty.
The study adopted the descriptive survey research design and used a multistage stratified sampling technique to obtain 320 respondents from among students across the campuses of Kenyatta University. Primary data was obtained with questionnaire and analysed using Pearson product-moment correlation coefficient and regression analysis to test the degree of association between the dependent and the independent variables with the aid of the Statistical Product and Service Solutions (SPSS). The new findings of the study showed a positive and statistically significant relationship between the dimensions of corporate image and customer loyalty. The variables significantly predicted customer loyalty. The reported findings in the study raise implications for marketing theory and practice suitable to inform strategic decisions for firms in the telecommunication sector in Kenya.", "title": "" }, { "docid": "b77e862bda1660a57a38e5ef7135eb2a", "text": "In the present days of rapid adoption of cloud services and varying cost models it is imperative to have a system which converts the cloud resource usage data into a measurable and payable cost. The system should be able to collect the cloud resource usage data, process it against a dynamically determined rate to get a charge for which the customer could be billed upon. These operations need to be done in a secured manner with appropriate authentication and authorization checks. As the cloud services are still evolving, the rating, charging and billing components for cloud are at a nascent stage of its maturity. In this paper we are describing a novel modular and micro services based design & architecture for developing a dynamic rating, charging and billing for cloud service providers with emphasis being given to the validation of the architecture through mathematical modeling and the implementation experience of the micro services.", "title": "" }, { "docid": "d48430f65d844c92661d3eb389cdb2f2", "text": "In organizations that use DevOps practices, software changes can be deployed as fast as 500 times or more per day. Without adequate involvement of the security team, rapidly deployed software changes are more likely to contain vulnerabilities due to lack of adequate reviews. The goal of this paper is to aid software practitioners in integrating security and DevOps by summarizing experiences in utilizing security practices in a DevOps environment. We analyzed a selected set of Internet artifacts and surveyed representatives of nine organizations that are using DevOps to systematically explore experiences in utilizing security practices. We observe that the majority of the software practitioners have expressed the potential of common DevOps activities, such as automated monitoring, to improve the security of a system. Furthermore, organizations that integrate DevOps and security utilize additional security activities, such as security requirements analysis and performing security configurations. Additionally, these teams also have established collaboration between the security team and the development and operations teams.", "title": "" }, { "docid": "75cea8f2afbcd65c2a8c024ed1a1efcd", "text": "Communications in datacenter jobs (such as the shuffle operations in MapReduce applications) often involve many parallel flows, which may be processed simultaneously. This highly parallel structure presents new scheduling challenges in optimizing job-level performance objectives in data centers. 
Chowdhury and Stoica introduced the coflow abstraction to capture these communication patterns, and recently Chowdhury et al. developed effective heuristics to schedule coflows. In this paper, we consider the problem of efficiently scheduling coflows with release dates so as to minimize the total weighted completion time, which has been shown to be strongly NP-hard. Our main result is the first polynomial-time deterministic approximation algorithm for this problem, with an approximation ratio of 67/3, and a randomized version of the algorithm, with a ratio of 9+16√2/3. Our results use techniques from both combinatorial scheduling and matching theory, and rely on a clever grouping of coflows. We also run experiments on a Facebook trace to test the practical performance of several algorithms, including our deterministic algorithm. Our experiments suggest that simple algorithms provide effective approximations of the optimal, and that our deterministic algorithm has near-optimal performance.", "title": "" }, { "docid": "67cd0b0caa271c60737f82cf2dc42c1c", "text": "We unify recent neural approaches to one-shot learning with older ideas of associative memory in a model for metalearning. Our model learns jointly to represent data and to bind class labels to representations in a single shot. It builds representations via slow weights, learned across tasks through SGD, while fast weights constructed by a Hebbian learning rule implement one-shot binding for each new task. On the Omniglot, Mini-ImageNet, and Penn Treebank one-shot learning benchmarks, our model achieves state-of-the-art results.", "title": "" }, { "docid": "279d6de6ed6ade25d5ac0ff3d1ecde49", "text": "This paper explores the relationship between TV viewership ratings for Scandinavian's most popular talk show, Skavlan and public opinions expressed on its Facebook page. The research aim is to examine whether the activity on social media affects the number of viewers per episode of Skavlan, how the viewers are affected by discussions on the Talk Show, and whether this creates debate on social media afterwards. By analyzing TV viewer ratings of Skavlan talk show, Facebook activity and text classification of Facebook posts and comments with respect to type of emotions and brand sentiment, this paper identifes patterns in the users' real-world and digital world behaviour.", "title": "" }, { "docid": "480c8d16f3e58742f0164f8c10a206dd", "text": "Dyna is an architecture for reinforcement learning agents that interleaves planning, acting, and learning in an online setting. This architecture aims to make fuller use of limited experience to achieve better performance with fewer environmental interactions. Dyna has been well studied in problems with a tabular representation of states, and has also been extended to some settings with larger state spaces that require function approximation. However, little work has studied Dyna in environments with high-dimensional state spaces like images. In Dyna, the environment model is typically used to generate one-step transitions from selected start states. We applied one-step Dyna to several games from the Arcade Learning Environment and found that the model-based updates offered surprisingly little benefit, even with a perfect model. However, when the model was used to generate longer trajectories of simulated experience, performance improved dramatically. 
This observation also holds when using a model that is learned from experience; even though the learned model is flawed, it can still be used to accelerate learning.", "title": "" } ]
scidocsrr
f45ec165f738561410ac737cb9fd6c78
Measuring Word Relatedness Using Heterogeneous Vector Space Models
[ { "docid": "502abb9980735a090a2f2a8b7510af9b", "text": "This paper presents and compares WordNetbased and distributional similarity approaches. The strengths and weaknesses of each approach regarding similarity and relatedness tasks are discussed, and a combination is presented. Each of our methods independently provide the best results in their class on the RG and WordSim353 datasets, and a supervised combination of them yields the best published results on all datasets. Finally, we pioneer cross-lingual similarity, showing that our methods are easily adapted for a cross-lingual task with minor losses.", "title": "" } ]
[ { "docid": "64e0a1345e5a181191c54f6f9524c96d", "text": "Social media based brand communities are communities initiated on the platform of social media. In this article, we explore whether brand communities based on social media (a special type of online brand communities) have positive effects on the main community elements and value creation practices in the communities as well as on brand trust and brand loyalty. A survey based empirical study with 441 respondents was conducted. The results of structural equation modeling show that brand communities established on social media have positive effects on community markers (i.e., shared consciousness, shared rituals and traditions, and obligations to society), which have positive effects on value creation practices (i.e., social networking, community engagement, impressions management, and brand use). Such communities could enhance brand loyalty through brand use and impression management practices. We show that brand trust has a full mediating role in converting value creation practices into brand loyalty. Implications for practice and future research opportunities are discussed.", "title": "" }, { "docid": "6fca16eee82aab56c170b3186a58dca3", "text": "Deep learning algorithms and networks are vulnerable to perturbed inputs which is known as adversarial attack. Many defense methodologies have been investigated to defend against such adversarial attack. In this work, we propose a novel methodology to defend the existing powerful attack model. We for the first time introduce a new attacking scheme for the attacker and set a practical constraint for white box attack. Under this proposed attacking scheme we present the best defense ever reported against some of the recent strong attacks. It consists of a set of non linear function to process the input data which will make it more robust over adversarial attack. However, we make this processing layer completely hidden from the attacker. Blind pre-processing improves the white box attack accuracy of MNIST from 94.3% to 98.7%. Even with increasing defense when others defenses completely fails, blind preprocessing remains one of the strongest ever reported. Another strength of our defense is that, it eliminates the need for adversarial training as it can significantly increase the MNIST accuracy without adversarial training as well. Additionally, blind pre-processing can also increase the inference accuracy in the face of powerful attack on Cifar-10 and SVHN data set as well without much sacrificing clean data accuracy.", "title": "" }, { "docid": "d940c448ef854fd8c50bdf08a03cd008", "text": "The Multi-task Cascaded Convolutional Networks (MTCNN) has recently demonstrated impressive results on jointly face detection and alignment. By using the hard sample ming and training a model on FER2013 datasets, we exploit the inherent correlation between face detection and facial express-ion recognition, and report the results of facial expression recognition based on MTCNN.", "title": "" }, { "docid": "e41b26803013aa1562ea0f8ff16860c3", "text": "The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. 
We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain.", "title": "" }, { "docid": "3223d52743a64bc599488cdde8ef177b", "text": "The resolution of a comparator is determined by the dc input offset and the ac noise. For mixed-mode applications with significant digital switching, input-referred supply noise can be a significant source of error. This paper proposes an offset compensation technique that can simultaneously minimize input-referred supply noise. Demonstrated with digital offset compensation, this scheme reduces input-referred supply noise to a small fraction (13%) of one least significant bit (LSB) digital offset. In addition, the same analysis can be applied to analog offset compensation.", "title": "" }, { "docid": "c40e969fe24c950d293e768fd4de3435", "text": "The volume of structured and unstructured data has grown at exponential scale in recent days. As a result of this rapid data growth, we are always inundated with a plethora of choices in any product or service. It is very natural to get lost in the amazon of such choices and find it hard to make decisions. The project aims at addressing this problem by using entity recommendation. The two main aspects that the project concentrates on are implementing and presenting more accurate entity recommendations to the user, and dealing with vast amounts of data. The project aims at presenting recommendation results according to the user’s query with efficiency and accuracy. The project makes use of the ListNet ranking algorithm to rank the recommendation results. Query-independent features and query-dependent features are used to come up with ranking scores. Ranking scores decide the order in which the recommendation results are presented to the user. The project makes use of Apache Spark, a distributed big-data processing framework. Spark gives the advantage of handling iterative and interactive algorithms with efficiency and minimal processing time as compared to the traditional map-reduce paradigm. We performed the experiments for the recommendation engine using DBPedia as the dataset and tested the results for the movie domain. We used both query-independent (pagerank) and query-dependent (click-logs) features for ranking purposes. We observed that the ListNet algorithm performs really well by making use of Apache Spark as", "title": "" }, { "docid": "25f2763cd7c71cadf8f86f042841cd48", "text": "This study investigated the design of a trajectory tracking controller for a wheeled inverted pendulum robot using tilt angle control. It is organized as follows: a 3DOF model of the wheeled inverted pendulum robot was derived by the Lagrangian multiplier method. The trajectory tracking algorithm was redesigned in order to track the trajectory by forward and backward motions. The update algorithm is used to move the way point, which is the desired position for the servo controller. The tilt angle control was designed to control the tilt angle and the robot's motion.
The initial conditions and simulation results describe the setup for simulating the motion and the outcome of the trajectory tracking using the tilt angle control. The results show that the robot can track the trajectory well. However, some errors may occur when the robot performs the steering motion, but the robot is still able to reach the goal successfully.", "title": "" }, { "docid": "b9d75f0cbef5b9ea4437e32f27b56650", "text": "User demographics, such as age, gender and ethnicity, are routinely used for targeting content and advertising products to users. Similarly, recommender systems utilize user demographics for personalizing recommendations and overcoming the cold-start problem. Often, privacy-concerned users do not provide these details in their online profiles. In this work, we show that a recommender system can infer the gender of a user with high accuracy, based solely on the ratings provided by users (without additional metadata), and a relatively small number of users who share their demographics. Focusing on gender, we design techniques for effectively adding ratings to a user's profile for obfuscating the user's gender, while having an insignificant effect on the recommendations provided to that user.", "title": "" }, { "docid": "08331361929f3634bc705221ec25287c", "text": "The present study used pleasant and unpleasant music to evoke emotion and functional magnetic resonance imaging (fMRI) to determine neural correlates of emotion processing. Unpleasant (permanently dissonant) music contrasted with pleasant (consonant) music showed activations of amygdala, hippocampus, parahippocampal gyrus, and temporal poles. These structures have previously been implicated in the emotional processing of stimuli with (negative) emotional valence; the present data show that a cerebral network comprising these structures can be activated during the perception of auditory (musical) information. Pleasant (contrasted to unpleasant) music showed activations of the inferior frontal gyrus (IFG, inferior Brodmann's area (BA) 44, BA 45, and BA 46), the anterior superior insula, the ventral striatum, Heschl's gyrus, and the Rolandic operculum. IFG activations appear to reflect processes of music-syntactic analysis and working memory operations. Activations of Rolandic opercular areas possibly reflect the activation of mirror-function mechanisms during the perception of the pleasant tunes. Rolandic operculum, anterior superior insula, and ventral striatum may form a motor-related circuitry that serves the formation of (premotor) representations for vocal sound production during the perception of pleasant auditory information. In all of the mentioned structures, except the hippocampus, activations increased over time during the presentation of the musical stimuli, indicating that the effects of emotion processing have temporal dynamics; the temporal dynamics of emotion have so far mainly been neglected in the functional imaging literature.", "title": "" }, { "docid": "ebbc0b7aea9fafa1258f337fab4d20e8", "text": "This paper presents a new design of high frequency DC/AC inverter for home applications using fuel cells or photovoltaic array sources. A battery bank parallel to the DC link is provided to take care of the slow dynamic response of the source. The design is based on a push-pull DC/DC converter followed by a full-bridge PWM inverter topology. The nominal power rating is 10 kW.
Actual design parameters, procedure and experimental results of a 1.5 kW prototype are provided. The objective of this paper is to explore the possibility of making renewable sources of energy utility interactive by means of low cost power electronic interface.", "title": "" }, { "docid": "934ca8aa2798afd6e7cd4acceeed839a", "text": "This paper begins with an argument that most measure development in the social sciences, with its reliance on correlational techniques as a tool, falls short of the requirements for constructing meaningful, unidimensional measures of human attributes. By demonstrating how rating scales are ordinal-level data, we argue the necessity of converting these to equal-interval units to develop a measure that is both qualitatively and quantitatively defensible. This requires that the empirical results and theoretical explanation are questioned and adjusted at each step of the process. In our response to the reviewers, we describe how this approach was used to develop the Game Engagement Questionnaire (GEQ), including its emphasis on examining a continuum of involvement in violent video games. The GEQ is an empirically sound measure focused on one player characteristic that may be important in determining game influence.", "title": "" }, { "docid": "a5da5d415f58221eeef0b3d4adcd466a", "text": "Two fundamental problems in computational game theory are computing a Nash equilibrium and learning to exploit opponents given observations of their play (opponent exploitation). The latter is perhaps even more important than the former: Nash equilibrium does not have a compelling theoretical justification in game classes other than two-player zero-sum, and for all games one can potentially do better by exploiting perceived weaknesses of the opponent than by following a static equilibrium strategy throughout the match. The natural setting for opponent exploitation is the Bayesian setting where we have a prior model that is integrated with observations to create a posterior opponent model that we respond to. The most natural, and a well-studied prior distribution is the Dirichlet distribution. An exact polynomial-time algorithm is known for best-responding to the posterior distribution for an opponent assuming a Dirichlet prior with multinomial sampling in normal-form games; however, for imperfect-information games the best known algorithm is based on approximating an infinite integral without theoretical guarantees. We present the first exact algorithm for a natural class of imperfect-information games. We demonstrate that our algorithm runs quickly in practice and outperforms the best prior approaches. We also present an algorithm for a uniform prior.", "title": "" }, { "docid": "5ee544ed19ef78fa9212caea791ac4cf", "text": "This paper describes the ecosystem of R add-on packages deve lop d around the infrastructure provided by the packagearules. The packages provide comprehensive functionality for ana lyzing interesting patterns including frequent itemsets, associ ati n rules, frequent sequences and for building applications like associative classification. After di scussing the ecosystem’s design we illustrate the ease of mining and visualizing rules with a short example .", "title": "" }, { "docid": "fdda2a3c2148fcfd79bda7d688410b0b", "text": "Large scale dissemination of power grid entities such as distributed energy resources (DERs), electric vehicles (EVs), and smart meters has provided diverse challenges for Smart Grid automation. 
Novel control models such as virtual power plants (VPPs), microgrids, and smart houses introduce a new set of automation and integration demands that surpass the capabilities of currently deployed solutions. Therefore, there is a strong need for finding an alternative technical approach, which can resolve identified issues and fulfill automation prerequisites implied by the Smart Grid vision. This paper presents a novel standards-compliant solution for accelerated Smart Grid integration and automation based on semantic services. Accordingly, the two most influential industrial automation standards, IEC 61850 and OPC Unified Architecture (OPC UA), have been extensively analyzed in order to provide a value-added service-oriented integration framework for the Smart Grid.", "title": "" }, { "docid": "0761383a10519f2c2f1aac702c1399c7", "text": "The IOT is a huge and widely distributed network in which things connect to things. It connects all articles to the Internet through information sensing devices. It is the second information wave after the computer, the Internet and mobile communication networks. With the rapid development of the Internet of Things, its security problems have become more prominent. This paper addresses the security issues and key technologies in IOT. It elaborates the basic concepts and principles of the IOT and combines the relevant characteristics of the IOT, as well as the main international research results, to analyze the security issues and key technologies of the IOT, so that this research can play a positive role in the construction and development of the IOT.", "title": "" }, { "docid": "2d99dbf227809c6ef1c4c94be31cc192", "text": "The Eastern Band of the Cherokee Indians live in one of the planet’s most floristically diverse temperate zone environments. Their relationship with the local flora was initially investigated by James Mooney and revisited by subsequent researchers such as Frans Olbrechts, John Witthoft, and William Banks, among others. This work interprets the collective data recorded by Cherokee ethnographers, much of it in the form of unpublished archival material, as it reflects the Cherokee ethnobotanical classification system and their medical ethnobotany. Mooney’s proposed classification system for the Cherokee is remarkably similar to contemporary models of folk biological classification systems. His recognition of this inherent system, 60 years before contemporary models were proposed, provides evidence for their universality in human cognition. Examination of the collective data concerning Cherokee medical ethnobotany provides a basis for considering change in Cherokee ethnobotanical knowledge, for reevaluation of the statements of the various researchers, and a means to explore trends that were not previously apparent. Index Words: Eastern Band of the Cherokee Indians, Ethnobiological Classification Systems, Ethnohistory, Ethnomedicine, Historical Ethnobotany, Medical Ethnobotany, Native American Medicine, Traditional Botanical Knowledge. ETHNOBOTANICAL CLASSIFICATION SYSTEM AND MEDICAL ETHNOBOTANY OF THE EASTERN BAND OF THE CHEROKEE INDIANS", "title": "" }, { "docid": "1041bc70b6ee8f8a3bffa30b624f9ae7", "text": "We conducted a double-blind, placebo-controlled study of acyclovir prophylaxis against infection with herpes simplex virus (HSV) in 20 seropositive recipients of bone-marrow transplants. Acyclovir or placebo was administered for 18 days, starting three days before transplantation.
Culture-positive HSV lesions developed during the study in seven of the 10 patients who received placebo. In contrast, no such lesions appeared in the 10 patients who received acyclovir (P ≅ 0.003). None of the patients had evidence of drug toxicity. Five of the patients treated with acyclovir had mild culture-positive HSV infections after cessation of the drug, and two additional patients shed virus without having lesions. Acyclovir appears to be a potent inhibitor of HSV replication. Although acyclovir does not appear to eradicate latent infection, it can provide effective prophylaxis against reactivated infections.", "title": "" }, { "docid": "081592756c4ee3f7dcb8990ae30cfbd0", "text": "Existing research assessing human operators' trust in automation and robots has primarily examined trust as a steady-state variable, with little emphasis on the evolution of trust over time. With the goal of addressing this research gap, we present a study exploring the dynamic nature of trust. We defined trust of entirety as a measure that accounts for trust across a human's entire interactive experience with automation, and first identified alternatives to quantify it using real-time measurements of trust. Second, we provided a novel model that attempts to explain how trust of entirety evolves as a user interacts repeatedly with automation. Lastly, we investigated the effects of automation transparency on momentary changes of trust. Our results indicated that trust of entirety is better quantified by the average measure of \"area under the trust curve\" than the traditional post-experiment trust measure. In addition, we found that trust of entirety evolves and eventually stabilizes as an operator repeatedly interacts with a technology. Finally, we observed that a higher level of automation transparency may mitigate the \"cry wolf\" effect -- wherein human operators begin to reject an automated system due to repeated false alarms.", "title": "" }, { "docid": "bbb1e41d86ec2507f829febf22dc6c13", "text": "Chirp-sequence-based Frequency Modulation Continuous Wave (FMCW) radar is effective at detecting the range and velocity of a target. However, the target detection algorithm is based on a two-dimensional Fast Fourier Transform, which uses a great deal of data over several PRIs (Pulse Repetition Intervals). In particular, if a multiple-receive channel is employed to estimate the angle position of a target, even more computational complexity is required. In this paper, we report on how a newly developed signal processing module is implemented in the FPGA, and on its performance measured under test conditions. Moreover, we present results from analysis of the use of hardware resources and processing times.", "title": "" }, { "docid": "f406cb895202ab26cfecfdbc30947733", "text": "A new set of lower limb orthoses was developed for the WalkTrainer project. This mobile reeducation device for paralyzed people allows overground gait training combining closed loop electrical muscle stimulation and lower limb guiding while walking. An active body weight support system offers precise body weight unloading during locomotion. A 6 DOF parallel robot moves the pelvis in any desired position and orientation. The lower extremity orthosis is composed of two key parts. First, a purely passive lightweight exoskeleton acts as the interface between the human leg and the machine. A 1 DOF knee orthotic joint is also designed to prevent hyperextension.
Second, the active part - composed of a mechanical leg equipped with motors and sensors - is located behind each human leg, with its base fixed to the WalkTrainer base frame. The two kinematic chains are connected with appropriate linkages at the thigh and the ankle joint. Actuation of the hip, knee and ankle joints is thus provided for their flexion/extension axis. The active mechanism operates only within the sagittal plane and guides the ankle-foot subsystem. Thigh and shank add/abduction movements are possible and even essential since the pelvis moves in a 3D space. This achievement prevents the scissors effect while allowing natural walking motion at the other joints. This paper describes the design and development of the lower extremity orthosis. Starting from a biomechanical approach, the needed actuation and the mechanical structure are discussed as well as the interface between the patient and the robot.", "title": "" } ]
scidocsrr
905ad2b87ff98544f112200e17e8789d
Context Dependent Movie Recommendations Using a Hierarchical Bayesian Model
[ { "docid": "401c8a60d89af590925c13e7d22da2ff", "text": "We consider the problem of multi-task learning, that is, learning multiple related functions. Our approach is based on a hierarchical Bayesian framework, that exploits the equivalence between parametric linear models and nonparametric Gaussian processes (GPs). The resulting models can be learned easily via an EM-algorithm. Empirical studies on multi-label text categorization suggest that the presented models allow accurate solutions of these multi-task problems.", "title": "" }, { "docid": "3dab0441ca1e4fb39296be8006611690", "text": "A content-based personalized recommendation system learns user specific profiles from user feedback so that it can deliver information tailored to each individual user's interest. A system serving millions of users can learn a better user profile for a new user, or a user with little feedback, by borrowing information from other users through the use of a Bayesian hierarchical model. Learning the model parameters to optimize the joint data likelihood from millions of users is very computationally expensive. The commonly used EM algorithm converges very slowly due to the sparseness of the data in IR applications. This paper proposes a new fast learning technique to learn a large number of individual user profiles. The efficacy and efficiency of the proposed algorithm are justified by theory and demonstrated on actual user data from Netflix and MovieLens.", "title": "" } ]
[ { "docid": "c8ba829a6b0e158d1945bbb0ed68045b", "text": "Specific pieces of music can elicit strong emotions in listeners and, possibly in connection with these emotions, can be remembered even years later. However, episodic memory for emotional music compared with less emotional music has not yet been examined. We investigated whether emotional music is remembered better than less emotional music. Also, we examined the influence of musical structure on memory performance. Recognition of 40 musical excerpts was investigated as a function of arousal, valence, and emotional intensity ratings of the music. In the first session the participants judged valence and arousal of the musical pieces. One week later, participants listened to the 40 old and 40 new musical excerpts randomly interspersed and were asked to make an old/new decision as well as to indicate arousal and valence of the pieces. Musical pieces that were rated as very positive were recognized significantly better. Musical excerpts rated as very positive are remembered better. Valence seems to be an important modulator of episodic long-term memory for music. Evidently, strong emotions related to the musical experience facilitate memory formation and retrieval.", "title": "" }, { "docid": "518d8e621e1239a94f50be3d5e2982f9", "text": "With a number of emerging biometric applications there is a dire need of less expensive authentication technique which can authenticate even if the input image is of low resolution and low quality. Foot biometric has both the physiological and behavioral characteristics still it is an abandoned field. The reason behind this is, it involves removal of shoes and socks while capturing the image and also dirty feet makes the image noisy. Cracked heels is also a reason behind noisy images. Physiological and behavioral biometric characteristics makes it a great alternative to computational intensive algorithms like fingerprint, palm print, retina or iris scan [1] and face. On one hand foot biometric has minutia features which is considered totally unique. The uniqueness of minutiae feature is already tested in fingerprint analysis [2]. On the other hand it has geometric features like hand geometry which also give satisfactory results in recognition. We can easily apply foot biometrics at those places where people inherently remove their shoes, like at holy places such as temples and mosque people remove their shoes before entering from the perspective of faith, and also remove shoes at famous monuments such as The Taj Mahal, India from the perspective of cleanliness and preservation. Usually these are the places with a strong foot fall and high risk security due to chaotic crowd. Most of the robbery, theft, terrorist attacks, are happening at these places. One very fine example is Akshardham attack in September 2002. Hence we can secure these places using low cost security algorithms based on footprint recognition.", "title": "" }, { "docid": "3129b636e3739281ba59721765eeccb9", "text": "Despite the rapid adoption of Facebook as a means of photo sharing, minimal research has been conducted to understand user gratification behind this activity. In order to address this gap, the current study examines users’ gratifications in sharing photos on Facebook by applying Uses and Gratification (U&G) theory. 
An online survey completed by 368 respondents identified six different gratifications, namely, affection, attention seeking, disclosure, habit, information sharing, and social influence, behind sharing digital photos on Facebook. Some of the study’s prominent findings were: age was in positive correlation with disclosure and social influence gratifications; gender differences were identified among habit and disclosure gratifications; number of photos shared was negatively correlated with habit and information sharing gratifications. The study’s implications can be utilized to refine existing and develop new features and services bridging digital photos and social networking services.", "title": "" }, { "docid": "f527219bead3dd4d64132315a9f0ff77", "text": "Recently, the Internet of Things (IOT) has obtained rapid development and has a significant impact on the military field. This paper first proposes a conception of military internet of things (MIOT) and analyzes the architecture of MIOT in detail. Then, three modes of MIOT, i.e., information sensing, information transmission and information serving, are respectively studied to show various military domain applications. Finally, an application assumption of MIOT from the weapon control aspect is given to validate the proposed application modes.", "title": "" }, { "docid": "f555a50f629bd9868e1be92ebdcbc154", "text": "The transformation of traditional energy networks to smart grids revolutionizes the energy industry in terms of reliability, performance, and manageability by providing bi-directional communications to operate, monitor, and control power flow and measurements. However, communication networks in smart grid bring increased connectivity with increased severe security vulnerabilities and challenges. Smart grid can be a prime target for cyber terrorism because of its critical nature. As a result, smart grid security is already getting a lot of attention from governments, energy industries, and consumers. There have been several research efforts for securing smart grid systems in academia, government and industries. This article provides a comprehensive study of challenges in smart grid security, which we concentrate on the problems and proposed solutions. Then, we outline current state of the research and future perspectives.With this article, readers can have a more thorough understanding of smart grid security and the research trends in this topic.", "title": "" }, { "docid": "7eaf23745e25a7beb5183457599bcdaf", "text": "Perceptual experience consists of an enormous number of possible states. Previous fMRI studies have predicted a perceptual state by classifying brain activity into prespecified categories. Constraint-free visual image reconstruction is more challenging, as it is impractical to specify brain activity for all possible images. In this study, we reconstructed visual images by combining local image bases of multiple scales, whose contrasts were independently decoded from fMRI activity by automatically selecting relevant voxels and exploiting their correlated patterns. Binary-contrast, 10 x 10-patch images (2(100) possible states) were accurately reconstructed without any image prior on a single trial or volume basis by measuring brain activity only for several hundred random images. Reconstruction was also used to identify the presented image among millions of candidates. 
The results suggest that our approach provides an effective means to read out complex perceptual states from brain activity while discovering information representation in multivoxel patterns.", "title": "" }, { "docid": "b205346e003c429cd2b32dc759921643", "text": "Sentence correction has been an important emerging issue in computer-assisted language learning. However, existing techniques based on grammar rules or statistical machine translation are still not robust enough to tackle the common errors in sentences produced by second language learners. In this paper, a relative position language model and a parse template language model are proposed to complement traditional language modeling techniques in addressing this problem. A corpus of erroneous English-Chinese language transfer sentences along with their corrected counterparts is created and manually judged by human annotators. Experimental results show that compared to a state-of-the-art phrase-based statistical machine translation system, the error correction performance of the proposed approach achieves a significant improvement using human evaluation.", "title": "" }, { "docid": "d6e178e87601b2a7d442b97e42c34350", "text": "BACKGROUND\nNo systematic review and narrative synthesis on personal recovery in mental illness has been undertaken.\n\n\nAIMS\nTo synthesise published descriptions and models of personal recovery into an empirically based conceptual framework.\n\n\nMETHOD\nSystematic review and modified narrative synthesis.\n\n\nRESULTS\nOut of 5208 papers that were identified and 366 that were reviewed, a total of 97 papers were included in this review. The emergent conceptual framework consists of: (a) 13 characteristics of the recovery journey; (b) five recovery processes comprising: connectedness; hope and optimism about the future; identity; meaning in life; and empowerment (giving the acronym CHIME); and (c) recovery stage descriptions which mapped onto the transtheoretical model of change. Studies that focused on recovery for individuals of Black and minority ethnic (BME) origin showed a greater emphasis on spirituality and stigma and also identified two additional themes: culturally specific facilitating factors and collectivist notions of recovery.\n\n\nCONCLUSIONS\nThe conceptual framework is a theoretically defensible and robust synthesis of people's experiences of recovery in mental illness. This provides an empirical basis for future recovery-oriented research and practice.", "title": "" }, { "docid": "cb6d60c4948bcf2381cb03a0e7dc8312", "text": "While humor has been historically studied from a psychological, cognitive and linguistic standpoint, its study from a computational perspective is an area yet to be explored in Computational Linguistics. There exist some previous works, but a characterization of humor that allows its automatic recognition and generation is far from being specified. In this work we build a crowdsourced corpus of labeled tweets, annotated according to its humor value, letting the annotators subjectively decide which are humorous. A humor classifier for Spanish tweets is assembled based on supervised learning, reaching a precision of 84% and a recall of 69%.", "title": "" }, { "docid": "9b16eaa154370895b446cc4e66c9a8a9", "text": "The 15 kV SiC N-IGBT is the state-of-the-art high voltage power semiconductor device developed by Cree. The SiC IGBT is exposed to a peak stress of 10-11 kV in power converter systems, with punch-through turn-on dv/dt over 100 kV/μs and turn-off dv/dt about 35 kV/μs. 
Such high dv/dt requires ultralow coupling capacitance in the dc-dc isolation stage of the gate driver for maintaining fidelity of the signals on the control-supply ground side. Accelerated aging of the insulation in the isolation stage is another serious concern. In this paper, a simple transformer based isolation with a toroid core is investigated for the above requirements of the 15 kV IGBT. The gate driver prototype has been developed with over 100 kV dc insulation capability, and its inter-winding coupling capacitance has been found to be 3.4 pF and 13 pF at 50 MHz and 100 MHz respectively. The performance of the gate driver prototype has been evaluated up to the above mentioned specification using double-pulse tests on high-side IGBT in a half-bridge configuration. The continuous testing at 5 kHz has been performed till 8 kV, and turn-on dv/dt of 85 kV/μs on a buck-boost converter. The corresponding experimental results are presented. Also, the test methodology of evaluating the gate driver at such high voltage, without a high voltage power supply is discussed. Finally, experimental results validating fidelity of the signals on the control-ground side are provided to show the influence of increased inter-winding coupling capacitance on the performance of the gate driver.", "title": "" }, { "docid": "812abd8ee942c352bd2b141e3c88ba21", "text": "Video based action recognition is one of the important and challenging problems in computer vision research. Bag of visual words model (BoVW) with local features has been very popular for a long time and obtained the state-of-the-art performance on several realistic datasets, such as the HMDB51, UCF50, and UCF101. BoVW is a general pipeline to construct a global representation from local features, which is mainly composed of five steps; (i) feature extraction, (ii) feature pre-processing, (iii) codebook generation, (iv) feature encoding, and (v) pooling and normalization. Although many effort s have been made in each step independently in different scenarios, their effects on action recognition are still unknown. Meanwhile, video data exhibits different views of visual patterns , such as static appearance and motion dynamics. Multiple descriptors are usually extracted to represent these different views. Fusing these descriptors is crucial for boosting the final performance of an action recognition system. This paper aims to provide a comprehensive study of all steps in BoVW and different fusion methods, and uncover some good practices to produce a state-of-the-art action recognition system. Specifically, we explore two kinds of local features, ten kinds of encoding methods, eight kinds of pooling and normalization strategies, and three kinds of fusion methods. We conclude that every step is crucial for contributing to the final recognition rate and improper choice in one of the steps may counteract the performance improvement of other steps. Furthermore, based on our comprehensive study, we propose a simple yet effective representation, called hybrid supervector , by exploring the complementarity of different BoVW frameworks with improved dense trajectories. Using this representation, we obtain impressive results on the three challenging datasets; HMDB51 (61.9%), UCF50 (92.3%), and UCF101 (87.9%). © 2016 Elsevier Inc. 
All rights reserved.", "title": "" }, { "docid": "ac4b6ec32fe607e5e9981212152901f5", "text": "As an important matrix factorization model, Nonnegative Matrix Factorization (NMF) has been widely used in information retrieval and data mining research. Standard Nonnegative Matrix Factorization is known to use the Frobenius norm to calculate the residual, making it sensitive to noises and outliers. It is desirable to use robust NMF models for practical applications, in which usually there are many data outliers. It has been studied that the 2,1, or 1-norm can be used for robust NMF formulations to deal with data outliers. However, these alternatives still suffer from the extreme data outliers. In this paper, we present a novel robust capped norm orthogonal Nonnegative Matrix Factorization model, which utilizes the capped norm for the objective to handle these extreme outliers. Meanwhile, we derive a new efficient optimization algorithm to solve the proposed non-convex non-smooth objective. Extensive experiments on both synthetic and real datasets show our proposed new robust NMF method consistently outperforms related approaches.", "title": "" }, { "docid": "c224cc83b4c58001dbbd3e0ea44a768a", "text": "We review the current status of research in dorsal-ventral (D-V) patterning in vertebrates. Emphasis is placed on recent work on Xenopus, which provides a paradigm for vertebrate development based on a rich heritage of experimental embryology. D-V patterning starts much earlier than previously thought, under the influence of a dorsal nuclear -Catenin signal. At mid-blastula two signaling centers are present on the dorsal side: The prospective neuroectoderm expresses bone morphogenetic protein (BMP) antagonists, and the future dorsal endoderm secretes Nodal-related mesoderm-inducing factors. When dorsal mesoderm is formed at gastrula, a cocktail of growth factor antagonists is secreted by the Spemann organizer and further patterns the embryo. A ventral gastrula signaling center opposes the actions of the dorsal organizer, and another set of secreted antagonists is produced ventrally under the control of BMP4. The early dorsal -Catenin signal inhibits BMP expression at the transcriptional level and promotes expression of secreted BMP antagonists in the prospective central nervous system (CNS). In the absence of mesoderm, expression of Chordin and Noggin in ectoderm is required for anterior CNS formation. FGF (fibroblast growth factor) and IGF (insulin-like growth factor) signals are also potent neural inducers. Neural induction by anti-BMPs such as Chordin requires mitogen-activated protein kinase (MAPK) activation mediated by FGF and IGF. These multiple signals can be integrated at the level of Smad1. Phosphorylation by BMP receptor stimulates Smad1 transcriptional activity, whereas phosphorylation by MAPK has the opposite effect. Neural tissue is formed only at very low levels of activity of BMP-transducing Smads, which require the combination of both low BMP levels and high MAPK signals. Many of the molecular players that regulate D-V patterning via regulation of BMP signaling have been conserved between Drosophila and the vertebrates.", "title": "" }, { "docid": "120c8676e1011d5461efab28769584ca", "text": "To effectively act on the same physical space, robots must first communicate to share and fuse the map of the area in which they operate. 
For long-term online operation, the merging of maps from heterogeneous devices must be fast and allow for scalable growth in both the number of clients and the size of the map. This paper presents a system which allows multiple clients to share and merge maps built from a state-of-the-art relative SLAM system. Maps can also be augmented with virtual elements that are consistently shared by all the clients. The visual-inertial mapping framework which underlies this system is discussed, along with the server architecture and novel integrated multi-session loop closure system. We show quantitative results of the system. The map fusion benefits are demonstrated with an example augmented reality application.", "title": "" }, { "docid": "1c89f0640163d09d70ca854d8ad486d5", "text": "This work proposes to learn autoencoders with sparse connections. Prior studies on autoencoders enforced sparsity on the neuronal activity; these are different from our proposed approach - we learn sparse connections. Sparsity in connections helps in learning (and keeping) the important relations while trimming the irrelevant ones. We have tested the performance of our proposed method on two tasks - classification and denoising. For classification we have compared against stacked autneencoders, contractive autoencoders, deep belief network, sparse deep neural network and optimal brain damage neural network; the denoising performance was compared against denoising autoencoder and sparse (activity) autoencoder. In both the tasks our proposed method yields superior results.", "title": "" }, { "docid": "97bf94b65caf7f4cfaf19699a69d856c", "text": "Customer churn, i.e., losing a customer to the competition, is a major problem in mobile telecommunications. This paper investigates the added value of combining regular tabular data mining with social network mining, leveraging the graph formed by communications between customers. We extend classical tabular churn datasets with predictors derived from social network neighborhoods. We also extend traditional social network spreading activation models with information from classical tabular churn models. Experiments show that in the second approach the combination of tabular and social network mining improves results, but overall the traditional tabular churn models score best.", "title": "" }, { "docid": "2cff48b7c30c310e0d334e5983ae8f1f", "text": "In this paper we introduce a low-latency monaural source separation framework using a Convolutional Neural Network (CNN). We use a CNN to estimate time-frequency soft masks which are applied for source separation. We evaluate the performance of the neural network on a database comprising of musical mixtures of three instruments: voice, drums, bass as well as other instruments which vary from song to song. The proposed architecture is compared to a Multilayer Perceptron (MLP), achieving on-par results and a significant improvement in processing time. The algorithm was submitted to source separation evaluation campaigns to test efficiency, and achieved competitive results.", "title": "" }, { "docid": "39c90853a781bf49223883bfa814c69d", "text": "The current state of the art of intestinal lymphatic transport is given by reviewing the more recent publications, which have utilized lipid-based vehicles. 
The results published often show variable trends depending on, the design of the vehicle, the components used, the physicochemical properties of the drug, the animal model and experimental techniques, these variables often make direct comparisons difficult. Traditionally intestinal lymphatic delivery has been expressed as a percentage of the dose transported in the lymph. Using this parameter results obtained to date, with lipid-based vehicles, are somewhat disappointing maximising at approximately 20-30%, for highly lipophilic compounds including DDT and halofantrine (Hf). Recent data, monitoring Hf, in a fed versus fasted dog study, have shown that a higher degree of lymphatic transport is possible (>50% dose) in the postprandial state, this study should result in stimulating renewed interest in the potential of achieving significant levels of lymphatic targeting. Although some relevant features controlling lymphatic transport have been identified over the years a deeper appreciation of all the mechanisms, which is vital for therapeutic exploitation of lymphatic transport, is still unrealized. This review analyses the success and limitations of a formulation approach using lipid-based vehicles and highlights potential areas for further research.", "title": "" }, { "docid": "a2e0163aebb348d3bfab7ebac119e0c0", "text": "Herein we report the first study of the oxygen reduction reaction (ORR) catalyzed by a cofacial porphyrin scaffold accessed in high yield (overall 53%) using coordination-driven self-assembly with no chromatographic purification steps. The ORR activity was investigated using chemical and electrochemical techniques on monomeric cobalt(II) tetra(meso-4-pyridyl)porphyrinate (CoTPyP) and its cofacial analogue [Ru8(η6-iPrC6H4Me)8(dhbq)4(CoTPyP)2][OTf]8 (Co Prism) (dhbq = 2,5-dihydroxy-1,4-benzoquinato, OTf = triflate) as homogeneous oxygen reduction catalysts. Co Prism is obtained in one self-assembly step that organizes six total building blocks, two CoTPyP units and four arene-Ru clips, into a cofacial motif previously demonstrated with free-base, Zn(II), and Ni(II) porphyrins. Turnover frequencies (TOFs) from chemical reduction (66 vs 6 h-1) and rate constants of overall homogeneous catalysis (kobs) determined from rotating ring-disk experiments (1.1 vs 0.05 h-1) establish a cofacial enhancement upon comparison of the activities of Co Prism and CoTPyP, respectively. Cyclic voltammetry was used to initially probe the electrochemical catalytic behavior. Rotating ring-disk electrode studies were completed to probe the Faradaic efficiency and obtain an estimate of the rate constant associated with the ORR.", "title": "" }, { "docid": "c048e9d40670f07e642c00e1cb7874e0", "text": "Dietary carbohydrates are a group of chemically defined substances with a range of physical and physiological properties and health benefits. As with other macronutrients, the primary classification of dietary carbohydrate is based on chemistry, that is character of individual monomers, degree of polymerization (DP) and type of linkage (α or β), as agreed at the Food and Agriculture Organization/World Health Organization Expert Consultation in 1997. This divides carbohydrates into three main groups, sugars (DP 1–2), oligosaccharides (short-chain carbohydrates) (DP 3–9) and polysaccharides (DP⩾10). Within this classification, a number of terms are used such as mono- and disaccharides, polyols, oligosaccharides, starch, modified starch, non-starch polysaccharides, total carbohydrate, sugars, etc. 
While effects of carbohydrates are ultimately related to their primary chemistry, they are modified by their physical properties. These include water solubility, hydration, gel formation, crystalline state, association with other molecules such as protein, lipid and divalent cations and aggregation into complex structures in cell walls and other specialized plant tissues. A classification based on chemistry is essential for a system of measurement, prediction of properties and estimation of intakes, but does not allow a simple translation into nutritional effects since each class of carbohydrate has overlapping physiological properties and effects on health. This dichotomy has led to the use of a number of terms to describe carbohydrate in foods, for example intrinsic and extrinsic sugars, prebiotic, resistant starch, dietary fibre, available and unavailable carbohydrate, complex carbohydrate, glycaemic and whole grain. This paper reviews these terms and suggests that some are more useful than others. A clearer understanding of what is meant by any particular word used to describe carbohydrate is essential to progress in translating the growing knowledge of the physiological properties of carbohydrate into public health messages.", "title": "" } ]
scidocsrr
da7c33999df6356c6e09f9d41354842d
Managing the Pre- and Post-analytical Phases of the Total Testing Process
[ { "docid": "f1414fa3b4e4828489fd9da99892a795", "text": "PERSON APPROACH The long-standing and widespread tradition of the person approach focuses on the unsafe acts—errors and procedural violations—of people on the front line: nurses, physicians, surgeons, anesthetists, pharmacists, and the like. It views these unsafe acts as arising primarily from aberrant mental processes such as forgetfulness, inattention, poor motivation, carelessness, negligence, and recklessness. The associated countermeasures are directed mainly at reducing unwanted variability in human behavior. These methods include poster campaigns that appeal to people’s fear, writing another procedure (or adding to existing ones), disciplinary measures, threat of litigation, retraining, naming, blaming, and shaming. Followers of these approaches tend to treat errors as moral issues, assuming that bad things happen to bad people—what psychologists have called the “just-world hypothesis.”", "title": "" } ]
[ { "docid": "67fdf55cbd317cc46b871763772c777f", "text": "Aims: The objective of this study was to assess the current state of continuous auditing in the state departments in Kenya and to adapt a framework to implement continuous auditing by the Public Sector Audit Organization. Study Design: Adoption of existing model and survey using questionnaires. Place and Duration of Study: Kenya, 2013. Methodology: Existing continuous auditing models were studied and the Integrated Continuous Auditing, Monitoring and Assurance Conceptual Model was adopted for use. The model was tested using data collected using questionnaires. Data was collected from 76 auditors in the Public Sector Audit Organization. A database system of a government Ministry was used to demonstrate how data can be obtained directly from a client system. Results: The study found the need for training in the skills required for continuous auditing and the acquisition of IT resources and infrastructure were necessary in realizing continuous auditing. Conclusion: The paper shows that Public Sector Audit Organization in Kenya, like institutions in other countries such as USA [8] and Australia [11], are preparing to advance from traditional audit to continuous auditing. The Integrated Continuous Auditing, Monitoring and Assurance Conceptual Model would offer a good starting point. Original Research Article British Journal of Economics, Management & Trade, 4(11): 1644-1654, 2014 1645", "title": "" }, { "docid": "4816d3c4ca52f2ba592b29636b4a3c35", "text": "In this paper, we describe a system that applies maximum entropy (ME) models to the task of named entity recognition (NER). Starting with an annotated corpus and a set of features which are easily obtainable for almost any language, we first build a baseline NE recognizer which is then used to extract the named entities and their context information from additional nonannotated data. In turn, these lists are incorporated into the final recognizer to further improve the recognition accuracy.", "title": "" }, { "docid": "ef0c9796938bfb84c136276bd94d9a08", "text": "A review of some papers published in the last fifty years that focus on the semiconducting metal oxide (SMO) based sensors for the selective and sensitive detection of various environmental pollutants is presented.", "title": "" }, { "docid": "df6b1cb3efbababa8aa0a2c04b999cf0", "text": "A cognitive radio wireless sensor network is one of the candidate areas where cognitive techniques can be used for opportunistic spectrum access. Research in this area is still in its infancy, but it is progressing rapidly. The aim of this study is to classify the existing literature of this fast emerging application area of cognitive radio wireless sensor networks, highlight the key research that has already been undertaken, and indicate open problems. This paper describes the advantages of cognitive radio wireless sensor networks, the difference between ad hoc cognitive radio networks, wireless sensor networks, and cognitive radio wireless sensor networks, potential application areas of cognitive radio wireless sensor networks, challenges and research trend in cognitive radio wireless sensor networks. The sensing schemes suited for cognitive radio wireless sensor networks scenarios are discussed with an emphasis on cooperation and spectrum access methods that ensure the availability of the required QoS. 
Finally, this paper lists several open research challenges aimed at drawing the attention of the readers toward the important issues that need to be addressed before the vision of completely autonomous cognitive radio wireless sensor networks can be realized.", "title": "" }, { "docid": "74949417ff2ba47f153e05aac587e0dc", "text": "This review examines the descriptive epidemiology, and risk and protective factors for youth suicide and suicidal behavior. A model of youth suicidal behavior is articulated, whereby suicidal behavior ensues as a result of an interaction of socio-cultural, developmental, psychiatric, psychological, and family-environmental factors. On the basis of this review, clinical and public health approaches to the reduction in youth suicide and recommendations for further research will be discussed.", "title": "" }, { "docid": "c4c683504db4d10265c2eadd8f47107c", "text": "In this paper, an approach for industrial machine vision system is introduced for effective maintenance of inventory in order to minimize the production cost in supply chain network. The objective is to propose an efficient technique for object identification, localization, and report generation to monitor the inventory level in real time video stream based on the object appearance model. The appearance model is considered as visual signature by which individual object can be detected anywhere via camera feed. Herein, Speeded Up Robust Features (SURF) are used to identify the object. Firstly, SURF features are extracted from prototype image which refers to predefined template of individual objects, and then extracted from the camera feed of inventory i.e. scene image. Density based clustering on SURF points of prototype is done, followed by feature mapping of each cluster to SURF points in scene image. Homographic transforms are then used to obtain the location and mark the presence of objects in the scene image. Further, for better invariance to occlusion and faster computation, a novel method for tuning the hyper parameters of clustering is also proposed. The proposed methodology is found to be reliable and is able to give robust real time count of objects in inventory with invariance to scale, rotation and upto 70% of occlusion.", "title": "" }, { "docid": "6757bde927be1bf081ffd95908ebbbf3", "text": "Human action recognition has been studied in many fields including computer vision and sensor networks using inertial sensors. However, there are limitations such as spatial constraints, occlusions in images, sensor unreliability, and the inconvenience of users. In order to solve these problems we suggest a sensor fusion method for human action recognition exploiting RGB images from a single fixed camera and a single wrist mounted inertial sensor. These two different domain information can complement each other to fill the deficiencies that exist in both image based and inertial sensor based human action recognition methods. We propose two convolutional neural network (CNN) based feature extraction networks for image and inertial sensor data and a recurrent neural network (RNN) based classification network with long short term memory (LSTM) units. Training of deep neural networks and testing are done with synchronized images and sensor data collected from five individuals. The proposed method results in better performance compared to single sensor-based methods with an accuracy of 86.9% in cross-validation. 
We also verify that the proposed algorithm robustly classifies the target action when there are failures in detecting body joints from images.", "title": "" }, { "docid": "67136c5bd9277e0637393e9a131d7b53", "text": "BACKGROUND\nSynchronous written conversations (or \"chats\") are becoming increasingly popular as Web-based mental health interventions. Therefore, it is of utmost importance to evaluate and summarize the quality of these interventions.\n\n\nOBJECTIVE\nThe aim of this study was to review the current evidence for the feasibility and effectiveness of online one-on-one mental health interventions that use text-based synchronous chat.\n\n\nMETHODS\nA systematic search was conducted of the databases relevant to this area of research (Medical Literature Analysis and Retrieval System Online [MEDLINE], PsycINFO, Central, Scopus, EMBASE, Web of Science, IEEE, and ACM). There were no specific selection criteria relating to the participant group. Studies were included if they reported interventions with individual text-based synchronous conversations (ie, chat or text messaging) and a psychological outcome measure.\n\n\nRESULTS\nA total of 24 articles were included in this review. Interventions included a wide range of mental health targets (eg, anxiety, distress, depression, eating disorders, and addiction) and intervention design. Overall, compared with the waitlist (WL) condition, studies showed significant and sustained improvements in mental health outcomes following synchronous text-based intervention, and post treatment improvement equivalent but not superior to treatment as usual (TAU) (eg, face-to-face and telephone counseling).\n\n\nCONCLUSIONS\nFeasibility studies indicate substantial innovation in this area of mental health intervention with studies utilizing trained volunteers and chatbot technologies to deliver interventions. While studies of efficacy show positive post-intervention gains, further research is needed to determine whether time requirements for this mode of intervention are feasible in clinical practice.", "title": "" }, { "docid": "89e80aec26f494a83b425f8b49143ad5", "text": "This study focused on determining the barriers to effective municipal solid waste management (MSWM) in a rapidly urbanizing area in Thailand. The Tha Khon Yang Subdistrict Municipality is a representative example of many local governments in Thailand that have been facing MSWM issues. In-depth interviews with individuals and focus groups were conducted with key informants including the municipality staff, residents, and external organizations. The major influences affecting waste management were categorized into six areas: social-cultural, technical, financial, organizational, and legal-political barriers and population growth. SWOT analysis shows both internal and external factors are playing a role in MSWM: There is good policy and a reasonably sufficient budget. However, there is insufficient infrastructure, weak strategic planning, registration, staff capacity, information systems, engagement with programs; and unorganized waste management and fee collection systems. The location of flood prone areas has impacted on location and operation of landfill sites. There is also poor communication between the municipality and residents and a lack of participation in waste separation programs. However, external support from government and the nearby university could provide opportunities to improve the situation. 
These findings will help inform municipal decision makers, leading to better municipal solid waste management in newly urbanized areas.", "title": "" }, { "docid": "ca5ad8301e3a37a6d2749bb27ede1d7a", "text": "Data and connectivity between users form the core of social networks. Every status, post, friendship, tweet, re-tweet, tag or image generates a massive amount of structured and unstructured data. Deriving meaning from this data and, in particular, extracting behavior and emotions of individual users, as well as of user communities, is the goal of sentiment analysis and affective computing and represents a significant challenge. Social networks also represent a potentially infinite source of applications for both research and commercial purposes and are adaptable to many different areas, including life science. Nevertheless, collecting, sharing, storing and analyzing social networks data pose several challenges to computer scientists, such as the management of highly unstructured data, big data, and the need for real-time computation. In this paper we give a brief overview of some concrete examples of applying sentiment analysis to social networks for healthcare purposes, we present the current type of tools existing for sentiment analysis, and summarize the challenges involved in this process focusing on the role of high performance computing.", "title": "" }, { "docid": "c0e1063578db251667a995526ad27e92", "text": "Automatic skin lesion segmentation on dermoscopic images is an essential component in computer-aided diagnosis of melanoma. Recently, many fully supervised deep learning based methods have been proposed for automatic skin lesion segmentation. However, these approaches require massive pixel-wise annotation from experienced dermatologists, which is very costly and time-consuming. In this paper, we present a novel semi-supervised method for skin lesion segmentation, where the network is optimized by the weighted combination of a common supervised loss for labeled inputs only and a regularization loss for both labeled and unlabeled data. To utilize the unlabeled data, our method encourages the consistent predictions of the network-in-training for the same input under different regularizations. Aiming for the semi-supervised segmentation problem, we enhance the effect of regularization for pixel-level predictions by introducing a transformation, including rotation and flipping, consistent scheme in our self-ensembling model. With only 300 labeled training samples, our method sets a new record on the benchmark of the International Skin Imaging Collaboration (ISIC) 2017 skin lesion segmentation challenge. Such a result clearly surpasses fully-supervised state-of-the-arts that are trained with 2000 labeled data.", "title": "" }, { "docid": "f5fab7443c0d42e5714893bf768d8279", "text": "Most of the authentication and digital signature protocols assume the existence of a trusted third party either as an authentication server or certification authority. However, such servers and authorities create both security and fault intolerance bottlenecks within the protocols. This problem can be solved by combining a secret sharing scheme with authentication and digital signature protocols. 
This paper describes the difficulties to combine a secret sharing scheme with the authentication and digital signature protocols and proposes a draft solution.", "title": "" }, { "docid": "735cc7f7b067175705cb605affd7f06e", "text": "This paper presents a design, simulation, implementation and measurement of a novel microstrip meander patch antenna for the application of sensor networks. The dimension of the microstrip chip antenna is 15 mm times 15 mm times 2 mm. The meander-type radiating patch is constructed on the upper layer of the 2 mm height substrate with 0.0 5 mm height metallic conduct lines. Because of using the very high relative permittivity substrate ( epsivr=90), the proposed antenna achieves 315 MHz band operations.", "title": "" }, { "docid": "7e08ddffc3a04c6dac886e14b7e93907", "text": "The paper introduces a penalized matrix estimation procedure aiming at solutions which are sparse and low-rank at the same time. Such structures arise in the context of social networks or protein interactions where underlying graphs have adjacency matrices which are block-diagonal in the appropriate basis. We introduce a convex mixed penalty which involves `1-norm and trace norm simultaneously. We obtain an oracle inequality which indicates how the two effects interact according to the nature of the target matrix. We bound generalization error in the link prediction problem. We also develop proximal descent strategies to solve the optimization problem efficiently and evaluate performance on synthetic and real data sets.", "title": "" }, { "docid": "62f67cf8f628be029ce748121ff52c42", "text": "This paper reviews interface design of web pages for e-commerce. Different tasks in e-commerce are contrasted. A systems model is used to illustrate the information flow between three subsystems in e-commerce: store environment, customer, and web technology. A customer makes several decisions: to enter the store, to navigate, to purchase, to pay, and to keep the merchandize. This artificial environment must be designed so that it can support customer decision-making. To retain customers it must be pleasing and fun, and create a task with natural flow. Customers have different needs, competence and motivation, which affect decision-making. It may therefore be important to customize the design of the e-store environment. Future ergonomics research will have to investigate perceptual aspects, such as presentation of merchandize, and cognitive issues, such as product search and navigation, as well as decision making while considering various economic parameters. Five theories on e-commerce research are presented.", "title": "" }, { "docid": "b7c169cbb28d0cd640d85421735f132c", "text": "A switching regulator with quasi-V2 adaptive on-time (AOT) control, that provides a fast load transient response is proposed. A feed-forward path network allows the proposed switching regulator to achieve a fast transient response and stable operation without requiring an output capacitor with a large equivalent series resistance. The proposed AOT controller makes the switching frequency pseudo fixed in continuous conduction mode and works in pulse-frequency modulation mode under ultra-light load conditions. The AOT controller adjusts the on-time according to the supply voltage load current conditions. The measurement results verify that the switching regulator can operate under load currents of 5–800 mA for supply voltages of 3.3–4.2 V and an output voltage of 1.2 V. 
The recovery time is 3.6μs and the voltage drop is 55 mV when the load current is increased from 5 to 750 mA.", "title": "" }, { "docid": "f32ede03617159c0549b3475d9448096", "text": "Chatbots have rapidly become a mainstay in software development. A range of chatbots contribute regularly to the creation of actual production software. It is somewhat difficult, however, to precisely delineate hype from reality. Questions arise as to what distinguishes a chatbot from an ordinary software tool, what might be desirable properties of chatbots, and where their future may lie. This position paper introduces a starting framework through which we examine the current state of chatbots and identify directions for future work.", "title": "" }, { "docid": "3c00b42f66347e691a761c57011c919d", "text": "The most unexpected and intriguing result from functional brain imaging studies of cognitive aging is evidence for age-related overactivation: greater activation in older adults than in younger adults, even when performance is age-equivalent. Here we examine the hypothesis that age-related overactivation is compensatory and discuss the compensation-related utilization of neural circuits hypothesis (CRUNCH). We review evidence that favors a compensatory account, discuss questions about strategy differences, and consider the functions that may be served by overactive brain areas. Future research directed at neurocognitively informed training interventions may augment the potential for plasticity that persists into the later years of the human lifespan. KEYWORDS—plasticity; dedifferentiation; brain imaging; working memory Brain imaging has become a method of great importance for studying cognitive aging, which makes sense because the latter presumably results from neurobiological aging. Therefore, brain-based measurements that can be linked to cognitive processes expand the range of questions that can be addressed about the agingmind. The emerging answers have prompted new ways to think about the normal aging process and about functional brain organization across the lifespan. Before the advent of brain imaging, the behavioral methods and interpretive logic of clinical neuropsychology guided brain-based theories of cognitive aging. This approach assumes that minimal age differences in performance imply minimal alterations in underlying cognitive mechanisms and, by extension, age-invariance in the neural substrates that mediate them. In our assessment, one of themost far-reaching discoveries to have thus far emerged from brain imaging studies of aging is that this assumption is erroneous. The initial neuroimaging studies of cognitive aging, which measured brain activation via the distribution of a radioactive isotope (i.e., positron emission tomography, PET; Grady et al., 1994), noted that older adults display activation in regions that are not activated by younger adults performing the same tasks. In some studies, sites of overactivation co-occur with regions that are underactive relative to younger adults. In other studies, regions of overactivation are the only indication that older brains function differently than younger brains (for reviews, see Grady & Craik, 2000; Reuter-Lorenz, 2002). The terms overactivation and underactivation are purely relative, referring to sites that senior adults activate more or less, respectively, than their younger counterparts (Fig. 1). Overactivation is frequently observed in prefrontal sites (Cabeza et al., 2004; Reuter-Lorenz et al., 2000). 
Overactivation in seniors is often found in regions that approximately mirror active sites in younger adults but in the opposite hemisphere (e.g., Cabeza, 2002; Reuter-Lorenz et al., 2000; see the lower left panel of Fig. 1). This pattern of reduced asymmetry in older adults has been referred to as hemispheric asymmetry reduction in older age, or HAROLD for short (Cabeza, 2002). INTERPRETING OVERACTIVATION Age-related underactivation is typically interpreted as a sign of impairment due to poor or underutilized strategies or due to structural changes such as atrophy. However, the cognitive and neural mechanisms associated with age-specific regions of overactivation are more ambiguous. Determining whether overactivations are neural correlates of processes that are beneficial, detrimental, or inconsequential to cognitive function is the crux of many research efforts in the cognitive neuroscience of aging (Reuter-Lorenz & Lustig, 2005). Because overactivation has been found for a broad range of tasks, across a variety of brain regions, with or without age differences in performance, and with or without concurrent underactivation, it is highly unlikely that all instances stem from a single cause. Unsurprisingly, when overactivation is found in association with poor performance, it is interpreted as impairAddress correspondence toPatriciaA.Reuter-Lorenz,Department of Psychology, University of Michigan, 530 Church Street, Ann Arbor, MI 48109-1043; e-mail: parl@umich.edu. CURRENT DIRECTIONS IN PSYCHOLOGICAL SCIENCE Volume 17—Number 3 177 Copyright r 2008 Association for Psychological Science at UNIV OF MICHIGAN on May 12, 2015 cdp.sagepub.com Downloaded from ment and is typically attributed to any of several potentially related mechanisms: the use of multiple and/or inefficient cognitive strategies; disinhibition because communication between the left and right hemispheres declines; or dedifferentiation, whereby the specificity and selectivity of neural processors break down. In many studies, however, overactivation is accompanied by age-equivalent performance, raising the possibility that the additional activity serves a beneficial, compensatory function without which performance decrements would result (see Fig. 1). Regardless of whether performance matching is achieved by selecting younger and older subgroups that perform at equivalent levels, providing different amounts of training, adopting age-tailored stimulus parameters, or otherwise altering task demands for each age group, overactivation has been found consistently across perceptual, motoric, mnemonic, verbal, and spatial domains. The compensation hypothesis predicts that, even while performance is matched at the group level, overactivation across individuals should be correlated with higher performance in the older group. Although significant correlations may sometimes be lacking due to insufficient variability or a lack of statistical power, positive activation–performance correlations have been reported, lending support to the compensatory account of age-specific overactivations (Fig. 1; Cabeza et al., 2004; Reuter-Lorenz & Lustig, 2005). Establishing that overactive sites in older adults contribute to and are necessary for successful performance would provide especially strong support for the compensation hypothesis. Transcranial magnetic stimulation (TMS) is a technique that applies a series of focally directedmagnetic pulses to the scalp to stimulate the underlying neural tissue. 
TMS can be applied in either a deactivating or an activating mode. In the deactivating mode, TMS temporarily disrupts the underlying neural signals, producing a virtual, transient lesion. Using this mode, Rossi et al. (2005) showed that overactive sites in seniors contributed to performance success: Older adults, who typically show bilateral prefrontal activation during recognition memory, were impaired by TMS to either hemisphere, suggesting that recognition relies on both sides. Younger adults, who activate unilaterally during recognition memory, were impaired by TMS to only one side. When used in the activating mode, TMS increases the contribution of the underlying tissue. Another study found that, when TMS was applied prefrontally in the activating mode, a group of low-performing elderly showed improvement; furthermore, functional magnetic resonance imaging (fMRI) showed their brain activation to be unilateral before TMS and bilateral after TMS, in association with their improved performance (Sole-Padulles et al., 2006). COMPENSATION FORWHAT? The compensation hypothesis assumes that overactive sites in older adult brains are ‘‘working harder’’ than the corresponding regions in their younger counterparts. In the aging brain, a network may work harder, and thus overactivate, to make up either for its own declining efficiency or for processing deficiencies elsewhere in the brain. Although definitive support for the first possibility is currently lacking, such support could come from work using multiple measures to assess structural and functional integrity within the same subjects. For example, volumetric measures could reveal age-related atrophy in a region that also displays overactivation. When also coupled with preserved performance, such a pattern would suggest that increased recruitment compensates for decline (cf., Persson et al., 2006). Alternatively, a network may need to work harder and thus becomes overactive because the input it receives is degraded or compromised. By this account, overactivation is compensating for functional declines elsewhere.We see three types of evidence as being consistent with this possibility. First, several studies Fig. 1. Results typically referred to as ‘‘underactivation’’ (top) and ‘‘overactivation’’ (bottom). When older adults activate a brain region at lower levels or show a smaller extent of activation compared to younger adults, as illustrated in the top pair of images, the results are often interpreted to indicate that the older group is functionally deficient in the processing operations mediated by this region. The overactivation pattern in the bottom pair of images illustrates the hemispheric asymmetry reduction in older age (or HAROLD) effect: Younger adults show activation that is lateralized to the left hemisphere, whereas the older adults are activating homologous brain regions in the opposite hemisphere also. See Reuter-Lorenz and Lustig (2005) for examples of studies reporting these age-specific activation patterns. 178 Volume 17—Number 3 Neurocognitive Aging and the Compensation Hypothesis at UNIV OF MICHIGAN on May 12, 2015 cdp.sagepub.com Downloaded from report overactive sites accompanied by, and in some cases inversely correlated with, sites of underactivation (Reuter-Lorenz & Lustig, 2005). For example, in a study of incidental memory for complex scenes, Gutchess et al. (2005) compared the neural correlates of successfully remembered items to those of forgotten items in younger and older adults. 
Compared to the older group, successful memory in younger adults was associated with greater activation in medial temporal lobe (MTL) regions. In contrast, when older adults were successful, the prefrontal cortex was overactivated and was inversely correl", "title": "" }, { "docid": "ed6a97bacf21798dae3a19f318ebd53e", "text": "Functional magnetic resonance imaging (fMRI) is currently the mainstay of neuroimaging in cognitive neuroscience. Advances in scanner technology, image acquisition protocols, experimental design, and analysis methods promise to push forward fMRI from mere cartography to the true study of brain organization. However, fundamental questions concerning the interpretation of fMRI data abound, as the conclusions drawn often ignore the actual limitations of the methodology. Here I give an overview of the current state of fMRI, and draw on neuroimaging and physiological data to present the current understanding of the haemodynamic signals and the constraints they impose on neuroimaging data interpretation.", "title": "" }, { "docid": "c3cfe0205cb52faa34c38df800b4eade", "text": "The Dynamic Core and Global Workspace hypotheses were independently put forward to provide mechanistic and biologically plausible accounts of how brains generate conscious mental content. The Dynamic Core proposes that reentrant neural activity in the thalamocortical system gives rise to conscious experience. Global Workspace reconciles the limited capacity of momentary conscious content with the vast repertoire of long-term memory. In this paper we show the close relationship between the two hypotheses. This relationship allows for a strictly biological account of phenomenal experience and subjectivity that is consistent with mounting experimental evidence. We examine the constraints on causal analyses of consciousness and suggest that there is now sufficient evidence to consider the design and construction of a conscious artifact.", "title": "" } ]
scidocsrr
da3968bea9e56f122ae1c59688295a32
A Deep Neural Network Model for Target-based Sentiment Analysis
[ { "docid": "9e40ab33c1c9a69ddc24bf1083274a19", "text": "This paper presents a new method to identify sentiment of an aspect of an entity. It is an extension of RNN (Recursive Neural Network) that takes both dependency and constituent trees of a sentence into account. Results of an experiment show that our method significantly outperforms previous methods.", "title": "" } ]
[ { "docid": "7f19a1aa06bb21443992cb5283636d9f", "text": "Traceability is important in the food supply chain to ensure the consumerspsila food safety, especially for the fresh products. In recent years, many solutions which applied various emerging technology have been proposed to improve the traceability of fresh product. However, the traceability system needs to be customized to satisfy different requirements. The system depends on the different product properties and supply chain models. This paper proposed a RFID-enabled traceability system for live fish supply chain. The system architecture is designed according to the specific requirement gathered in the life fish processing. Likewise, it is adaptive for the small and medium enterprises. The RFID tag is put on each live fish and is regarded as the mediator which links the live fish logistic center, retail restaurants and consumers for identification. The sensors controlled by the PLC are used to collect the information in farming as well as the automatic transporting processes. The traceability information is designed to be exchanged and used on a Web-based system for farmers and consumers. The system was implemented and deployed in the live fish logistic center for trial, and the results are valuable for practical reference.", "title": "" }, { "docid": "f321ba1ee0f68612d7c463a37708a1e7", "text": "Non-orthogonal multiple access (NOMA) is a promising technique for the fifth generation mobile communication due to its high spectral efficiency. By applying superposition coding and successive interference cancellation techniques at the receiver, multiple users can be multiplexed on the same subchannel in NOMA systems. Previous works focus on subchannel assignment and power allocation to achieve the maximization of sum rate; however, the energy-efficient resource allocation problem has not been well studied for NOMA systems. In this paper, we aim to optimize subchannel assignment and power allocation to maximize the energy efficiency for the downlink NOMA network. Assuming perfect knowledge of the channel state information at base station, we propose a low-complexity suboptimal algorithm, which includes energy-efficient subchannel assignment and power proportional factors determination for subchannel multiplexed users. We also propose a novel power allocation across subchannels to further maximize energy efficiency. Since both optimization problems are non-convex, difference of convex programming is used to transform and approximate the original non-convex problems to convex optimization problems. Solutions to the resulting optimization problems can be obtained by solving the convex sub-problems iteratively. Simulation results show that the NOMA system equipped with the proposed algorithms yields much better sum rate and energy efficiency performance than the conventional orthogonal frequency division multiple access scheme.", "title": "" }, { "docid": "a979b0a02f2ade809c825b256b3c69d8", "text": "The objective of this review is to analyze in detail the microscopic structure and relations among muscular fibers, endomysium, perimysium, epimysium and deep fasciae. In particular, the multilayer organization and the collagen fiber orientation of these elements are reported. 
The endomysium, perimysium, epimysium and deep fasciae have not just a role of containment, limiting the expansion of the muscle with the disposition in concentric layers of the collagen tissue, but are fundamental elements for the transmission of muscular force, each one with a specific role. From this review it appears that the muscular fibers should not be studied as isolated elements, but as a complex inseparable from their fibrous components. The force expressed by a muscle depends not only on its anatomical structure, but also the angle at which its fibers are attached to the intramuscular connective tissue and the relation with the epimysium and deep fasciae.", "title": "" }, { "docid": "439f938d155d9ac44c1aa0981a7c7fe6", "text": "We present a novel method for constructing Variational Autoencoder (VAE). Instead of using pixel-by-pixel loss, we enforce deep feature consistency between the input and the output of a VAE, which ensures the VAE's output to preserve the spatial correlation characteristics of the input, thus leading the output to have a more natural visual appearance and better perceptual quality. Based on recent deep learning works such as style transfer, we employ a pre-trained deep convolutional neural network (CNN) and use its hidden features to define a feature perceptual loss for VAE training. Evaluated on the CelebA face dataset, we show that our model produces better results than other methods in the literature. We also show that our method can produce latent vectors that can capture the semantic information of face expressions and can be used to achieve state-of-the-art performance in facial attribute prediction.", "title": "" }, { "docid": "b0d5ec946a5c36500e3549779dc74329", "text": "Although several image quality measures have been proposed for fingerprints, no work has taken into account the differences among capture devices, and how these differences impact on the image quality. In this paper, several representative measures for assessing the quality fingerprint images are compared using an optical and a capacitive sensor. The capability to discriminate between images of different quality and its relationship with the verification performance is studied. We report differences depending on the sensor, and interesting relationships between sensor technology and features used for quality assessment are also pointed out.", "title": "" }, { "docid": "aca8b1efb729bdc45f5363cb663dba74", "text": "Along with the burst of open source projects, software theft (or plagiarism) has become a very serious threat to the healthiness of software industry. Software birthmark, which represents the unique characteristics of a program, can be used for software theft detection. We propose a system call dependence graph based software birthmark called SCDG birthmark, and examine how well it reflects unique behavioral characteristics of a program. To our knowledge, our detection system based on SCDG birthmark is the first one that is capable of detecting software component theft where only partial code is stolen. We demonstrate the strength of our birthmark against various evasion techniques, including those based on different compilers and different compiler optimization levels as well as two state-of-the-art obfuscation tools. Unlike the existing work that were evaluated through small or toy software, we also evaluate our birthmark on a set of large software. 
Our results show that SCDG birthmark is very practical and effective in detecting software theft that even adopts advanced evasion techniques.", "title": "" }, { "docid": "990c8e69811a8ebafd6e8c797b36349d", "text": "Segmentation of pulmonary X-ray computed tomography (CT) images is a precursor to most pulmonary image analysis applications. This paper presents a fully automatic method for identifying the lungs in three-dimensional (3-D) pulmonary X-ray CT images. The method has three main steps. First, the lung region is extracted from the CT images by gray-level thresholding. Then, the left and right lungs are separated by identifying the anterior and posterior junctions by dynamic programming. Finally, a sequence of morphological operations is used to smooth the irregular boundary along the mediastinum in order to obtain results consistent with those obtained by manual analysis, in which only the most central pulmonary arteries are excluded from the lung region. The method has been tested by processing 3-D CT data sets from eight normal subjects, each imaged three times at biweekly intervals with lungs at 90% vital capacity. The authors present results by comparing their automatic method to manually traced borders from two image analysts. Averaged over all volumes, the root mean square difference between the computer and human analysis is 0.8 pixels (0.54 mm). The mean intrasubject change in tissue content over the three scans was 2.75% ± 2.29% (mean ± standard deviation).", "title": "" }, { "docid": "d01321dc65ef31beedb6a92689ab91be", "text": "This paper proposes a content-constrained spatial (CCS) model to recover the mathematical layout (M-layout, or MLme) of a mathematical expression (ME) from its font setting layout (F-layout, or FLme). The M-layout can be used for content analysis applications such as ME based indexing and retrieval of documents. The first of the two-step process is to divide a compounded ME into blocks based on explicit mathematical structure primitives such as fraction lines, radical signs, fence, etc. Subscripts and superscripts within a block are resolved by probabilistic inference of their likelihood based on a global optimization model. The dual peak distributions of the features to capture the relative position between sibling blocks as super/subscript call for a sampling based non-parametric probability distribution estimation method to resolve their ambiguity. The notion of spatial constraint indicators is proposed to reduce the search space while improving the prediction performance. The proposed scheme is tested using the InftyCDB data set to achieve the F1 score of 0.98.", "title": "" }, { "docid": "d6c353e535ab936d96f821ddbf86bf47", "text": "Computer science meets every criterion for being a science, but it has a self-inflicted credibility problem.", "title": "" }, { "docid": "bc2bc8b2d9db3eb14e126c627248a66a", "text": "The growing complexity of today's software applications, in conjunction with the increasing competitive pressure, has pushed the quality assurance of developed software towards new heights. Software testing is an inevitable part of the Software Development Lifecycle, and keeping in line with its criticality in the pre and post development process makes it something that should be catered for with enhanced and efficient methodologies and techniques. 
This paper aims to discuss the existing as well as improved testing techniques for the better quality assurance purposes.", "title": "" }, { "docid": "417307155547a565d03d3f9c2a235b2e", "text": "Recent deep learning based methods have achieved the state-of-the-art performance for handwritten Chinese character recognition (HCCR) by learning discriminative representations directly from raw data. Nevertheless, we believe that the long-and-well investigated domain-specific knowledge should still help to boost the performance of HCCR. By integrating the traditional normalization-cooperated direction-decomposed feature map (directMap) with the deep convolutional neural network (convNet), we are able to obtain new highest accuracies for both online and offline HCCR on the ICDAR-2013 competition database. With this new framework, we can eliminate the needs for data augmentation and model ensemble, which are widely used in other systems to achieve their best results. This makes our framework to be efficient and effective for both training and testing. Furthermore, although directMap+convNet can achieve the best results and surpass human-level performance, we show that writer adaptation in this case is still effective. A new adaptation layer is proposed to reduce the mismatch between training and test data on a particular source layer. The adaptation process can be efficiently and effectively implemented in an unsupervised manner. By adding the adaptation layer into the pre-trained convNet, it can adapt to the new handwriting styles of particular writers, and the recognition accuracy can be further improved consistently and significantly. This paper gives an overview and comparison of recent deep learning based approaches for HCCR, and also sets new benchmarks for both online and offline HCCR.", "title": "" }, { "docid": "0c0d0b6d4697b1a0fc454b995bcda79a", "text": "Online multiplayer games, such as Gears of War and Halo, use skill-based matchmaking to give players fair and enjoyable matches. They depend on a skill rating system to infer accurate player skills from historical data. TrueSkill is a popular and effective skill rating system, working from only the winner and loser of each game. This paper presents an extension to TrueSkill that incorporates additional information that is readily available in online shooters, such as player experience, membership in a squad, the number of kills a player scored, tendency to quit, and skill in other game modes. This extension, which we call TrueSkill2, is shown to significantly improve the accuracy of skill ratings computed from Halo 5 matches. TrueSkill2 predicts historical match outcomes with 68% accuracy, compared to 52% accuracy for TrueSkill.", "title": "" }, { "docid": "26a6ba8cba43ddfd3cac0c90750bf4ad", "text": "Mobile applications usually need to be provided for more than one operating system. Developing native apps separately for each platform is a laborious and expensive undertaking. Hence, cross-platform approaches have emerged, most of them based on Web technologies. While these enable developers to use a single code base for all platforms, resulting apps lack a native look & feel. This, however, is often desired by users and businesses. Furthermore, they have a low abstraction level. We propose MD2, an approach for model-driven cross-platform development of apps. With MD2, developers specify an app in a high-level (domain-specific) language designed for describing business apps succinctly. 
From this model, purely native apps for Android and iOS are automatically generated. MD2 was developed in close cooperation with industry partners and provides means to develop data-driven apps with a native look and feel. Apps can access the device hardware and interact with remote servers.", "title": "" }, { "docid": "bd6ba64d14c8234e5ec2d07762a1165f", "text": "Since their introduction in the early years of this century, Variable Stiffness Actuators (VSA) witnessed a sustain ed growth of interest in the research community, as shown by the growing number of publications. While many consider VSA very interesting for applications, one of the factors hindering their further diffusion is the relatively new conceptual structure of this technology. In choosing a VSA for his/her application, the educated practitioner, used to choosing robot actuators based on standardized procedures and uniformly presented data, would be confronted with an inhomogeneous and rather disorganized mass of information coming mostly from scientific publications. In this paper, the authors consider how the design procedures and data presentation of a generic VS actuator could be organized so as to minimize the engineer’s effort in choosing the actuator type and size that would best fit the application needs. The reader is led through the list of the most important parameters that will determine the ultimate performance of his/her VSA robot, and influence both the mechanical design and the controller shape. This set of parameters extends the description of a traditional electric actuator with quantities describing the capability of the VSA to change its output stiffness. As an instrument for the end-user, the VSA datasheet is intended to be a compact, self-contained description of an actuator that summarizes all the salient characteristics that the user must be aware of when choosing a device for his/her application. At the end some example of compiled VSA datasheets are reported, as well as a few examples of actuator selection procedures.", "title": "" }, { "docid": "1aa5036ab014aa31845e0ff6363fd061", "text": "An improved sum-of-sinusoids simulation model is proposed for Rayleigh fading channels. The new model employs random initial phase, and conditional random Doppler frequency for all individual sinusoids. The second-order statistics of the new simulator match the desired ones exactly even if the number of sinusoids is a single-digit integer. Other key statistics of the new simulator approach the desired ones of Clarke's (1968) reference model as the number of sinusoids approaches infinity, while good convergence is achieved when the number of sinusoids is small. Moreover, the new simulator can be directly used to generate multiple uncorrelated fading waveforms; it is also pointed out that a class of 16 different simulators, which have identical statistical properties, can be developed for Rayleigh fading channels.", "title": "" }, { "docid": "9eab2aa7c4fbfadb5642b47dd08c2014", "text": "A class of matrices (H-matrices) is introduced which have the following properties. (i) They are sparse in the sense that only few data are needed for their representation. (ii) The matrix-vector multiplication is of almost linear complexity. (iii) In general, sums and products of these matrices are no longer in the same set, but their truncations to the H-matrix format are again of almost linear complexity. (iv) The same statement holds for the inverse of an H-matrix. 
This paper is the first of a series and is devoted to the first introduction of the H-matrix concept. Two concrete formats are described. The first one is the simplest possible. Nevertheless, it allows the exact inversion of tridiagonal matrices. The second one is able to approximate discrete integral operators. AMS Subject Classifications: 65F05, 65F30, 65F50.", "title": "" }, { "docid": "bd5d84f20dcf130ea8b8d621befcb0dd", "text": "The output of convolutional neural networks (CNNs) has been shown to be discontinuous which can make the CNN image classifier vulnerable to small well-tuned artificial perturbation. That is, images modified by conducting such alteration (i.e., adversarial perturbation) that make little difference to the human eyes can completely change the CNN classification results. In this paper, we propose a practical attack using differential evolution (DE) for generating effective adversarial perturbations. We comprehensively evaluate the effectiveness of different types of DEs for conducting the attack on different network structures. The proposed method only modifies five pixels (i.e., few-pixel attack), and it is a black-box attack which only requires the miracle feedback of the target CNN systems. The results show that under strict constraints which simultaneously control the number of pixels changed and overall perturbation strength, attacking can achieve 72.29%, 72.30%, and 61.28% non-targeted attack success rates, with 88.68%, 83.63%, and 73.07% confidence on average, on three common types of CNNs. The attack only requires modifying five pixels with 20.44, 14.28, and 22.98 pixel value distortion. Thus, we show that current deep neural networks are also vulnerable to such simpler black-box attacks even under very limited attack conditions.", "title": "" }, { "docid": "2f74dbe2b21f446018ecc26f399a67d2", "text": "There are hundreds of millions of tables in Web pages that contain useful information for many applications. Leveraging data within these tables is difficult because of the wide variety of structures, formats and data encoded in these tables. TabVec is an unsupervised method to embed tables into a vector space to support classification of tables into categories (entity, relational, matrix, list, and nondata) with minimal user intervention. TabVec deploys syntax and semantics of table cells, and embeds the structure of tables in a table vector space. This enables superior classification of tables even in the absence of domain annotations. Our evaluations in four real world domains show that TabVec improves classification accuracy by more than 20% compared to three state of the art systems, and that those systems require significant in-domain training to achieve good results.", "title": "" }, { "docid": "69de2f8098a0618c75baeb259cb94ca1", "text": "Medicine may stand at the cusp of a mobile transformation. Mobile health, or “mHealth,” is the use of portable devices such as smartphones and tablets for medical purposes, including diagnosis, treatment, or support of general health and well-being. Users can interface with mobile devices through software applications (“apps”) that typically gather input from interactive questionnaires, separate medical devices connected to the mobile device, or functionalities of the device itself, such as its camera, motion sensor, or microphone. Apps may even process these data with the use of medical algorithms or calculators to generate customized diagnoses and treatment recommendations. 
Mobile devices make it possible to collect more granular patient data than can be collected from devices that are typically used in hospitals or physicians’ offices. The experiences of a single patient can then be measured against large data sets to provide timely recommendations about managing both acute symptoms and chronic conditions.1,2 To give but a few examples: One app allows users who have diabetes to plug glucometers into their iPhones as it tracks insulin doses and sends alerts for abnormally high or low blood sugar levels.3,4 Another app allows patients to use their smartphones to record electrocardiograms,5 using a single lead that snaps to the back of the phone. Users can hold the phone against their chests, record cardiac events, and transmit results to their cardiologists.6 An imaging app allows users to analyze diagnostic images in multiple modalities, including positronemission tomography, computed tomography, magnetic resonance imaging, and ultrasonography.7 An even greater number of mHealth products perform health-management functions, such as medication reminders and symptom checkers, or administrative functions, such as patient scheduling and billing. The volume and variety of mHealth products are already immense and defy any strict taxonomy. More than 97,000 mHealth apps were available as of March 2013, according to one estimate.8 The number of mHealth apps, downloads, and users almost doubles every year.9 Some observers predict that by 2018 there could be 1.7 billion mHealth users worldwide.8 Thus, mHealth technologies could have a profound effect on patient care. However, mHealth has also become a challenge for the Food and Drug Administration (FDA), the regulator responsible for ensuring that medical devices are safe and effective. The FDA’s oversight of mHealth devices has been controversial to members of Congress and industry,10 who worry that “applying a complex regulatory framework could inhibit future growth and innovation in this promising market.”11 But such oversight has become increasingly important. A bewildering array of mHealth products can make it difficult for individual patients or physicians to evaluate their quality or utility. In recent years, a number of bills have been proposed in Congress to change FDA jurisdiction over mHealth products, and in April 2014, a key federal advisory committee laid out its recommendations for regulating mHealth and other health-information technologies.12 With momentum toward legislation building, this article focuses on the public health benefits and risks of mHealth devices under FDA jurisdiction and considers how to best use the FDA’s authority.", "title": "" }, { "docid": "e14420212ec11882cc71a57fd68cbb08", "text": "Organizational ambidexterity refers to the ability of an organization to both explore and exploit—to compete in mature technologies and markets where efficiency, control, and incremental improvement are prized and to also compete in new technologies and markets where flexibility, autonomy, and experimentation are needed. In the past 15 years there has been an explosion of interest and research on this topic. We briefly review the current state of the research, highlighting what we know and don’t know about the topic. We close with a point of view on promising areas for ongoing research.", "title": "" } ]
scidocsrr
165bdfb05398927db2de0843efe6ef22
IMPACT OF EXPOSURE TO VIOLENCE IN SCHOOL ON CHILD AND ADOLESCENT MENTAL HEALTH AND BEHAVIOR By :
[ { "docid": "32b5458ced294a01654f3747273db08d", "text": "Prior studies of childhood aggression have demonstrated that, as a group, boys are more aggressive than girls. We hypothesized that this finding reflects a lack of research on forms of aggression that are relevant to young females rather than an actual gender difference in levels of overall aggressiveness. In the present study, a form of aggression hypothesized to be typical of girls, relational aggression, was assessed with a peer nomination instrument for a sample of 491 third-through sixth-grade children. Overt aggression (i.e., physical and verbal aggression as assessed in past research) and social-psychological adjustment were also assessed. Results provide evidence for the validity and distinctiveness of relational aggression. Further, they indicated that, as predicted, girls were significantly more relationally aggressive than were boys. Results also indicated that relationally aggressive children may be at risk for serious adjustment difficulties (e.g., they were significantly more rejected and reported significantly higher levels of loneliness, depression, and isolation relative to their nonrelationally aggressive peers).", "title": "" } ]
[ { "docid": "270e17a746f738bf5ad4ffd7cafeac5a", "text": "How can we design reinforcement learning agents that avoid causing unnecessary disruptions to their environment? We argue that current approaches to penalizing side effects can introduce bad incentives in tasks that require irreversible actions, and in environments that contain sources of change other than the agent. For example, some approaches give the agent an incentive to prevent any irreversible changes in the environment, including the actions of other agents. We introduce a general definition of side effects, based on relative reachability of states compared to a default state, that avoids these undesirable incentives. Using a set of gridworld experiments illustrating relevant scenarios, we empirically compare relative reachability to penalties based on existing definitions and show that it is the only penalty among those tested that produces the desired behavior in all the scenarios.", "title": "" }, { "docid": "4f3f3873e8eb89f0665fbeb456fbf477", "text": "STUDY DESIGN\nControlled laboratory study.\n\n\nOBJECTIVES\nTo clarify whether differences in surface stability influence trunk muscle activity.\n\n\nBACKGROUND\nLumbar stabilization exercises on unstable surfaces are performed widely. One perceived advantage in performing stabilization exercises on unstable surfaces is the potential for increased muscular demand. However, there is little evidence in the literature to help establish whether this assumption is correct.\n\n\nMETHODS\nNine healthy male subjects performed lumbar stabilization exercises. Pairs of intramuscular fine-wire or surface electrodes were used to record the electromyographic signal amplitude of the rectus abdominis, the external obliques, the transversus abdominis, the erector spinae, and lumbar multifidus. Five exercises were performed on the floor and on an unstable surface: elbow-toe, hand-knee, curl-up, side bridge, and back bridge. The EMG data were normalized as the percentage of the maximum voluntary contraction, and data between doing each exercise on the stable versus unstable surface were compared using a Wilcoxon signed-rank test.\n\n\nRESULTS\nWith the elbow-toe exercise, the activity level for all muscles was enhanced when performed on the unstable surface. When performing the hand-knee and side bridge exercises, activity level of the more global muscles was enhanced when performed on an unstable surface. Performing the curl-up exercise on an unstable surface, increased the activity of the external obliques but reduced transversus abdominis activation.\n\n\nCONCLUSION\nThis study indicates that lumbar stabilization exercises on an unstable surface enhanced the activities of trunk muscles, except for the back bridge exercise.", "title": "" }, { "docid": "499a37563d171054ad0b0d6b8f7007bf", "text": "For cold-start recommendation, it is important to rapidly profile new users and generate a good initial set of recommendations through an interview process --- users should be queried adaptively in a sequential fashion, and multiple items should be offered for opinion solicitation at each trial. In this work, we propose a novel algorithm that learns to conduct the interview process guided by a decision tree with multiple questions at each split. The splits, represented as sparse weight vectors, are learned through an L_1-constrained optimization framework. The users are directed to child nodes according to the inner product of their responses and the corresponding weight vector. 
More importantly, to account for the variety of responses coming to a node, a linear regressor is learned within each node using all the previously obtained answers as input to predict item ratings. A user study, preliminary but first in its kind in cold-start recommendation, is conducted to explore the efficient number and format of questions being asked in a recommendation survey to minimize user cognitive efforts. Quantitative experimental validations also show that the proposed algorithm outperforms state-of-the-art approaches in terms of both the prediction accuracy and user cognitive efforts.", "title": "" }, { "docid": "6018c84c0e5666b5b4615766a5bb98a9", "text": "We introduce instancewise feature selection as a methodology for model interpretation. Our method is based on learning a function to extract a subset of features that are most informative for each given example. This feature selector is trained to maximize the mutual information between selected features and the response variable, where the conditional distribution of the response variable given the input is the model to be explained. We develop an efficient variational approximation to the mutual information, and show the effectiveness of our method on a variety of synthetic and real data sets using both quantitative metrics and human evaluation.", "title": "" }, { "docid": "768e9846a82567a5f29f653f1a86f0d1", "text": "In SDN, forwarding rules are frequently updated to adapt to network dynamics. During the procedure, path consistency needs to be preserved; otherwise, in-flight packets might meet with forwarding errors such as loops and black holes. Despite a large number of suggestions have been proposed, they take either a long duration or have high rule-space overheads, thus fail to be practical for large-scale high dynamic networks. In this paper, we propose FLUS, a Segment Routing (SR) based mechanism, to achieve fast and lightweight path updates. Basically, when a route needs a change, FLUS instantly employs SR to construct its desired new path by concatenating some fragments of the already existing paths. After the actual paths are established, FLUS then shifts incoming packets to them and disables the transitional ones. Such a design helps packets enjoy their new paths immediately without introducing rule-space overheads. This paper presents FLUS's segment allocation, path construction, and the corresponding optimal algorithms in detail. Our evaluation based on real and synthesized networks shows: FLUS can handle up to 92-100% updates using SR in real-time and save 72-88% rule overhead compared to prior methods.", "title": "" }, { "docid": "10fe57a315eac9f62698b87f84f8222b", "text": "This study presented a prestressed soft gripper fabricated with 3-D printing technology. The gripper can realize a large contact area while grasping and simultaneously generate large initial opening without deflating the soft actuators. The soft actuator was 3-D printed as two separate parts: the soft chambers with a rigid connector and a cover to seal the chambers. The chamber part was stretched longitudinally and sealed by gluing the cover onto it. The actuator was then released, and an initial curl occurred due to the remaining prestress. Finite element (FE) simulations were performed to validate this concept and the designed structure. Actuator fabrication and experimental tests were presented, and agreements between the FE simulations and test results were achieved. 
A gripper consisting of four prestressed actuators was constructed and experimentally tested by picking-and-placing food materials in different weights and different sized containers. To adapt to objects of different sizes and shapes, the gripper base was designed to have two configurations and two openings. The results showed that the prestressed gripper could stably handle various types of food and still remain compact with a simple supporting system.", "title": "" }, { "docid": "02246f67af201eb7cfb5536246c089c0", "text": "With the emergence of web-based social and information applications, entity similarity search in information networks, aiming to find entities with high similarity to a given query entity, has gained wide attention. However, due to the diverse semantic meanings in heterogeneous information networks, which contain multi-typed entities and relationships, similarity measurement can be ambiguous without context. In this paper, we investigate entity similarity search and the resulting ambiguity problems in heterogeneous information networks. We propose to use a meta-path-based ranking model ensemble to represent semantic meanings for similarity queries, and exploit the possibility of using user-guidance to understand users' queries. Experiments on real-world datasets show that our framework significantly outperforms competitor methods.", "title": "" }, { "docid": "4d0a51bbc27ff1625d0b2d50f072526f", "text": "Realistic image forgeries involve a combination of splicing, resampling, cloning, region removal and other methods. While resampling detection algorithms are effective in detecting splicing and resampling, copy-move detection algorithms excel in detecting cloning and region removal. In this paper, we combine these complementary approaches in a way that boosts the overall accuracy of image manipulation detection. We use the copy-move detection method as a pre-filtering step and pass those images that are classified as untampered to a deep learning based resampling detection framework. Experimental results on various datasets including the 2017 NIST Nimble Challenge Evaluation dataset comprising nearly 10,000 pristine and tampered images show that there is a consistent increase of 8%-10% in detection rates, when the copy-move algorithm is combined with different resampling detection algorithms. Introduction Fake images are becoming a growing threat to information reliability. With the ubiquitous availability of various powerful image editing software tools and smartphone apps such as Photoshop, GIMP, Snapseed and Pixlr, it has become very trivial to manipulate digital images. The field of Digital Image Forensics aims to develop tools that can identify the authenticity of digital images and localize regions in an image which have been tampered with. There are many types of image forgeries such as splicing objects from one image to another, removing objects or regions from images, creating copies of objects in the same image, and more. To detect these forgeries, researchers have proposed methods based on several techniques such as JPEG compression artifacts, resampling detection, lighting artifacts, noise inconsistencies, camera sensor noise, and many more. However, most techniques in literature focus on a specific type of manipulation or a group of similar tamper operations. In realistic scenarios, a host of operations are applied when creating tampered images. 
For example, when an object is spliced onto an image, it is often accompanied by other operations such as scaling, rotation, smoothing, contrast enhancement, and more. Very few studies address these challenging scenarios with the aid of Image Forensics challenges and competitions such as IEEE Image Forensics challenge [1] and the recent NIST Nimble Media Forensics challenge [2]. These competitions try to mimic a realistic scenario and contain a large number of doctored images which involve several types of image manipulations. In order to detect the tampered images, a single detection method will not be sufficient to identify the different types of manipulations. In this paper, we demonstrate the importance of combining forgery detection algorithms, especially when the features are complementary, to boost the image manipulation detection rates. We propose a simple method to identify realistic forgeries by fusing two complementary approaches: resampling detection and copy-move detection. Our experimental results show the approach is promising and achieves an increase in detection rates. Image forgeries are usually created by splicing a portion of an image onto some other image. In the case of splicing or object removal, the tampered region is often scaled or rotated to make it proportional to the neighboring untampered area. This creates resampling of the image grid and detection of resampling indicates evidence of image manipulation. Several techniques have been proposed to detect resampling in digital images [3, 4, 5, 6, 7, 8, 9]. Similarly, copy-move forgeries are common, where a part of the image is copied and pasted on another part generally to conceal unwanted portions of the image. Detection of these copied parts indicates evidence of tampering [10, 11, 12, 13, 14, 15, 16, 17]. In this paper, we combine our previous work on resampling forgery detection [18] with a dense-field based copy-move forgery detection method developed by Cozzolino et al. [16] to assign a manipulation confidence score. We demonstrate that our algorithm is effective at detecting many different types of image tampering that can be used to verify the authenticity of digital images. In [18], we designed a detector based on Radon transform and deep learning. The detector found image artifacts imposed by classic upsampling, downsampling, clockwise and counter clockwise rotations, and shearing methods. We combined these five different resampling detectors with a JPEG compression detector and for each of the six detectors we output a heatmap which indicates the regions of resampling anomalies. The generated heatmaps were smoothed to localize the detection and determine the detection score. In this work, we combine the above approach with a copy-move forgery detector [16]. Our experiments demonstrate that the resampling features are complementary to the copy-move features.", "title": "" }, { "docid": "ef8c4dfa058106cc38d9497c2713b5c6", "text": "Of current interest are the causal attributions offered by depressives for the good and bad events in their lives. One important attributional account of depression is the reformulated learned helplessness model, which proposes that depressive symptoms are associated with an attributional style in which uncontrollable bad events are attributed to internal (versus external), stable (versus unstable), and global (versus specific) causes. 
We describe the Attributional Style Questionnaire, which measures individual differences in the use of these attributional dimensions. We report means, reliabilities, intercorrelations, and test-retest stabilities for a sample of 130 undergraduates. Evidence for the questionnaire's validity is discussed. The Attributional Style Questionnaire promises to be a reliable and valid instrument.", "title": "" }, { "docid": "9e9be149fc44552b6ac9eb2d90d4a4ba", "text": "In this work, a level set energy for segmenting the lungs from digital Posterior-Anterior (PA) chest x-ray images is presented. The primary challenge in using active contours for lung segmentation is local minima due to shading effects and presence of strong edges due to the rib cage and clavicle. We have used the availability of good contrast at the lung boundaries to extract a multi-scale set of edge/corner feature points and drive our active contour model using these features. We found these features when supplemented with a simple region based data term and a shape term based on the average lung shape, able to handle the above local minima issues. The algorithm was tested on 1130 clinical images, giving promising results.", "title": "" }, { "docid": "aa3be1c132e741d2c945213cfb0d96ad", "text": "Collaborative filtering (CF) is one of the most successful recommendation approaches. It typically associates a user with a group of like-minded users based on their preferences over all the items, and recommends to the user those items enjoyed by others in the group. However we find that two users with similar tastes on one item subset may have totally different tastes on another set. In other words, there exist many user-item subgroups each consisting of a subset of items and a group of like-minded users on these items. It is more natural to make preference predictions for a user via the correlated subgroups than the entire user-item matrix. In this paper, to find meaningful subgroups, we formulate the Multiclass Co-Clustering (MCoC) problem and propose an effective solution to it. Then we propose an unified framework to extend the traditional CF algorithms by utilizing the subgroups information for improving their top-N recommendation performance. Our approach can be seen as an extension of traditional clustering CF models. Systematic experiments on three real world data sets have demonstrated the effectiveness of our proposed approach.", "title": "" }, { "docid": "cff44da2e1038c8e5707cdde37bc5461", "text": "Visual analytics emphasizes sensemaking of large, complex datasets through interactively exploring visualizations generated by statistical models. For example, dimensionality reduction methods use various similarity metrics to visualize textual document collections in a spatial metaphor, where similarities between documents are approximately represented through their relative spatial distances to each other in a 2D layout. This metaphor is designed to mimic analysts' mental models of the document collection and support their analytic processes, such as clustering similar documents together. However, in current methods, users must interact with such visualizations using controls external to the visual metaphor, such as sliders, menus, or text fields, to directly control underlying model parameters that they do not understand and that do not relate to their analytic process occurring within the visual metaphor. 
In this paper, we present the opportunity for a new design space for visual analytic interaction, called semantic interaction, which seeks to enable analysts to spatially interact with such models directly within the visual metaphor using interactions that derive from their analytic process, such as searching, highlighting, annotating, and repositioning documents. Further, we demonstrate how semantic interactions can be implemented using machine learning techniques in a visual analytic tool, called ForceSPIRE, for interactive analysis of textual data within a spatial visualization. Analysts can express their expert domain knowledge about the documents by simply moving them, which guides the underlying model to improve the overall layout, taking the user's feedback into account.", "title": "" }, { "docid": "33cce2750db6e1f680e8a6a2c89ad30a", "text": "Present theories of visual recognition emphasize the role of interactive processing across populations of neurons within a given network, but the nature of these interactions remains unresolved. In particular, data describing the sufficiency of feedforward algorithms for conscious vision and studies revealing the functional relevance of feedback connections to the striate cortex seem to offer contradictory accounts of visual information processing. TMS is a good method to experimentally address this issue, given its excellent temporal resolution and its capacity to establish causal relations between brain function and behavior. We studied 20 healthy volunteers in a visual recognition task. Subjects were briefly presented with images of animals (birds or mammals) in natural scenes and were asked to indicate the animal category. MRI-guided stereotaxic single TMS pulses were used to transiently disrupt striate cortex function at different times after image onset (SOA). Visual recognition was significantly impaired when TMS was applied over the occipital pole at SOAs of 100 and 220 msec. The first interval has consistently been described in previous TMS studies and is explained as the interruption of the feedforward volley of activity. Given the late latency and discrete nature of the second peak, we hypothesize that it represents the disruption of a feedback projection to V1, probably from other areas in the visual network. These results provide causal evidence for the necessity of recurrent interactive processing, through feedforward and feedback connections, in visual recognition of natural complex images.", "title": "" }, { "docid": "6c8445b5fec9022a968d3551efb8972b", "text": "Face Recognition by a robot or machine is one of the challenging research topics in the recent years. It has become an active research area which crosscuts several disciplines such as image processing, pattern recognition, computer vision, neural networks and robotics. For many applications, the performances of face recognition systems in controlled environments have achieved a satisfactory level. However, there are still some challenging issues to address in face recognition under uncontrolled conditions. The variation in illumination is one of the main challenging problems that a practical face recognition system needs to deal with. It has been proven that in face recognition, differences caused by illumination variations are more significant than differences between individuals (Adini et al., 1997). Various methods have been proposed to solve the problem. 
These methods can be classified into three categories, named face and illumination modeling, illumination invariant feature extraction and preprocessing and normalization. In this chapter, an extensive and state-of-the-art study of existing approaches to handle illumination variations is presented. Several latest and representative approaches of each category are presented in detail, as well as the comparisons between them. Moreover, to deal with complex environment where illumination variations are coupled with other problems such as pose and expression variations, a good feature representation of human face should not only be illumination invariant, but also robust enough against pose and expression variations. Local binary pattern (LBP) is such a local texture descriptor. In this chapter, a detailed study of the LBP and its several important extensions is carried out, as well as its various combinations with other techniques to handle illumination invariant face recognition under a complex environment. By generalizing different strategies in handling illumination variations and evaluating their performances, several promising directions for future research have been suggested. This chapter is organized as follows. Several famous methods of face and illumination modeling are introduced in Section 2. In Section 3, latest and representative approaches of illumination invariant feature extraction are presented in detail. More attentions are paid on quotient-image-based methods. In Section 4, the normalization methods on discarding low frequency coefficients in various transformed domains are introduced with details. In Section 5, a detailed introduction of the LBP and its several important extensions is presented, as well as its various combinations with other face recognition techniques. In Section 6, comparisons between different methods and discussion of their advantages and disadvantages are presented. Finally, several promising directions as the conclusions are drawn in Section 7.", "title": "" }, { "docid": "36ab3ee6219943e0af69995e4380f5ae", "text": "This paper introduces context digests, high-dimensional real-valued representations for the typical left and right contexts of a word. Initial entries for the context digests are formed from the word’s close left and right neighbors. A singular value decomposition reduces the dimensionality of the space to enable subsequent efficient processing. In contrast to similar techniques, no preprocessor such as a parser is required. Context digests summarize both syntagmatic and paradigmatic relations between words: how typical they are as neighbors and how well they are substitutable for each other. We apply context digests to identifying collocations, to assessing the similarity of the arguments of different verbs, and to clustering occurrences of adjectives and verbs according to the words they modify in context.", "title": "" }, { "docid": "429eea5acf13bd4e19b4f34ef4c79fe7", "text": "We present a study where human neurophysiological signals are used as implicit feedback to alter the behavior of a deep learning based autonomous driving agent in a simulated virtual environment.", "title": "" }, { "docid": "a57aa7ff68f7259a9d9d4d969e603dcd", "text": "Society has changed drastically over the last few years. But this is nothing new, or so it appears. Societies are always changing, just as people are always changing. And seeing as it is the people who form the societies, a constantly changing society is only natural. 
However something more seems to have happened over the last few years. Without wanting to frighten off the reader straight away, we can point to a diversity of social developments that indicate that the changes seem to be following each other faster, especially over the last few decades. We can for instance, point to the pluralisation (or a growing versatility), differentialisation and specialisation of society as a whole. On a more personal note, we see the diversification of communities, an emphasis on emancipation, individualisation and post-materialism and an increasing wish to live one's life as one wishes, free from social, religious or ideological contexts.", "title": "" }, { "docid": "6b5950c88c8cb414a124e74e9bc2ed00", "text": "As most regular readers of this TRANSACTIONS know, the development of digital signal processing techniques for applications involving image or picture data has been an increasingly active research area for the past decade. Collectively, t h s work is normally characterized under the generic heading “digital image processing.” Interestingly, the two books under review here share this heading as their title. Both are quite ambitious undertakings in that they attempt to integrate contributions from many disciplines (classical systems theory, digital signal processing, computer science, statistical communications, etc.) into unified, comprehensive presentations. In this regard it can be said that both are to some extent successful, although in quite different ways. Why the unusual step of a joint review? A brief overview of the two books reveals that they share not only a common title, but also similar objectives/purposes, intended audiences, structural organizations, and lists of topics considered. A more careful study reveals that substantial differences do exist, however, in the style and depth of subject treatment (as reflected in the difference in their lengths). Given their almost simultaneous publication, it seems appropriate to discuss these similarities/differences in a common setting. After much forethought (and two drafts), the reviewer decided to structure this review by describing the general topical material in their (joint) major sections, with supplementary comments directed toward the individual texts. It is hoped that this will provide the reader with a brief survey of the books’ contents and some flavor of their contrasting approaches. To avoid the identity problems of the joint title, each book will be subsequently referred to using the respective authors’ names: Gonzalez/Wintz and Pratt. Subjects will be correlated with chapter number(s) and approximate l ngth of coverage.", "title": "" }, { "docid": "13b8913735e970b824b4fbcfd389cb1a", "text": "LLC series resonant converters for consumer or industrial electronics frequently encounter considerable changes in both input voltage and load current requirements. This paper presents theoretical and practical details involved with the dynamic analysis and control design of LLC series resonant dc-to-dc converters operating with wide input and load variations. The accuracy of dynamic analysis and validity of control design are confirmed with both computer simulations and experimental measurements.", "title": "" }, { "docid": "ada767dd9d0b01c26d8c1e8b461d2b9d", "text": "Representation sharing can reduce the memory footprint of a program by sharing one representation between duplicate terms. 
The most common implementation of representation sharing in functional programming systems is known as hash-consing. In the context of Prolog, representation sharing has been given little attention. Some current techniques that deal with representation sharing are reviewed. The new contributions are: (1) an easy implementation of input sharing for findall/3; (2) a description of a sharer module that introduces representation sharing at runtime. Their realization is shown in the context of the WAM as implemented by hProlog. Both can be adapted to any WAM-like Prolog implementation. The sharer works independently of the garbage collector, but it can be made to cooperate with the garbage collector. Benchmark results show that the sharer has a cost comparable to the heap garbage collector, that its effectiveness is highly application dependent, and that its policy must be tuned to the collector.", "title": "" } ]
scidocsrr
0ff3d0a8db58f8ad6be35e0e2f1aca60
Is Faster R-CNN Doing Well for Pedestrian Detection?
[ { "docid": "c9b6f91a7b69890db88b929140f674ec", "text": "Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.", "title": "" } ]
[ { "docid": "b5009853d22801517431f46683b235c2", "text": "Artificial intelligence (AI) is the study of how to make computers do things which, at the moment, people do better. Thus Strong AI claims that in near future we will be surrounded by such kinds of machine which can completely works like human being and machine could have human level intelligence. One intention of this article is to excite a broader AI audience about abstract algorithmic information theory concepts, and conversely to inform theorists about exciting applications to AI.The science of Artificial Intelligence (AI) might be defined as the construction of intelligent systems and their analysis.", "title": "" }, { "docid": "260c12152d9bd38bd0fde005e0394e17", "text": "On the initiative of the World Health Organization, two meetings on the Standardization of Reporting Results of Cancer Treatment have been held with representatives and members of several organizations. Recommendations have been developed for standardized approaches to the recording of baseline data relating to the patient, the tumor, laboratory and radiologic data, the reporting of treatment, grading of acute and subacute toxicity, reporting of response, recurrence and disease-free interval, and reporting results of therapy. These recommendations, already endorsed by a number of organizations, are proposed for international acceptance and use to make it possible for investigators to compare validly their results with those of others.", "title": "" }, { "docid": "71a65ff432ae4b53085ca5c923c29a95", "text": "Data provenance is essential for debugging query results, auditing data in cloud environments, and explaining outputs of Big Data analytics. A well-established technique is to represent provenance as annotations on data and to instrument queries to propagate these annotations to produce results annotated with provenance. However, even sophisticated optimizers are often incapable of producing efficient execution plans for instrumented queries, because of their inherent complexity and unusual structure. Thus, while instrumentation enables provenance support for databases without requiring any modification to the DBMS, the performance of this approach is far from optimal. In this work, we develop provenancespecific optimizations to address this problem. Specifically, we introduce algebraic equivalences targeted at instrumented queries and discuss alternative, equivalent ways of instrumenting a query for provenance capture. Furthermore, we present an extensible heuristic and cost-based optimization (CBO) framework that governs the application of these optimizations and implement this framework in our GProM provenance system. Our CBO is agnostic to the plan space shape, uses a DBMS for cost estimation, and enables retrofitting of optimization choices into existing code by adding a few LOC. Our experiments confirm that these optimizations are highly effective, often improving performance by several orders of magnitude for diverse provenance tasks.", "title": "" }, { "docid": "ee3b2a97f01920ccbc653f4833820ca0", "text": "Notwithstanding many years of progress, pedestrian recognition is still a difficult but important problem. We present a novel multilevel Mixture-of-Experts approach to combine information from multiple features and cues with the objective of improved pedestrian classification. On pose-level, shape cues based on Chamfer shape matching provide sample-dependent priors for a certain pedestrian view. 
On modality-level, we represent each data sample in terms of image intensity, (dense) depth, and (dense) flow. On feature-level, we consider histograms of oriented gradients (HOG) and local binary patterns (LBP). Multilayer perceptrons (MLP) and linear support vector machines (linSVM) are used as expert classifiers. Experiments are performed on a unique real-world multi-modality dataset captured from a moving vehicle in urban traffic. This dataset has been made public for research purposes. Our results show a significant performance boost of up to a factor of 42 in reduction of false positives at constant detection rates of our approach compared to a baseline intensity-only HOG/linSVM approach.", "title": "" }, { "docid": "251bf66c8f742ceafc91ef92dc28085b", "text": "Recently, Altug and Wagner [1] posed a question regarding the optimal behavior of the probability of error when channel coding rate converges to the capacity sufficiently slowly. They gave a sufficient condition for the discrete memoryless channel (DMC) to satisfy a moderate deviation property (MDP) with the constant equal to the channel dispersion. Their sufficient condition excludes some practically interesting channels, such as the binary erasure channel and the Z-channel. We extend their result in two directions. First, we show that a DMC satisfies MDP if and only if its channel dispersion is nonzero. Second, we prove that the AWGN channel also satisfies MDP with a constant equal to the channel dispersion. While the methods used by Altug and Wagner are based on the method of types and other DMC-specific ideas, our proofs (in both achievability and converse parts) rely on the tools from our recent work [2] on finite-blocklength regime that are equally applicable to non-discrete channels and channels with memory.", "title": "" }, { "docid": "44bbc67f44f4f516db97b317ae16a22a", "text": "Although the number of occupational therapists working in mental health has dwindled, the number of people who need our services has not. In our tendency to cling to a medical model of service provision, we have allowed the scope and content of our services to be limited to what has been supported within this model. A social model that stresses functional adaptation within the community, exemplified in psychosocial rehabilitation, offers a promising alternative. A strongly proactive stance is needed if occupational therapists are to participate fully. Occupational therapy can survive without mental health specialists, but a large and deserving population could ultimately be deprived of a valuable service.", "title": "" }, { "docid": "781fbf087201e480899f8bfb7e0e1838", "text": "The term \"Ehlers-Danlos syndrome\" (EDS) groups together an increasing number of heritable connective tissue disorders mainly featuring joint hypermobility and related complications, dermal dysplasia with abnormal skin texture and repair, and variable range of the hollow organ and vascular dysfunctions. Although the nervous system is not considered a primary target of the underlying molecular defect, recently, increasing attention has been posed on neurological manifestations of EDSs, such as musculoskeletal pain, fatigue, headache, muscle weakness and paresthesias. Here, a comprehensive overview of neurological findings of these conditions is presented primarily intended for the clinical neurologist. 
Features are organized under various subheadings, including pain, fatigue, headache, stroke and cerebrovascular disease, brain and spine structural anomalies, epilepsy, muscular findings, neuropathy and developmental features. The emerging picture defines a wide spectrum of neurological manifestations that are unexpectedly common and potentially disabling. Their evaluation and correct interpretation by the clinical neurologist is crucial for avoiding superfluous investigations, wrong therapies, and inappropriate referral. A set of basic tools for patient's recognition is offered for raising awareness among neurologists on this underdiagnosed group of hereditary disorders.", "title": "" }, { "docid": "f51d5eb0e569606aa4fc9a87521dfd9f", "text": "This article proposes LA-LDA, a location-aware probabilistic generative model that exploits location-based ratings to model user profiles and produce recommendations. Most of the existing recommendation models do not consider the spatial information of users or items; however, LA-LDA supports three classes of location-based ratings, namely spatial user ratings for nonspatial items, nonspatial user ratings for spatial items, and spatial user ratings for spatial items. LA-LDA consists of two components, ULA-LDA and ILA-LDA, which are designed to take into account user and item location information, respectively. The component ULA-LDA explicitly incorporates and quantifies the influence from local public preferences to produce recommendations by considering user home locations, whereas the component ILA-LDA recommends items that are closer in both taste and travel distance to the querying users by capturing item co-occurrence patterns, as well as item location co-occurrence patterns. The two components of LA-LDA can be applied either separately or collectively, depending on the available types of location-based ratings. To demonstrate the applicability and flexibility of the LA-LDA model, we deploy it to both top-k recommendation and cold start recommendation scenarios. Experimental evidence on large-scale real-world data, including the data from Gowalla (a location-based social network), DoubanEvent (an event-based social network), and MovieLens (a movie recommendation system), reveal that LA-LDA models user profiles more accurately by outperforming existing recommendation models for top-k recommendation and the cold start problem.", "title": "" }, { "docid": "7790f5dc699dc264d7be6f7376597867", "text": "The CNN-encoding of features from entire videos for the representation of human actions has rarely been addressed. Instead, CNN work has focused on approaches to fuse spatial and temporal networks, but these were typically limited to processing shorter sequences. We present a new video representation, called temporal linear encoding (TLE) and embedded inside of CNNs as a new layer, which captures the appearance and motion throughout entire videos. It encodes this aggregated information into a robust video feature representation, via end-to-end learning. Advantages of TLEs are: (a) they encode the entire video into a compact feature representation, learning the semantics and a discriminative feature space, (b) they are applicable to all kinds of networks like 2D and 3D CNNs for video classification, and (c) they model feature interactions in a more expressive way and without loss of information. We conduct experiments on two challenging human action datasets: HMDB51 and UCF101. 
The experiments show that TLE outperforms current state-of-the-art methods on both datasets.", "title": "" }, { "docid": "6e878dbb176ea3a18190a8ab8177425a", "text": "We present a new computing machine, called an active element machine (AEM), and the AEM programming language. This computing model is motivated by the positive aspects of dendritic integration, inspired by biology, and traditional programming languages based on the register machine. Distinct from the traditional register machine, the fundamental computing elements – active elements – compute simultaneously. Distinct from traditional programming languages, all active element commands have an explicit reference to time. These attributes make the AEM an inherently parallel machine and enable the AEM to change its architecture (program) as it is executing its program.", "title": "" }, { "docid": "631cd44345606641454e9353e071f2c5", "text": "Microblogs are rich sources of information because they provide platforms for users to share their thoughts, news, information, activities, and so on. Twitter is one of the most popular microblogs. Twitter users often use hashtags to mark specific topics and to link them with related tweets. In this study, we investigate the relationship between the music listening behaviors of Twitter users and a popular music ranking service by comparing information extracted from tweets with music-related hashtags and the Billboard chart. We collect users' music listening behavior from Twitter using music-related hashtags (e.g., #nowplaying). We then build a predictive model to forecast the Billboard rankings and hit music. The results show that the numbers of daily tweets about a specific song and artist can be effectively used to predict Billboard rankings and hits. This research suggests that users' music listening behavior on Twitter is highly correlated with general music trends and could play an important role in understanding consumers' music consumption patterns. In addition, we believe that Twitter users' music listening behavior can be applied in the field of Music Information Retrieval (MIR).", "title": "" }, { "docid": "26d7cf1e760e9e443f33ebd3554315b6", "text": "The arrival of a multinational corporation often looks like a death sentence to local companies in an emerging market. After all, how can they compete in the face of the vast financial and technological resources, the seasoned management, and the powerful brands of, say, a Compaq or a Johnson & Johnson? But local companies often have more options than they might think, say the authors. Those options vary, depending on the strength of globalization pressures in an industry and the nature of a company's competitive assets. In the worst case, when globalization pressures are strong and a company has no competitive assets that it can transfer to other countries, it needs to retreat to a locally oriented link within the value chain. But if globalization pressures are weak, the company may be able to defend its market share by leveraging the advantages it enjoys in its home market. Many companies in emerging markets have assets that can work well in other countries. Those that operate in industries where the pressures to globalize are weak may be able to extend their success to a limited number of other markets that are similar to their home base. And those operating in global markets may be able to contend head-on with multinational rivals. 
By better understanding the relationship between their company's assets and the industry they operate in, executives from emerging markets can gain a clearer picture of the options they really have when multinationals come to stay.", "title": "" }, { "docid": "c04dd7ccb0426ef5d44f0420d321904d", "text": "In this paper, we introduce a new convolutional layer named the Temporal Gaussian Mixture (TGM) layer and present how it can be used to efficiently capture temporal structure in continuous activity videos. Our layer is designed to allow the model to learn a latent hierarchy of sub-event intervals. Our approach is fully differentiable while relying on a significantly less number of parameters, enabling its end-to-end training with standard backpropagation. We present our convolutional video models with multiple TGM layers for activity detection. Our experiments on multiple datasets including Charades and MultiTHUMOS confirm the benefit of our TGM layers, illustrating that it outperforms other models and temporal convolutions.", "title": "" }, { "docid": "4e8c39eaa7444158a79573481b80a77f", "text": "Image patch classification is an important task in many different medical imaging applications. In this work, we have designed a customized Convolutional Neural Networks (CNN) with shallow convolution layer to classify lung image patches with interstitial lung disease (ILD). While many feature descriptors have been proposed over the past years, they can be quite complicated and domain-specific. Our customized CNN framework can, on the other hand, automatically and efficiently learn the intrinsic image features from lung image patches that are most suitable for the classification purpose. The same architecture can be generalized to perform other medical image or texture classification tasks.", "title": "" }, { "docid": "328c1c6ed9e38a851c6e4fd3ab71c0f8", "text": "We present the MSP-IMPROV corpus, a multimodal emotional database, where the goal is to have control over lexical content and emotion while also promoting naturalness in the recordings. Studies on emotion perception often require stimuli with fixed lexical content, but that convey different emotions. These stimuli can also serve as an instrument to understand how emotion modulates speech at the phoneme level, in a manner that controls for coarticulation. Such audiovisual data are not easily available from natural recordings. A common solution is to record actors reading sentences that portray different emotions, which may not produce natural behaviors. We propose an alternative approach in which we define hypothetical scenarios for each sentence that are carefully designed to elicit a particular emotion. Two actors improvise these emotion-specific situations, leading them to utter contextualized, non-read renditions of sentences that have fixed lexical content and convey different emotions. We describe the context in which this corpus was recorded, the key features of the corpus, the areas in which this corpus can be useful, and the emotional content of the recordings. The paper also provides the performance for speech and facial emotion classifiers. 
The analysis brings novel classification evaluations where we study the performance in terms of inter-evaluator agreement and naturalness perception, leveraging the large size of the audiovisual database.", "title": "" }, { "docid": "d593c18bf87daa906f83d5ff718bdfd0", "text": "Information and communications technologies (ICTs) have enabled the rise of so-called “Collaborative Consumption” (CC): the peer-to-peer-based activity of obtaining, giving, or sharing the access to goods and services, coordinated through community-based online services. CC has been expected to alleviate societal problems such as hyper-consumption, pollution, and poverty by lowering the cost of economic coordination within communities. However, beyond anecdotal evidence, there is a dearth of understanding why people participate in CC. Therefore, in this article we investigate people’s motivations to participate in CC. The study employs survey data (N = 168) gathered from people registered onto a CC site. The results show that participation in CC is motivated by many factors such as its sustainability, enjoyment of the activity as well as economic gains. An interesting detail in the result is that sustainability is not directly associated with participation unless it is at the same time also associated with positive attitudes towards CC. This suggests that sustainability might only be an important factor for those people for whom ecological consumption is important. Furthermore, the results suggest that in CC an attitudebehavior gap might exist; people perceive the activity positively and say good things about it, but this good attitude does not necessary translate into action. Introduction", "title": "" }, { "docid": "3b06bc2d72e0ae7fa75873ed70e23fc3", "text": "Transaction traces analysis is a key utility for marketing, trend monitoring, and fraud detection purposes. However, they can also be used for designing and verification of contextual risk management systems for card-present transactions. In this paper, we presented a novel approach to collect detailed transaction traces directly from payment terminal. Thanks to that, it is possible to analyze each transaction step precisely, including its frequency and timing. We also demonstrated our approach to analyze such data based on real-life experiment. Finally, we concluded this paper with important findings for designers of such a system.", "title": "" }, { "docid": "ceef658faa94ad655521ece5ac5cba1d", "text": "We propose learning a semantic visual feature representation by training a neural network supervised solely by point and object trajectories in video sequences. Currently, the predominant paradigm for learning visual features involves training deep convolutional networks on an image classification task using very large human-annotated datasets, e.g. ImageNet. Though effective as supervision, semantic image labels are costly to obtain. On the other hand, under high enough frame rates, frame-to-frame associations between the same 3D physical point or an object can be established automatically. By transitivity, such associations grouped into tracks can relate object/point appearance across large changes in pose, illumination and camera viewpoint, providing a rich source of invariance that can be used for training. We train a siamese network we call it AssociationNet to discriminate between correct and wrong associations between patches in different frames of a video sequence. 
We show that AssociationNet learns useful features when used as pretraining for object recognition in static images, and outperforms random weight initialization and alternative pretraining methods.", "title": "" }, { "docid": "6616607ee5a856a391131c5e2745bc79", "text": "Project management (PM) landscaping is continually changing in the IT industry. Working with the small teams and often with the limited budgets, while facing frequent changes in the business requirements, project managers are under continuous pressure to deliver fast turnarounds. Following the demands of the IT project management, leaders in this industry are optimizing and adopting different and new more effective styles and strategies. This paper proposes a new hybrid way of managing IT projects, flexibly combining the traditional and the Agile method. Also, it investigates what is the necessary organizational transition in an IT company, required before converting from the traditional to the proposed new hybrid method.", "title": "" }, { "docid": "bb6737c84b0d96896c82abefee876858", "text": "This paper introduces a novel tactile sensor with the ability to detect objects in the sensor's near proximity. For both tasks, the same capacitive sensing principle is used. The tactile part of the sensor provides a tactile sensor array enabling the sensor to gather pressure profiles of the mechanical contact area. Several tactile sensors have been developed in the past. These sensors lack the capability of detecting objects in their near proximity before a mechanical contact occurs. Therefore, we developed a tactile proximity sensor, which is able to measure the current flowing out of or even into the sensor. Measuring these currents and the exciting voltage makes a calculation of the capacitance coupled to the sensor's surface and, using more sensors of this type, the change of capacitance between the sensors possible. The sensor's mechanical design, the analog/digital signal processing and the hardware efficient demodulator structure, implemented on a FPGA, will be discussed in detail.", "title": "" } ]
scidocsrr
a1e54fec5a673226b41a7f7108d8e716
Webly-Supervised Video Recognition by Mutually Voting for Relevant Web Images and Web Video Frames
[ { "docid": "3bb905351ce1ea2150f37059ed256a90", "text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "title": "" } ]
[ { "docid": "67adb7fcdf7f1171ea2056c6c8cb81b0", "text": "Today advanced computer vision (CV) systems of ever increasing complexity are being deployed in a growing number of application scenarios with strong real-time and power constraints. Current trends in CV clearly show a rise of neural network-based algorithms, which have recently broken many object detection and localization records. These approaches are very flexible and can be used to tackle many different challenges by only changing their parameters. In this paper, we present the first convolutional network accelerator which is scalable to network sizes that are currently only handled by workstation GPUs, but remains within the power envelope of embedded systems. The architecture has been implemented on 3.09 mm2 core area in UMC 65 nm technology, capable of a throughput of 274 GOp/s at 369 GOp/s/W with an external memory bandwidth of just 525 MB/s full-duplex \" a decrease of more than 90% from previous work.", "title": "" }, { "docid": "c7b58a4ebb65607d1545d3bc506c2fed", "text": "The goal of this study was to examine the relationship of self-efficacy, social support, and coping strategies with stress levels of university students. Seventy-five Education students completed four questionnaires assessing these variables. Significant correlations were found for stress with total number of coping strategies and the use of avoidance-focused coping strategies. As well, there was a significant correlation between social support from friends and emotion-focused coping strategies. Gender differences were found, with women reporting more social support from friends than men. Implications of these results for counselling university students are discussed.", "title": "" }, { "docid": "0b9ed15b4aaefb22aa8f0bb2b6c8fa00", "text": "Most existing Multi-View Stereo (MVS) algorithms employ the image matching method using Normalized Cross-Correlation (NCC) to estimate the depth of an object. The accuracy of the estimated depth depends on the step size of the depth in NCC-based window matching. The step size of the depth must be small for accurate 3D reconstruction, while the small step significantly increases computational cost. To improve the accuracy of depth estimation and reduce the computational cost, this paper proposes an efficient image matching method for MVS. The proposed method is based on Phase-Only Correlation (POC), which is a high-accuracy image matching technique using the phase components in Fourier transforms. The advantages of using POC are (i) the correlation function is obtained only by one window matching and (ii) the accurate sub-pixel displacement between two matching windows can be estimated by fitting the analytical correlation peak model of the POC function. Thus, using POC-based window matching for MVS makes it possible to estimate depth accurately from the correlation function obtained only by one window matching. Through a set of experiments using the public MVS datasets, we demonstrate that the proposed method performs better in terms of accuracy and computational cost than the conventional method.", "title": "" }, { "docid": "b19d07111fa50af51e0e0ef343dd6a50", "text": "We describe a technique for isolating mycorrhizal fungi from roots of orchids. This technique involves selection and treatment of roots, preparation of pelotons, treatment of pelotons, culture of pelotons so that fungal hyphae grow out and strain purification. 
The technique is considered better because 1) problems of fungal and bacterial contamination are resolved, 2) endophytic bacteria are suppressed and also used to promote hyphal growth from the pelotons, 3) live and dead pelotons, and those from which fungi are culturable or unculturable can easily be identified, providing increased isolation efficiency, 4) a single taxon can be isolated from a single peloton containing several mycorrhizal taxa, 5) slow-growing mycorrhizal taxa can easily be isolated. The implications and potential use of this technique in future studies is discussed.", "title": "" }, { "docid": "714515b82c7411550ffd1aa00acde62f", "text": "This paper presents a vision guidance approach using an image-based visual servo (IBVS) for an aerial manipulator combining a multirotor with a multidegree of freedom robotic arm. To take into account the dynamic characteristics of the combined manipulation platform, the kinematic and dynamic models of the combined system are derived. Based on the combined model, a passivity-based adaptive controller which can be applied on both position and velocity control is designed. The position control is utilized for waypoint tracking such as taking off and landing, and the velocity control is engaged when the platform is guided by visual information. In addition, a guidance law utilizing IBVS is employed with modifications. To secure the view of an object with an eye-in-hand camera, IBVS is utilized with images taken from a fisheye camera. Also, to compensate underactuation of the multirotor, an image adjustment method is developed. With the proposed control and guidance laws, autonomous flight experiments involving grabbing and transporting an object are carried out. Successful experimental results demonstrate that the proposed approaches can be applied in various types of manipulation missions.", "title": "" }, { "docid": "c1a8e30586aad77395e429556545675c", "text": "We investigate techniques for analysis and retrieval of object trajectories in a two or three dimensional space. Such kind of data usually contain a great amount of noise, that makes all previously used metrics fail. Therefore, here we formalize non-metric similarity functions based on the Longest Common Subsequence (LCSS), which are very robust to noise and furthermore provide an intuitive notion of similarity between trajectories by giving more weight to the similar portions of the sequences. Stretching of sequences in time is allowed, as well as global translating of the sequences in space. Efficient approximate algorithms that compute these similarity measures are also provided. We compare these new methods to the widely used Euclidean and Time Warping distance functions (for real and synthetic data) and show the superiority of our approach, especially under the strong presence of noise. We prove a weaker version of the triangle inequality and employ it in an indexing structure to answer nearest neighbor queries. Finally, we present experimental results that validate the accuracy and efficiency of our approach.", "title": "" }, { "docid": "80105a011097a3bd37bf58d030131e13", "text": "Deep CNNs have achieved great success in text detection. Most of existing methods attempt to improve accuracy with sophisticated network design, while paying less attention on speed. In this paper, we propose a general framework for text detection called Guided CNN to achieve the two goals simultaneously. 
The proposed model consists of one guidance subnetwork, where a guidance mask is learned from the input image itself, and one primary text detector, where every convolution and non-linear operation are conducted only in the guidance mask. The guidance subnetwork filters out non-text regions coarsely, greatly reducing the computation complexity. At the same time, the primary text detector focuses on distinguishing between text and hard non-text regions and regressing text bounding boxes, achieving a better detection accuracy. A novel training strategy, called background-aware block-wise random synthesis, is proposed to further boost up the performance. We demonstrate that the proposed Guided CNN is not only effective but also efficient with two state-of-the-art methods, CTPN [52] and EAST [64], as backbones. On the challenging benchmark ICDAR 2013, it speeds up CTPN by 2.9 times on average, while improving the F-measure by 1.5%. On ICDAR 2015, it speeds up EAST by 2.0 times while improving the F-measure by 1.0%. c © 2018. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms. * Zhanghui Kuang is the corresponding author 2 YUE ET AL: BOOSTING UP SCENE TEXT DETECTORS WITH GUIDED CNN Figure 1: Illustration of guiding the primary text detector. Convolutions and non-linear operations are conducted only in the guidance mask indicated by the red and blue rectangles. The guidance mask (the blue) is expanded by backgroundaware block-wise random synthesis (the red) during training. When testing, the guidance mask is not expanded. Figure 2: Text appears very sparsely in scene images. The left shows one example image. The right shows the text area ratio composition of ICDAR 2013 test set. Images with (0%,10%], (10%,20%], (20%,30%], and (30%,40%] text region account for 57%, 21%, 11%, and 6% respectively. Only 5 % images have more than 40% text region. 57% 21% 11% 6% 5% (0.0,0.1] (0.1,0.2] (0.2,0.3] (0.3,0.4] (0.4,1.0]", "title": "" }, { "docid": "eb19d4473ef740323b7d419acc2604b2", "text": "The recurrent neural networks (RNNs) have shown good performance for sentence similarity modeling in recent years. Most RNNs focus on modeling the hidden states based on the current sentence, while the context information from the other sentence is not well investigated during the hidden state generation. In this paper, we propose a context-aligned RNN (CA-RNN) model, which incorporates the contextual information of the aligned words in a sentence pair for the inner hidden state generation. Specifically, we first perform word alignment detection to identify the aligned words in the two sentences. Then, we present a context alignment gating mechanism and embed it into our model to automatically absorb the aligned words’ context for the hidden state update. Experiments on three benchmark datasets, namely TREC-QA and WikiQA for answer selection and MSRP for paraphrase identification, show the great advantages of our proposed model. In particular, we achieve the new state-of-the-art performance on TREC-QA and WikiQA. Furthermore, our model is comparable to if not better than the recent neural network based approaches on MSRP. Introduction and Motivation Sentence similarity modeling plays an important role in various Natural Language Processing (NLP) tasks, such as answer selection and paraphrase identification. 
For the answer selection task, all the candidate answers are ranked by the sentence similarity with the given question (Wang, Smith, and Mitamura 2007; Yang, Yih, and Meek 2015). As to paraphrase identification, sentence similarity is used to determine whether two sentences have the same meaning (Yin and Schütze 2015; He, Gimpel, and Lin 2015). Most traditional methods rely on the feature engineering and linguistic tools, which are labour consuming and prone to the errors of NLP tools such as dependency parsing (Yih et al. 2013; Wan et al. 2006). Recently, the recurrent neural network (RNN) based approaches have attracted more attention due to the good performance and less human interventions. Specifically, a sequential hidden states were generated and aggregated for each sentence with RNN, and the similarity score was calculated according to the hidden representations (Mueller and Thyagarajan 2016). To capture the Copyright c © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. salient information for better sentence representations, the attention based RNN models that produce a weight for each hidden state start to arouse more interest. (Santos et al. 2016) proposed the attentive pooling networks, which incorporated the word-by-word interactions for the attentive sentence representations. In (Tan et al. 2015), the representation of the question was utilized for the attentive weight generation for the answer. To the best of our knowledge, most attention based RNNs focus on generating the attentive weights after obtaining all the hidden states, while the contextual information from the other sentence is not well studied during the internal hidden state generation (Santos et al. 2016; Tan et al. 2015; Hermann et al. 2015). Noting that the inner activation units in RNN controls the information flow over a sentence, (Wang, Liu, and Zhao 2016) proposed an IARNN-GATE model, which incorporated the question representation into the active gates to influence the hidden state generation for the answer. However, it utilized all the information of the question sentence, which would bring noises if the current hidden state was not relevant to the question. To alleviate this problem, (Bahdanau, Cho, and Bengio 2014) presented an alignment model, which measured how well the input at each position matched the output for neural machine translation. Whereas, the alignment model in fact implemented the attention mechanism, and also leveraged all the input information to generate the output. Moreover, it is still unknown how to integrate the alignment information into RNN for sentence similarity modeling. In this paper, we propose a context-aligned RNN (CARNN) model, where the context information of the aligned words is incorporated into the hidden state generation. To be specific, we first perform word alignment detection to identify the aligned words that are potentially relevant in a sentence pair. Then, a context alignment gating mechanism is presented and embedded into our model, which consists of two steps, namely relevance measurement and context absorption. The relevance measurement step aims to determine how much context can be absorbed, by measuring the relevance between the other sentence and the current hidden state. In the context absorption step, the context information of the aligned words in the other sentence is absorbed for the current hidden state generation. 
It is worth noting that the absorbed context will be naturally propagated across The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18)", "title": "" }, { "docid": "8244bb1d75e550beb417049afb1ff9d5", "text": "Electronically available data on the Web is exploding at an ever increasing pace. Much of this data is unstructured, which makes searching hard and traditional database querying impossible. Many Web documents, however, contain an abundance of recognizable constants that together describe the essence of a document’s content. For these kinds of data-rich, multiple-record documents (e.g. advertisements, movie reviews, weather reports, travel information, sports summaries, financial statements, obituaries, and many others) we can apply a conceptual-modeling approach to extract and structure data automatically. The approach is based on an ontology—a conceptual model instance—that describes the data of interest, including relationships, lexical appearance, and context keywords. By parsing the ontology, we can automatically produce a database scheme and recognizers for constants and keywords, and then invoke routines to recognize and extract data from unstructured documents and structure it according to the generated database scheme. Experiments show that it is possible to achieve good recall and precision ratios for documents that are rich in recognizable constants and narrow in ontological breadth. Our approach is less labor-intensive than other approaches that manually or semiautomatically generate wrappers, and it is generally insensitive to changes in Web-page format.", "title": "" }, { "docid": "1b4eb25d20cd2ca431c2b73588021086", "text": "Machine rule induction was examined on a difficult categorization problem by applying a Holland-style classifier system to a complex letter recognition task. A set of 20,000 unique letter images was generated by randomly distorting pixel images of the 26 uppercase letters from 20 different commercial fonts. The parent fonts represented a full range of character types including script, italic, serif, and Gothic. The features of each of the 20,000 characters were summarized in terms of 16 primitive numerical attributes. Our research focused on machine induction techniques for generating IF-THEN classifiers in which the IF part was a list of values for each of the 16 attributes and the THEN part was the correct category, i.e., one of the 26 letters of the alphabet. We examined the effects of different procedures for encoding attributes, deriving new rules, and apportioning credit among the rules. Binary and Gray-code attribute encodings that required exact matches for rule activation were compared with integer representations that employed fuzzy matching for rule activation. Random and genetic methods for rule creation were compared with instance-based generalization. The strength/specificity method for credit apportionment was compared with a procedure we call “accuracy/utility.”", "title": "" }, { "docid": "0a3988fd53a4634853b4ab7e6522f870", "text": "DBSCAN is a well-known density based clustering algorithm capable of discovering arbitrary shaped clusters and eliminating noise data. However, parallelization of Dbscan is challenging as it exhibits an inherent sequential data access order. 
Moreover, existing parallel implementations adopt a master-slave strategy which can easily cause an unbalanced workload and hence result in low parallel efficiency.\n We present a new parallel Dbscan algorithm (Pdsdbscan) using graph algorithmic concepts. More specifically, we employ the disjoint-set data structure to break the access sequentiality of Dbscan. In addition, we use a tree-based bottom-up approach to construct the clusters. This yields a better-balanced workload distribution. We implement the algorithm both for shared and for distributed memory.\n Using data sets containing up to several hundred million high-dimensional points, we show that Pdsdbscan significantly outperforms the master-slave approach, achieving speedups up to 25.97 using 40 cores on shared memory architecture, and speedups up to 5,765 using 8,192 cores on distributed memory architecture.", "title": "" }, { "docid": "fd4a09ff9434b29e0d9f27ee72157a6a", "text": "An efficient compact rectenna based on a novel asymmetric loop-shaped planar inverted-F antenna (PIFA) is proposed for portable devices and wireless sensors. The antenna element, or the loop-shaped PIFA, has a compact uniplanar structure and exhibits higher radiation efficiency. The rectifier used is a half-bridge boost converter, which can double the output dc voltage. In between the rectifier and loop-shaped PIFA is a matching circuit with harmonic suppression capability, further enhancing the conversion efficiency. For the 2.45-GHz prototype design, the peak conversion efficiency of 61.4% is achieved as the incident power density is 7.4 mW/m2 and the load resistance is 510 Ω.", "title": "" }, { "docid": "6c8983865bf3d6bdbf120e0480345aac", "text": "In the future Internet of Things (IoT), smart objects will be the fundamental building blocks for the creation of cyber-physical smart pervasive systems in a great variety of application domains ranging from health-care to transportation, from logistics to smart grid and cities. The implementation of a smart objects-oriented IoT is a complex challenge as distributed, autonomous, and heterogeneous IoT components at different levels of abstractions and granularity need to cooperate among themselves, with conventional networked IT infrastructures, and also with human users. In this paper, we propose the integration of two complementary mainstream paradigms for large-scale distributed computing: Agents and Cloud. Agent-based computing can support the development of decentralized, dynamic, cooperating and open IoT systems in terms of multi-agent systems. Cloud computing can enhance the IoT objects with high performance computing capabilities and huge storage resources. In particular, we introduce a cloud-assisted and agent-oriented IoT architecture that will be realized through ACOSO, an agent-oriented middleware for cooperating smart objects, and BodyCloud, a sensor-cloud infrastructure for large-scale sensor-based systems.", "title": "" }, { "docid": "5ee544ed19ef78fa9212caea791ac4cf", "text": "This paper describes the ecosystem of R add-on packages deve lop d around the infrastructure provided by the packagearules. The packages provide comprehensive functionality for ana lyzing interesting patterns including frequent itemsets, associ ati n rules, frequent sequences and for building applications like associative classification. 
After di scussing the ecosystem’s design we illustrate the ease of mining and visualizing rules with a short example .", "title": "" }, { "docid": "4990409533e023d0a381731b06961e44", "text": "In some of these boxes, the setter puts some of the digits 1–9; the aim of the solver is to complete the grid by filling in a digit in every box in such a way that each row, each column, and each 3 × 3 box contains each of the digits 1–9 exactly once. In this note, we discuss the problem of enumerating all possible Sudoku grids. This is a very natural problem, but, perhaps surprisingly, it seems unlikely that the problem should have a simple combinatorial answer. Indeed, Sudoku grids are simply special cases of Latin squares, and the enumeration of Latin squares is itself a difficult problem, with no general combinatorial formulae known. Latin squares of sizes up to 11 × 11 have been enumerated, and the methods are broadly brute force calculations, much like the approach we sketch for Sudoku grids below. See [1], [2] and [3] for more details. It is known that the number of 9 × 9 Latin squares is 5524751496156892842531225600 ≈ 5.525 × 10. Since this answer is enormous, we need to refine our search considerably in order to be able to get an answer in a sensible amount of computing time.", "title": "" }, { "docid": "5d35e34a5db727917e5105f857c174be", "text": "Human face feature extraction using digital images is a vital element for several applications such as: identification and facial recognition, medical application, video games, cosmetology, etc. The skin pores are very important element of the structure of the skin. A novelty method is proposed allowing decomposing an photography of human face from digital image (RGB) in two layers, melanin and hemoglobin. From melanin layer, the main pores from the face can be obtained, as well as the centroids of each of them. It has been found that the pore configuration of the skin is invariant and unique for each individual. Therefore, from the localization of the pores of a human face, it is a possibility to use them for diverse application in the fields of pattern", "title": "" }, { "docid": "d02e9b22ebce99cd32630f56053248c7", "text": "Software-Defined Networking (SDN) has recently gained significant momentum. However, before any large scale deployments, it is important to understand security issues arising from this new technology. This paper discusses two types of Denial-of-Service (DoS) attacks specific to OpenFlow SDN networks. We emulate them on Mininet and provide an analysis on the effect of these attacks. We find that the timeout value of a flow rule, and the control plane bandwidth have a significant impact on the switch's capability. If not configured appropriately, they may allow successful DoS attacks. Finally, we highlight possible mitigation strategies to address such attacks.", "title": "" }, { "docid": "f13cbc36f2c51c5735185751ddc2500e", "text": "This paper presents an overview of the road and traffic sign detection and recognition. It describes the characteristics of the road signs, the requirements and difficulties behind road signs detection and recognition, how to deal with outdoor images, and the different techniques used in the image segmentation based on the colour analysis, shape analysis. It shows also the techniques used for the recognition and classification of the road signs. 
Although image processing plays a central role in the road signs recognition, especially in colour analysis, but the paper points to many problems regarding the stability of the received information of colours, variations of these colours with respect to the daylight conditions, and absence of a colour model that can led to a good solution. This means that there is a lot of work to be done in the field, and a lot of improvement can be achieved. Neural networks were widely used in the detection and the recognition of the road signs. The majority of the authors used neural networks as a recognizer, and as classifier. Some other techniques such as template matching or classical classifiers were also used. New techniques should be involved to increase the robustness, and to get faster systems for real-time applications.", "title": "" }, { "docid": "07c288560af7cbc7acc2ed4f87967d8f", "text": "X-ray imaging in differential interference contrast (DIC) with submicrometer optical resolution was performed by using a twin zone plate (TZP) setup generating focal spots closely spaced within the TZP spatial resolution of 160 nm. Optical path differences introduced by the sample are recorded by a CCD camera in a standard full-field imaging and by an aperture photodiode in a standard scanning transmission x-ray microscope. Applying this x-ray DIC technique, we demonstrate for both the full-field imaging and scanning x-ray microscope methods a drastic increase in image contrast (approximately 20x) for a low-absorbing specimen, similar to the Nomarski DIC method for visible-light microscopy.", "title": "" }, { "docid": "786a31d5c189c8376a08be6050ddbd9c", "text": "In this article, we present a meta-analysis of research examining visibility of disability. In interrogating the issue of visibility and invisibility in the design of assistive technologies, we open a discussion about how perceptions surrounding disability can be probed through an examination of visibility and how these tensions do, and perhaps should, influence assistive technology design and research.", "title": "" } ]
scidocsrr
d4ff3059aed2ca604266d6624950fd77
Comparing Deep Learning and Classical Machine Learning Approaches for Predicting Inpatient Violence Incidents from Clinical Text
[ { "docid": "897a6d208785b144b5d59e4f346134cd", "text": "Secondary use of electronic health records (EHRs) promises to advance clinical research and better inform clinical decision making. Challenges in summarizing and representing patient data prevent widespread practice of predictive modeling using EHRs. Here we present a novel unsupervised deep feature learning method to derive a general-purpose patient representation from EHR data that facilitates clinical predictive modeling. In particular, a three-layer stack of denoising autoencoders was used to capture hierarchical regularities and dependencies in the aggregated EHRs of about 700,000 patients from the Mount Sinai data warehouse. The result is a representation we name \"deep patient\". We evaluated this representation as broadly predictive of health states by assessing the probability of patients to develop various diseases. We performed evaluation using 76,214 test patients comprising 78 diseases from diverse clinical domains and temporal windows. Our results significantly outperformed those achieved using representations based on raw EHR data and alternative feature learning strategies. Prediction performance for severe diabetes, schizophrenia, and various cancers were among the top performing. These findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems.", "title": "" }, { "docid": "6870efe6d9607c82992b5015a5336969", "text": "We present an approach to automatically classify clinical text at a sentence level. We are using deep convolutional neural networks to represent complex features. We train the network on a dataset providing a broad categorization of health information. Through a detailed evaluation, we demonstrate that our method outperforms several approaches widely used in natural language processing tasks by about 15%.", "title": "" }, { "docid": "9eedeec21ab380c0466ed7edfe7c745d", "text": "In this paper, we study the effect of using-grams (sequences of words of length n) for text categorization. We use an efficient algorithm for gener ating suchn-gram features in two benchmark domains, the 20 newsgroups data set and 21,578 REU TERS newswire articles. Our results with the rule learning algorithm R IPPER indicate that, after the removal of stop words, word sequences of length 2 or 3 are most useful. Using l o er sequences reduces classification performance.", "title": "" } ]
[ { "docid": "5f30867cb3071efa8fb0d34447b8a8f6", "text": "Money laundering is a global problem that affects all countries to various degrees. Although, many countries take benefits from money laundering, by accepting the money from laundering but keeping the crime abroad, at the long run, “money laundering attracts crime”. Criminals come to know a country, create networks and eventually also locate their criminal activities there. Most financial institutions have been implementing antimoney laundering solutions (AML) to fight investment fraud. The key pillar of a strong Anti-Money Laundering system for any financial institution depends mainly on a well-designed and effective monitoring system. The main purpose of the Anti-Money Laundering transactions monitoring system is to identify potential suspicious behaviors embedded in legitimate transactions. This paper presents a monitor framework that uses various techniques to enhance the monitoring capabilities. This framework is depending on rule base monitoring, behavior detection monitoring, cluster monitoring and link analysis based monitoring. The monitor detection processes are based on a money laundering deterministic finite automaton that has been obtained from their corresponding regular expressions. Index Terms – Anti Money Laundering system, Money laundering monitoring and detecting, Cycle detection monitoring, Suspected Link monitoring.", "title": "" }, { "docid": "02255e736f7a6201fa52e6dd6056e81f", "text": "In regenerative medicine applications, the differentiation stage of implanted stem cells must be optimized to control cell fate and enhance therapeutic efficacy. We investigated the therapeutic potential of human induced pluripotent stem cell (iPSC)-derived cells at two differentiation stages on peripheral nerve regeneration. Neural crest stem cells (NCSCs) and Schwann cells (NCSC-SCs) derived from iPSCs were used to construct a tissue-engineered nerve conduit that was applied to bridge injured nerves in a rat sciatic nerve transection model. Upon nerve conduit implantation, the NCSC group showed significantly higher electrophysiological recovery at 1 month as well as better gastrocnemius muscle recovery at 5 months than the acellular group, but the NCSC-SC group didn’t. Both transplanted NCSCs and NCSC-SCs interacted with newly-growing host axons, while NCSCs showed better survival rate and distribution. The transplanted NCSCs mainly differentiated into Schwann cells with no teratoma formation, and they secreted higher concentrations of brain-derived neurotrophic factor and nerve growth factor than NCSC-SCs. In conclusion, transplantation of iPSC-NCSCs accelerated functional nerve recovery with the involvement of stem cell differentiation and paracrine signaling. This study unravels the in vivo performance of stem cells during tissue regeneration, and provides a rationale of using appropriate stem cells for regenerative medicine.", "title": "" }, { "docid": "9b010450862f5b3b73273028242db8ad", "text": "A number of mechanisms ensure that the intestine is protected from pathogens and also against our own intestinal microbiota. The outermost of these is the secreted mucus, which entraps bacteria and prevents their translocation into the tissue. Mucus contains many immunomodulatory molecules and is largely produced by the goblet cells. These cells are highly responsive to the signals they receive from the immune system and are also able to deliver antigens from the lumen to dendritic cells in the lamina propria. 
In this Review, we will give a basic overview of mucus, mucins and goblet cells, and explain how each of these contributes to immune regulation in the intestine.", "title": "" }, { "docid": "7b385edcbb0e3fa5bfffca2e1a9ecf13", "text": "A goal of runtime software-fault monitoring is to observe software behavior to determine whether it complies with its intended behavior. Monitoring allows one to analyze and recover from detected faults, providing additional defense against catastrophic failure. Although runtime monitoring has been in use for over 30 years, there is renewed interest in its application to fault detection and recovery, largely because of the increasing complexity and ubiquitous nature of software systems. We present taxonomy that developers and researchers can use to analyze and differentiate recent developments in runtime software fault-monitoring approaches. The taxonomy categorizes the various runtime monitoring research by classifying the elements that are considered essential for building a monitoring system, i.e., the specification language used to define properties; the monitoring mechanism that oversees the program's execution; and the event handler that captures and communicates monitoring results. After describing the taxonomy, the paper presents the classification of the software-fault monitoring systems described in the literature.", "title": "" }, { "docid": "f3e46ac749c24be63d29abcb32617c3c", "text": "In this paper, we discuss key findings, technological challenges and socioeconomic opportunities in Smart City era. Most of the conclusions were gathered during SmartSantander project, an EU project that is developing a city-scale testbed for IoT and Future Internet experimentation, providing an integrated framework for implementing Smart City services.", "title": "" }, { "docid": "9ed2f6172271c6ccdba2ab16e2d6b3d6", "text": "An important problem in analyzing big data is subspace clustering, i.e., to represent a collection of points in a high-dimensional space via the union of low-dimensional subspaces. Sparse Subspace Clustering (SSC) and LowRank Representation (LRR) are the state-of-the-art methods for this task. These two methods are fundamentally similar in that both are based on convex optimization exploiting the intuition of “Self-Expressiveness”. The main difference is that SSC minimizes the vector `1 norm of the representation matrix to induce sparsity while LRR minimizes the nuclear norm (aka trace norm) to promote a low-rank structure. Because the representation matrix is often simultaneously sparse and low-rank, we propose a new algorithm, termed Low-Rank Sparse Subspace Clustering (LRSSC), by combining SSC and LRR, and develop theoretical guarantees of the success of the algorithm. The results reveal interesting insights into the strengths and weaknesses of SSC and LRR, and demonstrate how LRSSC can take advantage of both methods in preserving the “Self-Expressiveness Property” and “Graph Connectivity” at the same time. A byproduct of our analysis is that it also expands the theoretical guarantee of SSC to handle cases when the subspaces have arbitrarily small canonical angles but are “nearly independent”.", "title": "" }, { "docid": "7d9f003bcce3f99b096e3dcd5d849f6d", "text": "Anti-Money Laundering (AML) can be seen as a central problem for financial institutions because of the need to detect compliance violations in various customer contexts. 
Changing regulations and the strict supervision of financial authorities create an even higher pressure to establish an effective working compliance program. To support financial institutions in building a simple but efficient compliance program we develop a reference model that describes the process and data view for one key process of AML based on literature analysis and expert interviews. Therefore, this paper describes the customer identification process (CIP) as a part of an AML program using reference modeling techniques. The contribution of this work is (i) the application of multi-perspective reference modeling resulting in (ii) a reference model for AML customer identification. Overall, the results help to understand the complexity of AML processes and to establish a sustainable compliance program.", "title": "" }, { "docid": "947665b0950b0bb24cc246758474266f", "text": "Social bookmark tools are rapidly emerging on the Web. In such systems users are setting up lightweight conceptual structures called folksonomies. The reason for their immediate success is the fact that no specific skills are needed for participating. At the moment, however, the information retrieval support is limited. We present a formal model and a new search algorithm for folksonomies, calledFolkRank, that exploits the structure of the folksonomy. The proposed algorithm is also applied to find communities within the folksonomy and is used to structure search results. All findings are demonstrated on a large scale dataset.", "title": "" }, { "docid": "345f54e3a6d00ecb734de529ed559933", "text": "Size and cost of a switched mode power supply can be reduced by increasing the switching frequency. The maximum switching frequency and the maximum input voltage range, respectively, is limited by the minimum propagated on-time pulse, which is mainly determined by the level shifter speed. At switching frequencies above 10 MHz, a voltage conversion with an input voltage range up to 50 V and output voltages below 5 V requires an on-time of a pulse width modulated signal of less than 5 ns. This cannot be achieved with conventional level shifters. This paper presents a level shifter circuit, which controls an NMOS power FET on a high-voltage domain up to 50 V. The level shifter was implemented as part of a DCDC converter in a 180 nm BiCMOS technology. Experimental results confirm a propagation delay of 5 ns and on-time pulses of less than 3 ns. An overlapping clamping structure with low parasitic capacitances in combination with a high-speed comparator makes the level shifter also very robust against large coupling currents during high-side transitions as fast as 20 V/ns, verified by measurements. Due to the high dv/dt, capacitive coupling currents can be two orders of magnitude larger than the actual signal current. Depending on the conversion ratio, the presented level shifter enables an increase of the switching frequency for multi-MHz converters towards 100 MHz. It supports high input voltages up to 50 V and it can be applied also to other high-speed applications.", "title": "" }, { "docid": "fd9a5d3158a0079431ee4d740e5e24ba", "text": "Justifying art activities in early childhood education seems like a trivial task. Everyone knows that young children love to draw, dip their fingers in paint or squeeze playdough to create images and forms that only those with hardened hearts would find difficult to appreciate. 
Children seem happier when they have access to art materials and supplies than when they are denied such opportunities, and usually they do not need much invitation to spontaneously take advantage of these offerings. The outcomes of children’s “art play” tend to fascinate adult audiences – and adults: from artistically naïve parents, through psychologists and therapists, to researchers specifically studying artistic development – have long attempted to understand their significance and meaning. Early childhood classrooms are cheerful and require minimal budgets to decorate with the abundance of children’s art. Early childhood parents and teachers also trust, or at least hope, that there are some significant formative benefits to children from their engagement in art activities, such as development of creativity.", "title": "" }, { "docid": "18969bed489bb9fa7196634a8086449e", "text": "A speech recognition model is proposed in which the transformation from an input speech signal into a sequence of phonemes is carried out largely through an active or feedback process. In this process, patterns are generated internally in the analyzer according to an adaptable sequence of instructions until a best match with the input signal is obtained. Details of the process are given, and the areas where further research is needed are indicated.", "title": "" }, { "docid": "1e4ea38a187881d304ea417f98a608d1", "text": "Breast cancer represents the second leading cause of cancer deaths in women today and it is the most common type of cancer in women. This paper presents some experiments for tumour detection in digital mammography. We investigate the use of different data mining techniques, neural networks and association rule mining, for anomaly detection and classification. The results show that the two approaches performed well, obtaining a classification accuracy reaching over 70% percent for both techniques. Moreover, the experiments we conducted demonstrate the use and effectiveness of association rule mining in image categorization.", "title": "" }, { "docid": "8588a3317d4b594d8e19cb005c3d35c7", "text": "Histograms of Oriented Gradients (HOG) is one of the wellknown features for object recognition. HOG features are calculated by taking orientation histograms of edge intensity in a local region. N.Dalal et al. proposed an object detection algorithm in which HOG features were extracted from all locations of a dense grid on a image region and the combined features are classified by using linear Support Vector Machine (SVM). In this paper, we employ HOG features extracted from all locations of a grid on the image as candidates of the feature vectors. Principal Component Analysis (PCA) is applied to these HOG feature vectors to obtain the score (PCA-HOG) vectors. Then a proper subset of PCA-HOG feature vectors is selected by using Stepwise Forward Selection (SFS) algorithm or Stepwise Backward Selection (SBS) algorithm to improve the generalization performance. The selected PCA-HOG feature vectors are used as an input of linear SVM to classify the given input into pedestrian/non-pedestrian. The improvement of the recognition rates are confirmed through experiments using MIT pedestrian dataset.", "title": "" }, { "docid": "07b2355844efc85862fb5b8122be6edf", "text": "As with other types of evidence, the courts make no presumption that digital evidence is reliable without some evidence of empirical testing in relation to the theories and techniques associated with its production. 
The issue of reliability means that courts pay close attention to the manner in which electronic evidence has been obtained and in particular the process in which the data is captured and stored. Previous process models have tended to focus on one particular area of digital forensic practice, such as law enforcement, and have not incorporated a formal description. We contend that this approach has prevented the establishment of generally-accepted standards and processes that are urgently needed in the domain of digital forensics. This paper presents a generic process model as a step towards developing such a generally-accepted standard for a fundamental digital forensic activity–the acquisition of digital evidence.", "title": "" }, { "docid": "95ce70e3c893aac8036af7aab1e9c0ac", "text": "Wireless communications is one of the most successful technologies in modern years, given that an exponential growth rate in wireless traffic has been sustained for over a century (known as Cooper's law). This trend will certainly continue, driven by new innovative applications; for example, augmented reality and the Internet of Things. Massive MIMO has been identified as a key technology to handle orders of magnitude more data traffic. Despite the attention it is receiving from the communication community, we have personally witnessed that Massive MIMO is subject to several widespread misunderstandings, as epitomized by following (fictional) abstract: “The Massive MIMO technology uses a nearly infinite number of high-quality antennas at the base stations. By having at least an order of magnitude more antennas than active terminals, one can exploit asymptotic behaviors that some special kinds of wireless channels have. This technology looks great at first sight, but unfortunately the signal processing complexity is off the charts and the antenna arrays would be so huge that it can only be implemented in millimeter-wave bands.” These statements are, in fact, completely false. In this overview article, we identify 10 myths and explain why they are not true. We also ask a question that is critical for the practical adoption of the technology and which will require intense future research activities to answer properly. We provide references to key technical papers that support our claims, while a further list of related overview and technical papers can be found at the Massive MIMO Info Point: http://massivemimo. eu.", "title": "" }, { "docid": "c196444f2093afc3092f85b8fbb67da5", "text": "The objective of this paper is to evaluate “human action recognition without human”. Motion representation is frequently discussed in human action recognition. We have examined several sophisticated options, such as dense trajectories (DT) and the two-stream convolutional neural network (CNN). However, some features from the background could be too strong, as shown in some recent studies on human action recognition. Therefore, we considered whether a background sequence alone can classify human actions in current large-scale action datasets (e.g., UCF101). In this paper, we propose a novel concept for human action analysis that is named “human action recognition without human”. An experiment clearly shows the effect of a background sequence for understanding an action label.", "title": "" }, { "docid": "8750e04065d8f0b74b7fee63f4966e59", "text": "The Customer churn is a crucial activity in rapidly growing and mature competitive telecommunication sector and is one of the greatest importance for a project manager. 
Due to the high cost of acquiring new customers, customer churn prediction has emerged as an indispensable part of telecom sectors’ strategic decision making and planning process. It is important to forecast customer churn behavior in order to retain those customers that will churn or possible may churn. This study is another attempt which makes use of rough set theory, a rule-based decision making technique, to extract rules for churn prediction. Experiments were performed to explore the performance of four different algorithms (Exhaustive, Genetic, Covering, and LEM2). It is observed that rough set classification based on genetic algorithm, rules generation yields most suitable performance out of the four rules generation algorithms. Moreover, by applying the proposed technique on publicly available dataset, the results show that the proposed technique can fully predict all those customers that will churn or possibly may churn and also provides useful information to strategic decision makers as well.", "title": "" }, { "docid": "e573d85271e3f3cc54b774de8a5c6dd9", "text": "This paper explores the use of a learned classifier for post-OCR text correction. Experiments with the Arabic language show that this approach, which integrates a weighted confusion matrix and a shallow language model, improves the vast majority of segmentation and recognition errors, the most frequent types of error on our dataset.", "title": "" }, { "docid": "12531afcc6d9ecdec39adef0d0e6b391", "text": "Convolutional Neural Networks (ConvNets) have successfully contributed to improve the accuracy of regression-based methods for computer vision tasks such as human pose estimation, landmark localization, and object detection. The network optimization has been usually performed with L2 loss and without considering the impact of outliers on the training process, where an outlier in this context is defined by a sample estimation that lies at an abnormal distance from the other training sample estimations in the objective space. In this work, we propose a regression model with ConvNets that achieves robustness to such outliers by minimizing Tukey's biweight function, an M-estimator robust to outliers, as the loss function for the ConvNet. In addition to the robust loss, we introduce a coarse-to-fine model, which processes input images of progressively higher resolutions for improving the accuracy of the regressed values. In our experiments, we demonstrate faster convergence and better generalization of our robust loss function for the tasks of human pose estimation and age estimation from face images. We also show that the combination of the robust loss function with the coarse-to-fine model produces comparable or better results than current state-of-the-art approaches in four publicly available human pose estimation datasets.", "title": "" } ]
scidocsrr
11aa79b9c38b148c67aef7f4b97de0ca
Semi-Supervised Learning with Generative Adversarial Networks
[ { "docid": "b6a8f45bd10c30040ed476b9d11aa908", "text": "PixelCNNs are a recently proposed class of powerful generative models with tractable likelihood. Here we discuss our implementation of PixelCNNs which we make available at https://github.com/openai/pixel-cnn. Our implementation contains a number of modifications to the original model that both simplify its structure and improve its performance. 1) We use a discretized logistic mixture likelihood on the pixels, rather than a 256-way softmax, which we find to speed up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels, simplifying the model structure. 3) We use downsampling to efficiently capture structure at multiple resolutions. 4) We introduce additional short-cut connections to further speed up optimization. 5) We regularize the model using dropout. Finally, we present state-of-the-art log likelihood results on CIFAR-10 to demonstrate the usefulness of these modifications.", "title": "" }, { "docid": "3eef0b6dee8d62e58a9369ed1e03d8ba", "text": "Semi-supervised learning methods based on generative adversarial networks (GANs) obtained strong empirical results, but it is not clear 1) how the discriminator benefits from joint training with a generator, and 2) why good semi-supervised classification performance and a good generator cannot be obtained at the same time. Theoretically we show that given the discriminator objective, good semisupervised learning indeed requires a bad generator, and propose the definition of a preferred generator. Empirically, we derive a novel formulation based on our analysis that substantially improves over feature matching GANs, obtaining state-of-the-art results on multiple benchmark datasets2.", "title": "" }, { "docid": "35293c16985878fca24b5a327fd52c72", "text": "In this paper we present a method for learning a discriminative classifier from unlabeled or partially labeled data. Our approach is based on an objective function that trades-off mutual information between observed examples and their predicted categorical class distribution, against robustness of the classifier to an adversarial generative model. The resulting algorithm can either be interpreted as a natural generalization of the generative adversarial networks (GAN) framework or as an extension of the regularized information maximization (RIM) framework to robust classification against an optimal adversary. We empirically evaluate our method – which we dub categorical generative adversarial networks (or CatGAN) – on synthetic data as well as on challenging image classification tasks, demonstrating the robustness of the learned classifiers. We further qualitatively assess the fidelity of samples generated by the adversarial generator that is learned alongside the discriminative classifier, and identify links between the CatGAN objective and discriminative clustering algorithms (such as RIM).", "title": "" } ]
[ { "docid": "e7e52c1af1b9ff03c734fc56bed6fa9b", "text": "Rarely are computing systems developed entirely by members of the communities they serve, particularly when that community is underrepresented in computing. Archive of Our Own (AO3), a fan fiction archive with nearly 750,000 users and over 2 million individual works, was designed and coded primarily by women to meet the needs of the online fandom community. Their design decisions were informed by existing values and norms around issues such as accessibility, inclusivity, and identity. We conducted interviews with 28 users and developers, and with this data we detail the history and design of AO3 using the framework of feminist HCI and focusing on the successful incorporation of values into design. We conclude with considering examples of complexity in values in design work: the use of design to mitigate tensions in values and to influence value formation or change.", "title": "" }, { "docid": "3465c3bc8f538246be5d7f8c8d1292c2", "text": "The minimal depth of a maximal subtree is a dimensionless order statistic measuring the predictiveness of a variable in a survival tree. We derive the distribution of the minimal depth and use it for high-dimensional variable selection using random survival forests. In big p and small n problems (where p is the dimension and n is the sample size), the distribution of the minimal depth reveals a “ceiling effect” in which a tree simply cannot be grown deep enough to properly identify predictive variables. Motivated by this limitation, we develop a new regularized algorithm, termed RSF-Variable Hunting. This algorithm exploits maximal subtrees for effective variable selection under such scenarios. Several applications are presented demonstrating the methodology, including the problem of gene selection using microarray data. In this work we focus only on survival settings, although our methodology also applies to other random forests applications, including regression and classification settings. All examples presented here use the R-software package randomSurvivalForest.", "title": "" }, { "docid": "5cd68b483657180231786dc5a3407c85", "text": "The ability of robotic systems to autonomously understand and/or navigate in uncertain environments is critically dependent on fairly accurate strategies, which are not always optimally achieved due to effectiveness, computational cost, and parameter settings. In this paper, we propose a novel and simple adaptive strategy to increase the efficiency and drastically reduce the computational effort in particle filters (PFs). The purpose of the adaptive approach (dispersion-based adaptive particle filter - DAPF) is to provide higher number of particles during the initial searching state (when the localization presents greater uncertainty) and fewer particles during the subsequent state (when the localization exhibits less uncertainty). With the aim of studying the dynamical PF behavior regarding others and putting the proposed algorithm into practice, we designed a methodology based on different target applications and a Kinect sensor. The various experiments conducted for both color tracking and mobile robot localization problems served to demonstrate that the DAPF algorithm can be further generalized. 
As a result, the DAPF approach significantly improved the computational performance over two well-known filtering strategies: 1) the classical PF with fixed particle set sizes, and 2) the adaptive technique named Kullback-Leibler distance.", "title": "" }, { "docid": "b113d45660629847afbd7faade1f3a71", "text": "A wideband circularly polarized (CP) rectangular dielectric resonator antenna (DRA) is presented. An Archimedean spiral slot is used to excite the rectangular DRA for wideband CP radiation. The operating principle of the proposed antenna is based on using a broadband feeding structure to excite the DRA. A prototype of the proposed antenna is designed, fabricated, and measured. Good agreement between the simulated and measured results is attained, and a wide 3-dB axial-ratio (AR) bandwidth of 25.5% is achieved.", "title": "" }, { "docid": "df62526aa79eb750790bd48254171faf", "text": "SUMMARY Non-safety critical software developers have been reaping the benefits of adopting agile practices for a number of years. However, developers of safety critical software often have concerns about adopting agile practices. Through performing a literature review, this research has identified the perceived barriers to following agile practices when developing medical device software. A questionnaire based survey was also conducted with medical device software developers in Ireland to determine the barriers to adopting agile practices. The survey revealed that half of the respondents develop software in accordance with a plan driven software development lifecycle and that they believe that there are a number of perceived barriers to adopting agile practices when developing regulatory compliant software such as: being contradictory to regulatory requirements; insufficient coverage of risk management activities and the lack of up-front planning. In addition, a comparison is performed between the perceived and actual barriers. Based upon the findings of the literature review and survey, it emerged that no external barriers exist to adopting agile practices when developing medical device software and the barriers that do exist are internal barriers such as getting stakeholder buy-in.", "title": "" }, { "docid": "259a530bcd24668e863b69559e41e425", "text": "Perceptual quality assessment of 3D triangular meshes is crucial for a variety of applications. In this paper, we present a new objective metric for assessing the visual difference between a reference triangular mesh and its distorted version produced by lossy operations, such as noise addition, simplification, compression and watermarking. The proposed metric is based on the measurement of the distance between curvature tensors of the two meshes under comparison. Our algorithm uses not only tensor eigenvalues (i.e., curvature amplitudes) but also tensor eigenvectors (i.e., principal curvature directions) to derive a perceptually-oriented tensor distance. The proposed metric also accounts for the visual masking effect of the human visual system, through a roughness-based weighting of the local tensor distance. A final score that reflects the visual difference between two meshes is obtained via a Minkowski pooling of the weighted local tensor distances over the mesh surface. We validate the performance of our algorithm on four subjectively-rated visual mesh quality databases, and compare the proposed method with state-of-the-art objective metrics. 
Experimental results show that our approach achieves high correlation between objective scores and subjective assessments.", "title": "" }, { "docid": "0c2a2cb741d1d22c5ef3eabd0b525d8d", "text": "Part-of-speech (POS) tagging is a process of assigning the words in a text corresponding to a particular part of speech. A fundamental version of POS tagging is the identification of words as nouns, verbs, adjectives etc. For processing natural languages, Part of Speech tagging is a prominent tool. It is one of the simplest as well as most constant and statistical model for many NLP applications. POS Tagging is an initial stage of linguistics, text analysis like information retrieval, machine translator, text to speech synthesis, information extraction etc. In POS Tagging we assign a Part of Speech tag to each word in a sentence and literature. Various approaches have been proposed to implement POS taggers. In this paper we present a Marathi part of speech tagger. It is morphologically rich language. Marathi is spoken by the native people of Maharashtra. The general approach used for development of tagger is statistical using Unigram, Bigram, Trigram and HMM Methods. It presents a clear idea about all the algorithms with suitable examples. It also introduces a tag set for Marathi which can be used for tagging Marathi text. In this paper we have shown the development of the tagger as well as compared to check the accuracy of taggers output. The three Marathi POS taggers viz. Unigram, Bigram, Trigram and HMM gives the accuracy of 77.38%, 90.30%, 91.46% and 93.82% respectively.", "title": "" }, { "docid": "a7373d69f5ff9d894a630cc240350818", "text": "The Capability Maturity Model for Software (CMM), developed by the Software Engineering Institute, and the ISO 9000 series of standards, developed by the International Standards Organization, share a common concern with quality and process management. The two are driven by similar concerns and intuitively correlated. The purpose of this report is to contrast the CMM and ISO 9001, showing both their differences and their similarities. The results of the analysis indicate that, although an ISO 9001-compliant organization would not necessarily satisfy all of the level 2 key process areas, it would satisfy most of the level 2 goals and many of the level 3 goals. Because there are practices in the CMM that are not addressed in ISO 9000, it is possible for a level 1 organization to receive ISO 9001 registration; similarly, there are areas addressed by ISO 9001 that are not addressed in the CMM. A level 3 organization would have little difficulty in obtaining ISO 9001 certification, and a level 2 organization would have significant advantages in obtaining certification.", "title": "" }, { "docid": "725e5296eb7d86273a25abcb26f89d84", "text": "Energy expenses are becoming an increasingly important fraction of data center operating costs. At the same time, the energy expense per unit of computation can vary significantly between two different locations. In this paper, we characterize the variation due to fluctuating electricity prices and argue that existing distributed systems should be able to exploit this variation for significant economic gains. Electricity prices exhibit both temporal and geographic variation, due to regional demand differences, transmission inefficiencies, and generation diversity. 
Starting with historical electricity prices, for twenty nine locations in the US, and network traffic data collected on Akamai's CDN, we use simulation to quantify the possible economic gains for a realistic workload. Our results imply that existing systems may be able to save millions of dollars a year in electricity costs, by being cognizant of locational computation cost differences.", "title": "" }, { "docid": "53e7e1053129702b7fc32b32d11656da", "text": "A new and robust constant false alarm rate (CFAR) detector based on truncated statistics (TSs) is proposed for ship detection in single-look intensity and multilook intensity synthetic aperture radar data. The approach is aimed at high-target-density situations such as busy shipping lines and crowded harbors, where the background statistics are estimated from potentially contaminated sea clutter samples. The CFAR detector uses truncation to exclude possible statistically interfering outliers and TSs to model the remaining background samples. The derived truncated statistic CFAR (TS-CFAR) algorithm does not require prior knowledge of the interfering targets. The TS-CFAR detector provides accurate background clutter modeling, a stable false alarm regulation property, and improved detection performance in high-target-density situations.", "title": "" }, { "docid": "82f9792e20e93b4c29bec4abb98db2c9", "text": "As the era of cloud technology arises, more and more people are beginning to migrate their applications and personal data to the cloud. This makes web-based applications an attractive target for cyber-attacks. As a result, web-based applications now need more protections than ever. However, current anomaly-based web attack detection approaches face the difficulties like unsatisfying accuracy and lack of generalization. And the rule-based web attack detection can hardly fight unknown attacks and is relatively easy to bypass. Therefore, we propose a novel deep learning approach to detect anomalous requests. Our approach is to first train two Recurrent Neural Networks (RNNs) with the complicated recurrent unit (LSTM unit or GRU unit) to learn the normal request patterns using only normal requests unsupervisedly and then supervisedly train a neural network classifier which takes the output of RNNs as the input to discriminate between anomalous and normal requests. We tested our model on two datasets and the results showed that our model was competitive with the state-of-the-art. Our approach frees us from feature selection. Also to the best of our knowledge, this is the first time that the RNN is applied on anomaly-based web attack detection systems.", "title": "" }, { "docid": "0a855a4e04d5b2c34d6f03653ad93daf", "text": "The analysis of human activities is one of the most intriguing and important open issues for the automated video surveillance community. Since few years ago, it has been handled following a mere Computer Vision and Pattern Recognition perspective, where an activity corresponded to a temporal sequence of explicit actions (run, stop, sit, walk, etc.). Even under this simplistic assumption, the issue is hard, due to the strong diversity of the people appearance, the number of individuals considered (we may monitor single individuals, groups, crowd), the variability of the environmental conditions (indoor/outdoor, different weather conditions), and the kinds of sensors employed. 
More recently, the automated surveillance of human activities has been faced considering a new perspective, that brings in notions and principles from the social, affective, and psychological literature, and that is called Social Signal Processing (SSP). SSP employs primarily nonverbal cues, most of them are outside of conscious awareness, like face expressions and gazing, body posture and gestures, vocal characteristics, relative distances in the space and the like. This paper is the first review analyzing this new trend, proposing a structured snapshot of the state of the art and envisaging novel challenges in the surveillance domain where the cross-pollination of Computer Science technologies and Sociology theories may offer valid investigation strategies.", "title": "" }, { "docid": "c724ba8456a0e19fc440ff4d7297faee", "text": "Digital camera sensors are sensitive to wavelengths ranging from the ultraviolet (200-400nm) to the near-infrared (700-1100nm) bands. This range is, however, reduced because the aim of photographic cameras is to capture and reproduce the visible spectrum (400-700nm) only. Ultraviolet radiation is filtered out by the optical elements of the camera, while a specifically designed “hot-mirror” is placed in front of the sensor to prevent near-infrared contamination of the visible image. We propose that near-infrared data can actually prove remarkably useful in colour constancy, to estimate the incident illumination as well as providing to detect the location of different illuminants in a multiply lit scene. Looking at common illuminants spectral power distribution show that very strong differences exist between the near-infrared and visible bands, e.g., incandescent illumination peaks in the near-infrared while fluorescent sources are mostly confined to the visible band. We show that illuminants can be estimated by simply looking at the ratios of two images: a standard RGB image and a near-infrared only image. As the differences between illuminants are amplified in the near-infrared, this estimation proves to be more reliable than using only the visible band. Furthermore, in most multiple illumination situations one of the light will be predominantly near-infrared emitting (e.g., flash, incandescent) while the other will be mostly visible emitting (e.g., fluorescent, skylight). Using near-infrared and RGB image ratios allow us to accurately pinpoint the location of diverse illuminant and recover a lighting map.", "title": "" }, { "docid": "ff08d2e0d53f2d9a7d49f0fdd820ec7a", "text": "Milk contains numerous nutrients. The content of n-3 fatty acids, the n-6/n-3 ratio, and short- and medium-chain fatty acids may promote positive health effects. In Western societies, cow’s milk fat is perceived as a risk factor for health because it is a source of a high fraction of saturated fatty acids. Recently, there has been increasing interest in donkey’s milk. In this work, the fat and energetic value and acidic composition of donkey’s milk, with reference to human nutrition, and their variations during lactation, were investigated. We also discuss the implications of the acidic profile of donkey’s milk on human nutrition. Individual milk samples from lactating jennies were collected 15, 30, 45, 60, 90, 120, 150, 180 and 210 days after foaling, for the analysis of fat, proteins and lactose, which was achieved using an infrared milk analyser, and fatty acids composition by gas chromatography. 
The donkey’s milk was characterised by low fat and energetic (1719.2kJ·kg-1) values, a high polyunsaturated fatty acids (PUFA) content of mainly α-linolenic acid (ALA) and linoleic acid (LA), a low n-6 to n-3 FA ratio or LA/ALA ratio, and advantageous values of atherogenic and thrombogenic indices. Among the minor PUFA, docosahesaenoic (DHA), eicosapentanoic (EPA), and arachidonic (AA) acids were present in very small amounts (<1%). In addition, the AA/EPA ratio was low (0.18). The fat and energetic values decreased (P < 0.01) during lactation. The fatty acid patterns were affected by the lactation stage and showed a decrease (P < 0.01) in saturated fatty acids content and an increase (P < 0.01) in the unsaturated fatty acids content. The n-6 to n-3 ratio and the LA/ALA ratio were approximately 2:1, with values <1 during the last period of lactation, suggesting the more optimal use of milk during this period. The high level of unsaturated/saturated fatty acids and PUFA-n3 content and the low n-6/n-3 ratio suggest the use of donkey’s milk as a functional food for human nutrition and its potential utilisation for infant nutrition as well as adult diets, particular for the elderly.", "title": "" }, { "docid": "e4f22595c5b4c6865b2288b4b99373c7", "text": "Received Jan 9, 2018 Revised Mar 2, 2018 Accepted Mar 18, 2018 Today threat landscape evolving at the rapid rate with much organization continuously face complex and malicious cyber threats. Cybercriminal equipped by better skill, organized and well-funded than before. Cyber Threat Intelligence (CTI) has become a hot topic and being under consideration for many organization to counter the rise of cyber-attacks. The aim of this paper is to review the existing research related to CTI. Through the literature review process, the most basic question of what CTI is examines by comparing existing definitions to find common ground or disagreements. It is found that both organization and vendors lack a complete understanding of what information is considered to be CTI, hence more research is needed in order to define CTI. This paper also identified current CTI product and services that include threat intelligence data feeds, threat intelligence standards and tools that being used in CTI. There is an effort by specific industry to shared only relevance threat intelligence data feeds such as Financial Services Information Sharing and Analysis Center (FS-ISAC) that collaborate on critical security threats facing by global financial services sector only. While research and development center such as MITRE working in developing a standards format (e.g.; STIX, TAXII, CybOX) for threat intelligence sharing to solve interoperability issue between threat sharing peers. Based on the review for CTI definition, standards and tools, this paper identifies four research challenges in cyber threat intelligence and analyses contemporary work carried out in each. With an organization flooded with voluminous of threat data, the requirement for qualified threat data analyst to fully utilize CTI and turn the data into actionable intelligence become more important than ever. 
The data quality is not a new issue but with the growing adoption of CTI, further research in this area is needed", "title": "" }, { "docid": "ea7e40725f649b89b7574380d868a0ef", "text": "OBJECTIVES\nWe sought to examine the prevalence of reciprocal (i.e., perpetrated by both partners) and nonreciprocal intimate partner violence and to determine whether reciprocity is related to violence frequency and injury.\n\n\nMETHODS\nWe analyzed data on young US adults aged 18 to 28 years from the 2001 National Longitudinal Study of Adolescent Health, which contained information about partner violence and injury reported by 11,370 respondents on 18761 heterosexual relationships.\n\n\nRESULTS\nAlmost 24% of all relationships had some violence, and half (49.7%) of those were reciprocally violent. In nonreciprocally violent relationships, women were the perpetrators in more than 70% of the cases. Reciprocity was associated with more frequent violence among women (adjusted odds ratio [AOR]=2.3; 95% confidence interval [CI]=1.9, 2.8), but not men (AOR=1.26; 95% CI=0.9, 1.7). Regarding injury, men were more likely to inflict injury than were women (AOR=1.3; 95% CI=1.1, 1.5), and reciprocal intimate partner violence was associated with greater injury than was nonreciprocal intimate partner violence regardless of the gender of the perpetrator (AOR=4.4; 95% CI=3.6, 5.5).\n\n\nCONCLUSIONS\nThe context of the violence (reciprocal vs nonreciprocal) is a strong predictor of reported injury. Prevention approaches that address the escalation of partner violence may be needed to address reciprocal violence.", "title": "" }, { "docid": "b68f0c4aa0b5638a2a426bf9bd97a2ab", "text": "The interrelationship between ionizing radiation and the immune system is complex, multifactorial, and dependent on radiation dose/quality and immune cell type. High-dose radiation usually results in immune suppression. On the contrary, low-dose radiation (LDR) modulates a variety of immune responses that have exhibited the properties of immune hormesis. Although the underlying molecular mechanism is not fully understood yet, LDR has been used clinically for the treatment of autoimmune diseases and malignant tumors. These advancements in preclinical and clinical studies suggest that LDR-mediated immune modulation is a well-orchestrated phenomenon with clinical potential. We summarize recent developments in the understanding of LDR-mediated immune modulation, with an emphasis on its potential clinical applications.", "title": "" }, { "docid": "af254a16b14a3880c9b8fe5b13f1a695", "text": "MOOCs or Massive Online Open Courses based on Open Educational Resources (OER) might be one of the most versatile ways to offer access to quality education, especially for those residing in far or disadvantaged areas. This article analyzes the state of the art on MOOCs, exploring open research questions and setting interesting topics and goals for further research. Finally, it proposes a framework that includes the use of software agents with the aim to improve and personalize management, delivery, efficiency and evaluation of massive online courses on an individual level basis.", "title": "" }, { "docid": "47a0704b6a762ca8fc2561961924da71", "text": "Mobile apps are becoming complex software systems that must be developed quickly and evolve continuously to fit new user requirements and execution contexts. 
However, addressing these constraints may result in poor design choices, known as antipatterns, which may incidentally degrade software quality and performance. Thus, the automatic detection of antipatterns is an important activity that eases both maintenance and evolution tasks. Moreover, it guides developers to refactor their applications and thus, to improve their quality. While antipatterns are well-known in object-oriented applications, their study in mobile applications is still in their infancy. In this paper, we propose a tooled approach, called Paprika, to analyze Android applications and to detect object-oriented and Androidspecific antipatterns from binaries of mobile apps. We validate the effectiveness of our approach on a set of popular mobile apps downloaded from the Google Play Store.", "title": "" }, { "docid": "807bc95371e23037afb24376a0213d43", "text": "The invariant feature detectors are essential components in many computer vision applications, such as tracking, simultaneous localization and mapping (SLAM), image search, machine vision, object recognition, 3D reconstruction from multiple images, augmented reality, stereo vision, and others. However, it is very challenging to detect high quality features while maintaining a low computational cost. Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF) algorithms exhibit great performance under a variety of image transformations, however these methods rely on costly keypoint’s detection. Recently, fast and efficient variants such as Binary Robust Invariant Scalable Keypoints (BRISK) and Oriented Fast and Rotated BRIEF (ORB) were developed to offset the computational burden of these traditional detectors. In this paper, we propose to improve the Good Features to Track (GFTT) detector, coined IGFTT. It approximates or even outperforms the state-of-art detectors with respect to repeatability, distinctiveness, and robustness, yet can be computed much faster than Maximally Stable Extremal Regions (MSER), SIFT, BRISK, KAZE, Accelerated KAZE (AKAZE) and SURF. This is achieved by using the search of maximal-minimum eigenvalue in the image on scale-space and a new orientation extraction method based on eigenvectors. A comprehensive evaluation on standard datasets shows that IGFTT achieves quite a high performance with a computation time comparable to state-of-the-art real-time features. The proposed method shows exceptionally good performance compared to SURF, ORB, GFTT, MSER, Star, SIFT, KAZE, AKAZE and BRISK.", "title": "" } ]
scidocsrr
359458e94bf48571fda4c8a139872ffe
Are deep neural networks the best choice for modeling source code?
[ { "docid": "0834473b45a9b009da458a8d5009cfa0", "text": "Popular open-source software projects receive and review contributions from a diverse array of developers, many of whom have little to no prior involvement with the project. A recent survey reported that reviewers consider conformance to the project's code style to be one of the top priorities when evaluating code contributions on Github. We propose to quantitatively evaluate the existence and effects of this phenomenon. To this aim we use language models, which were shown to accurately capture stylistic aspects of code. We find that rejected changesets do contain code significantly less similar to the project than accepted ones; furthermore, the less similar changesets are more likely to be subject to thorough review. Armed with these results we further investigate whether new contributors learn to conform to the project style and find that experience is positively correlated with conformance to the project's code style.", "title": "" }, { "docid": "6efc8d18baa63945eac0c2394f29da19", "text": "Deep learning subsumes algorithms that automatically learn compositional representations. The ability of these models to generalize well has ushered in tremendous advances in many fields such as natural language processing (NLP). Recent research in the software engineering (SE) community has demonstrated the usefulness of applying NLP techniques to software corpora. Hence, we motivate deep learning for software language modeling, highlighting fundamental differences between state-of-the-practice software language models and connectionist models. Our deep learning models are applicable to source code files (since they only require lexically analyzed source code written in any programming language) and other types of artifacts. We show how a particular deep learning model can remember its state to effectively model sequential data, e.g., streaming software tokens, and the state is shown to be much more expressive than discrete tokens in a prefix. Then we instantiate deep learning models and show that deep learning induces high-quality models compared to n-grams and cache-based n-grams on a corpus of Java projects. We experiment with two of the models' hyperparameters, which govern their capacity and the amount of context they use to inform predictions, before building several committees of software language models to aid generalization. Then we apply the deep learning models to code suggestion and demonstrate their effectiveness at a real SE task compared to state-of-the-practice models. Finally, we propose avenues for future work, where deep learning can be brought to bear to support model-based testing, improve software lexicons, and conceptualize software artifacts. Thus, our work serves as the first step toward deep learning software repositories.", "title": "" } ]
[ { "docid": "c87487289136493c3418fd39bf9fb0b3", "text": "Inductive power transfer (IPT) systems for transmitting tens to hundreds of watts have been reported for almost a decade. Most of the work has concentrated on the optimization of the link efficiency and has not taken into account the efficiency of the driver. Class-E amplifiers have been identified as ideal drivers for IPT applications, but their power handling capability at tens of megahertz has been a crucial limiting factor, since the load and inductor characteristics are set by the requirements of the resonant inductive system. The frequency limitation of the driver restricts the unloaded Q-factor of the coils and thus the link efficiency. With a suitable driver, copper coil unloaded Q factors of over 1000 can be achieved in the low megahertz region, enabling a cost-effective high Q coil assembly. The system presented in this paper alleviates the use of heavy and expensive field-shaping techniques by presenting an efficient IPT system capable of transmitting energy with a dc-to-load efficiency above 77% at 6 MHz across a distance of 30 cm. To the authors knowledge, this is the highest dc-to-load efficiency achieved for an IPT system without introducing restrictive coupling factor enhancement techniques.", "title": "" }, { "docid": "c976fcbe0c095a4b7cfd6e3968964c55", "text": "The introduction of Network Functions Virtualization (NFV) enables service providers to offer software-defined network functions with elasticity and flexibility. Its core technique, dynamic allocation procedure of NFV components onto cloud resources requires rapid response to changes on-demand to remain cost and QoS effective. In this paper, Markov Decision Process (MDP) is applied to the NP-hard problem to dynamically allocate cloud resources for NFV components. In addition, Bayesian learning method is applied to monitor the historical resource usage in order to predict future resource reliability. Experimental results show that our proposed strategy outperforms related approaches.", "title": "" }, { "docid": "59b26acc158c728cf485eae27de665f7", "text": "The ability of the parasite Plasmodium falciparum to evade the immune system and be sequestered within human small blood vessels is responsible for severe forms of malaria. The sequestration depends on the interaction between human endothelial receptors and P. falciparum erythrocyte membrane protein 1 (PfEMP1) exposed on the surface of the infected erythrocytes (IEs). In this study, the transcriptomes of parasite populations enriched for parasites that bind to human P-selectin, E-selectin, CD9 and CD151 receptors were analysed. IT4_var02 and IT4_var07 were specifically expressed in IT4 parasite populations enriched for P-selectin-binding parasites; eight var genes (IT4_var02/07/09/13/17/41/44/64) were specifically expressed in isolate populations enriched for CD9-binding parasites. Interestingly, IT4 parasite populations enriched for E-selectin- and CD151-binding parasites showed identical expression profiles to those of a parasite population exposed to wild-type CHO-745 cells. The same phenomenon was observed for the 3D7 isolate population enriched for binding to P-selectin, E-selectin, CD9 and CD151. This implies that the corresponding ligands for these receptors have either weak binding capacity or do not exist on the IE surface. Conclusively, this work expanded our understanding of P. 
falciparum adhesive interactions, through the identification of var transcripts that are enriched within the selected parasite populations.", "title": "" }, { "docid": "6f7c81d869b4389d5b84e80b4c306381", "text": "Environmental, genetic, and immune factors are at play in the development of the variable clinical manifestations of Graves' ophthalmopathy (GO). Among the environmental contributions, smoking is the risk factor most consistently linked to the development or worsening of the disease. The close temporal relationship between the diagnoses of Graves' hyperthyroidism and GO have long suggested that these 2 autoimmune conditions may share pathophysiologic features. The finding that the thyrotropin receptor (TSHR) is expressed in orbital fibroblasts, the target cells in GO, supported the notion of a common autoantigen. Both cellular and humeral immunity directed against TSHR expressed on orbital fibroblasts likely initiate the disease process. Activation of helper T cells recognizing TSHR peptides and ligation of TSHR by TRAb lead to the secretion of inflammatory cytokines and chemokines, and enhanced hyaluronic acid (HA) production and adipogenesis. The resulting connective tissue remodeling results in varying degrees extraocular muscle enlargement and orbital fat expansion. A subset of orbital fibroblasts express CD34, are bone-marrow derived, and circulate as fibrocytes that infiltrate connective tissues at sites of injury or inflammation. As these express high levels of TSHR and are capable of producing copious cytokines and chemokines, they may represent an orbital fibroblast population that plays a central role in GO development. In addition to TSHR, orbital fibroblasts from patients with GO express high levels of IGF-1R. Recent studies suggest that these receptors engage in cross-talk induced by TSHR ligation to synergistically enhance TSHR signaling, HA production, and the secretion of inflammatory mediators.", "title": "" }, { "docid": "83f18d74ca28f615899f185bc592c9a4", "text": "A simple circuit technique is presented for improving poor midband power supply rejection ratio (PSRR) of single ended amplifiers that use Miller capacitance to set the location of the dominant pole. The principle of the technique is to create an additional parallel signal path from the power supply to the output, which cancels the dominating unity gain signal path through the output stage and Miller capacitor above the dominant pole frequency. Simulation results of a two-stage amplifier show that more than a 20dB improvement in the midband PSRR is obtainable as compared with an amplifier without the suggested circuit", "title": "" }, { "docid": "ba3522be00805402629b4fb4a2c21cc4", "text": "Successful electronic government requires the successful implementation of technology. This book lays out a framework for understanding a system of decision processes that have been shown to be associated with the successful use of technology. Peter Weill and Jeanne Ross are based at the Center for Information Systems Research at MIT’s Sloan School of Management, which has been doing research on the management of information technology since 1974. Understanding how to make decisions about information technology has been a primary focus of the Center for decades. Weill and Ross’ book is based on two primary studies and a number of related projects. The more recent study is a survey of 256 organizations from the Americas, Europe, and Asia Pacific that was led by Peter Weill between 2001 and 2003. 
This work also included 32 case studies. The second study is a set of 40 case studies developed by Jeanne Ross between 1999 and 2003 that focused on the relationship between information technology (IT) architecture and business strategy. This work identified governance issues associated with IT and organizational change efforts. Three other projects undertaken by Weill, Ross, and others between 1998 and 2001 also contributed to the material described in the book. Most of this work is available through the CISR Web site, http://mitsloan.mit.edu/cisr/rmain.php. Taken together, these studies represent a substantial body of work on which to base the development of a frameBOOK REVIEW", "title": "" }, { "docid": "c7dd6824c8de3e988bb7f58141458ef9", "text": "We present a method to classify images into different categories of pornographic content to create a system for filtering pornographic images from network traffic. Although different systems for this application were presented in the past, most of these systems are based on simple skin colour features and have rather poor performance. Recent advances in the image recognition field in particular for the classification of objects have shown that bag-of-visual-words-approaches are a good method for many image classification problems. The system we present here, is based on this approach, uses a task-specific visual vocabulary and is trained and evaluated on an image database of 8500 images from different categories. It is shown that it clearly outperforms earlier systems on this dataset and further evaluation on two novel web-traffic collections shows the good performance of the proposed system.", "title": "" }, { "docid": "099dbf8d4c0b401cd3389583eb4495f3", "text": "This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 437 15-minute video clips, where actions are localized in space and time, resulting in 1.59M action labels with multiple labels per person occurring frequently. The key characteristics of our dataset are: (1) the definition of atomic visual actions, rather than composite actions; (2) precise spatio-temporal annotations with possibly multiple annotations for each person; (3) exhaustive annotation of these atomic actions over 15-minute video clips; (4) people temporally linked across consecutive segments; and (5) using movies to gather a varied set of action representations. This departs from existing datasets for spatio-temporal action recognition, which typically provide sparse annotations for composite actions in short video clips. AVA, with its realistic scene and action complexity, exposes the intrinsic difficulty of action recognition. To benchmark this, we present a novel approach for action localization that builds upon the current state-of-the-art methods, and demonstrates better performance on JHMDB and UCF101-24 categories. While setting a new state of the art on existing datasets, the overall results on AVA are low at 15.8% mAP, underscoring the need for developing new approaches for video understanding.", "title": "" }, { "docid": "a354949d97de673e71510618a604e264", "text": "Fast Magnetic Resonance Imaging (MRI) is highly in demand for many clinical applications in order to reduce the scanning cost and improve the patient experience. This can also potentially increase the image quality by reducing the motion artefacts and contrast washout. 
However, once an image field of view and the desired resolution are chosen, the minimum scanning time is normally determined by the requirement of acquiring sufficient raw data to meet the Nyquist–Shannon sampling criteria. Compressive Sensing (CS) theory has been perfectly matched to the MRI scanning sequence design with much less required raw data for the image reconstruction. Inspired by recent advances in deep learning for solving various inverse problems, we propose a conditional Generative Adversarial Networks-based deep learning framework for de-aliasing and reconstructing MRI images from highly undersampled data with great promise to accelerate the data acquisition process. By coupling an innovative content loss with the adversarial loss our de-aliasing results are more realistic. Furthermore, we propose a refinement learning procedure for training the generator network, which can stabilise the training with fast convergence and less parameter tuning. We demonstrate that the proposed framework outperforms state-of-the-art CS-MRI methods, in terms of reconstruction error and perceptual image quality. In addition, our method can reconstruct each image in 0.22ms–0.37ms, which is promising for real-time applications.", "title": "" }, { "docid": "2990de2e037498b22fb66b3ddc635d49", "text": "Class imbalance is a problem that is common to many application domains. When examples of one class in a training data set vastly outnumber examples of the other class(es), traditional data mining algorithms tend to create suboptimal classification models. Several techniques have been used to alleviate the problem of class imbalance, including data sampling and boosting. In this paper, we present a new hybrid sampling/boosting algorithm, called RUSBoost, for learning from skewed training data. This algorithm provides a simpler and faster alternative to SMOTEBoost, which is another algorithm that combines boosting and data sampling. This paper evaluates the performances of RUSBoost and SMOTEBoost, as well as their individual components (random undersampling, synthetic minority oversampling technique, and AdaBoost). We conduct experiments using 15 data sets from various application domains, four base learners, and four evaluation metrics. RUSBoost and SMOTEBoost both outperform the other procedures, and RUSBoost performs comparably to (and often better than) SMOTEBoost while being a simpler and faster technique. Given these experimental results, we highly recommend RUSBoost as an attractive alternative for improving the classification performance of learners built using imbalanced data.", "title": "" }, { "docid": "e9e37212a793588b0e86075961ed8b9f", "text": "This paper presents a method to use View based approach in Bangla Optical Character Recognition (OCR) system providing reduced data set to the ANN classification engine rather than the traditional OCR methods. It describes how Bangla characters are processed, trained and then recognized with the use of a Backpropagation Artificial neural network. This is the first published account of using a segmentation-free optical character recognition system for Bangla using a view based approach. The methodology presented here assumes that the OCR pre-processor has presented the input images to the classification engine described here. The size and the font face used to render the characters are also significant in both training and classification. 
The images are first converted into greyscale and then to binary images; these images are then scaled to a fit a pre-determined area with a fixed but significant number of pixels. The feature vectors are then formed extracting the characteristics points, which in this case is simply a series of 0s and 1s of fixed length. Finally, a Artificial neural network is chosen for the training and classification process. Although the steps are simple, and the simplest network is chosen for the training and recognition process.", "title": "" }, { "docid": "2473b8e8deb0e4b79ac56d49a3894349", "text": "Generalized hyperpigmentation (GHPT) of the skin may occur as a primary defect of pigmentation or in combination with other variable manifestations. It is visible in a number of diseases such as Addison’s disease (AD), haemochromatosis, porphyria cutanea tarda, scleroderma and neurofibromatosis, but it can also be associated with malignancy and the use of chemotherapeutics or it can be related to acanthosis nigricans in insulin resistance. Skin pigmentation depends on the differences in the amount, type and distribution of melanin produced during melanogenesis in skin melanocytes [1] and remains under the genetic control of more than 120 genes [2]. The most important one is the melanocortin 1 receptor (MC1R) gene [3] (OMIM ID: 155555) located on chromosome 16q24.3 and encoding for a 317-amino-acid G-protein coupled receptor. The MC1R receptor binds α-melanocyte-stimulating hormone (α-MSH) resulting in the activation of adenylyl cyclase, which produces cyclic adenosine monophosphate (cAMP). The increased cAMP concentration activates various intracellular molecular pathways, promotes melanin synthesis and increases the eumelanin to pheomelanin ratio [4]. MC1R receptor also binds ACTH, in this way contributing to the GHPT in AD. Upregulation of MC1R gene expression by UV radiation and α-MSH leads to enhancement of melanogenesis and melanin synthesis induction. Loss-of-function mutations in the MC1R gene are associated with fair skin, poor tanning, propensity to freckles and increased skin cancer risk due to a decrease in eumelanin synthesis and subsequently impaired protection against UV radiation [5-7]. To our knowledge, to date, no data are available considering gain-of-function mutations in the human MC1R gene which could lead to a constant activation of the MC1R receptor and subsequently cause GHPT. We present the case of a patient with a primary type of progressive GHPT in whom AD was suspected. An 11-year-old prepubertal girl with GHPT (Figures 1A-C) was born at term with normal birth weight and height and was first brought to our hospital at the age of 3 years with a suspicion of AD. She had a diffuse grey-brownish discoloration of the skin present since birth. Over the first few years of life she developed symmetrical hyperpigmentation most pronounced on her trunk and neck. Later, hyperpigmentation began to affect her hands and feet, and finally the whole body – sparing only the cheeks and finger tips. Her skin was very dry and atopic, and scars were not hyperCorresponding autor: Assoc. Prof. 
Marek Niedziela MD, PhD Department of Paediatric Endocrinology and Rheumatology Poznan University of Medical Sciences 27/33 Szpitalna St 60-572 Poznan, Poland Phone: +48 61 849 14 81 Fax: 48 61 848 02 91 E-mail: mniedzie@ump.edu.pl Letter to the Editor", "title": "" }, { "docid": "39539ad490065e2a81b6c07dd11643e5", "text": "Stock prices are formed based on short and/or long-term commercial and trading activities that reflect different frequencies of trading patterns. However, these patterns are often elusive as they are affected by many uncertain political-economic factors in the real world, such as corporate performances, government policies, and even breaking news circulated across markets. Moreover, time series of stock prices are non-stationary and non-linear, making the prediction of future price trends much challenging. To address them, we propose a novel State Frequency Memory (SFM) recurrent network to capture the multi-frequency trading patterns from past market data to make long and short term predictions over time. Inspired by Discrete Fourier Transform (DFT), the SFM decomposes the hidden states of memory cells into multiple frequency components, each of which models a particular frequency of latent trading pattern underlying the fluctuation of stock price. Then the future stock prices are predicted as a nonlinear mapping of the combination of these components in an Inverse Fourier Transform (IFT) fashion. Modeling multi-frequency trading patterns can enable more accurate predictions for various time ranges: while a short-term prediction usually depends on high frequency trading patterns, a long-term prediction should focus more on the low frequency trading patterns targeting at long-term return. Unfortunately, no existing model explicitly distinguishes between various frequencies of trading patterns to make dynamic predictions in literature. The experiments on the real market data also demonstrate more competitive performance by the SFM as compared with the state-of-the-art methods.", "title": "" }, { "docid": "e404699c5b86d3a3a47a1f3d745eecc1", "text": "We apply Artificial Immune Systems(AIS) [4] for credit card fraud detection and we compare it to other methods such as Neural Nets(NN) [8] and Bayesian Nets(BN) [2], Naive Bayes(NB) and Decision Trees(DT) [13]. Exhaustive search and Genetic Algorithm(GA) [7] are used to select optimized parameters sets, which minimizes the fraud cost for a credit card database provided by a Brazilian card issuer. The specifics of the fraud database are taken into account, such as skewness of data and different costs associated with false positives and negatives. Tests are done with holdout sample sets, and all executions are run using Weka [18], a publicly available software. Our results are consistent with the early result of Maes in [12] which concludes that BN is better than NN, and this occurred in all our evaluated tests. Although NN is widely used in the market today, the evaluated implementation of NN is among the worse methods for our database. In spite of a poor behavior if used with the default parameters set, AIS has the best performance when parameters optimized by GA are used.", "title": "" }, { "docid": "4d5bba781cac8b78040e7c3baeed4f3a", "text": "Area efficient architecture is today's major concern in the field of VLSI, Digital signal processing circuits, cryptographic algorithms, wireless communications and Internet of Things (IOT). Majority of the architectures use multiplication. 
Realization of multiplication by using repetitive addition and shift and add methods consumes more area, power and delay. Vedic is one of the efficient multipliers. Design of Vedic multiplier using different sutras reduces area and power. From the structure of Vedic multiplier, it is clearly observed that there is scope to design an efficient architecture. In this research, Vedic multiplier is designed using modified full adder which consumes less number of LUT's, slices and delay when compared to normal conventional Vedic multiplier. Simulation and synthesis are carried on XILINX ISE 12.2 software. FPGA results of the proposed multiplier show that number of LUT's is less by 13.8% in the modified Vedic Multiplier (4×4) and less by 7.5% in modified Vedic Multiplier(8×8). Delay is less by 10% in modified Vedic Multiplier (4×4) and 7.2 % in modified Vedic Multiplier (8×8).", "title": "" }, { "docid": "0b51b727f39a9c8ea6580794c6f1e2bb", "text": "Many researchers proposed different methodologies for the text skew estimation in binary images/gray scale images. They have been used widely for the skew identification of the printed text. There exist so many ways algorithms for detecting and correcting a slant or skew in a given document or image. Some of them provide better accuracy but are slow in speed, others have angle limitation drawback. So a new technique for skew detection in the paper, will reduce the time and cost. Keywords— Document image processing, Skew detection, Nearest-neighbour approach, Moments, Hough transformation.", "title": "" }, { "docid": "8291f269eacefae504ec5e981845b456", "text": "In this paper we present a new method for voice disorders classification based on multilayer neural network. The processing algorithm is based on a hybrid technique which uses the wavelets energy coefficients as input of the multilayer neural network. The training step uses a speech database of several pathological and normal voices collected from the national hospital “Rabta Tunis” and was conducted in a supervised mode for discrimination of normal and pathology voices and in a second step classification between neural and vocal pathologies (Parkinson, Alzheimer, laryngeal, dyslexia...). Several simulation results will be presented in function of the disease and will be compared with the clinical diagnosis in order to have an objective evaluation of the developed tool.", "title": "" }, { "docid": "0b33249df17737a826dcaa197adccb74", "text": "In the competitive electricity structure, demand response programs enable customers to react dynamically to changes in electricity prices. The implementation of such programs may reduce energy costs and increase reliability. To fully harness such benefits, existing load controllers and appliances need around-the-clock price information. Advances in the development and deployment of advanced meter infrastructures (AMIs), building automation systems (BASs), and various dedicated embedded control systems provide the capability to effectively address this requirement. In this paper we introduce a meter gateway architecture (MGA) to serve as a foundation for integrated control of loads by energy aggregators, facility hubs, and intelligent appliances. 
We discuss the requirements that motivate the architecture, describe its design, and illustrate its application to a small system with an intelligent appliance and a legacy appliance using a prototype implementation of an intelligent hub for the MGA and ZigBee wireless communications.", "title": "" }, { "docid": "f848c7e4214bb1ee315dc87b2ee9df63", "text": "Engineers1, artists and craftsmen have long-established models to try out projects before running them. This makes the ideas and solutions proposed in the models clear enough to be perceived or understood. Similarly, the use of business process models can contribute to the specification of software requirements, facilitating the understanding and communication of the business from the point of view of software designers, as well as the managers. However, there is little research that examines whether BPMN business process models are more effective in their understanding than other representations, such as textual descriptions. This paper presents a research carried out with people from the management field, some who are familiar with and some who do not know BPMN, to verify if there are significant differences in terms of understanding the BPMN models in comparison to a textual representation. The results show that the lack of understanding of the BPMN models can reflect in a loss of communication between the managers, the ones who understand the business, and the developers, professionals of the area of computer science who consolidate the processes in software requirements.", "title": "" }, { "docid": "319ba1d449d2b65c5c58b5cc0fdbed67", "text": "This paper introduces a new technology and tools from the field of text-based information retrieval. The authors have developed – a fingerprint-based method for a highly efficient near similarity search, and – an application of this method to identify plagiarized passages in large document collections. The contribution of our work is twofold. Firstly, it is a search technology that enables a new quality for the comparative analysis of complex and large scientific texts. Secondly, this technology gives rise to a new class of tools for plagiarism analysis, since the comparison of entire books becomes computationally feasible. The paper is organized as follows. Section 1 gives an introduction to plagiarism delicts and related detection methods, Section 2 outlines the method of fuzzy-fingerprints as a means for near similarity search, and Section 3 shows our methods in action: It gives examples for near similarity search as well as plagiarism detection and discusses results from a comprehensive performance analyses. 1 Plagiarism Analysis Plagiarism is the act of claiming to be the author of material that someone else actually wrote (Encyclopædia Britannica 2005), and, with the ubiquitousness", "title": "" } ]
scidocsrr
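The fingerprint-based near-similarity search described in the plagiarism-analysis passage above can be illustrated with a short sketch. This is a generic word n-gram fingerprinting scheme compared with Jaccard overlap, not the fuzzy-fingerprint construction of that paper; the chunk length, hash function, bucket count and threshold are all assumptions made for illustration.

```python
import hashlib
from itertools import islice

def fingerprint(text, n=4, num_buckets=64):
    """Hash word n-grams of a document into a compact set of bucket ids."""
    words = text.lower().split()
    grams = zip(*(islice(words, i, None) for i in range(n)))
    return {int(hashlib.md5(" ".join(g).encode()).hexdigest(), 16) % num_buckets
            for g in grams}

def similarity(doc_a, doc_b, **kw):
    """Jaccard overlap of two fingerprint sets as a near-similarity score."""
    fa, fb = fingerprint(doc_a, **kw), fingerprint(doc_b, **kw)
    return len(fa & fb) / max(1, len(fa | fb))

# flag suspicious pairs in a tiny collection (threshold chosen arbitrarily)
docs = {"d1": "the quick brown fox jumps over the lazy dog near the river bank",
        "d2": "a quick brown fox jumps over the lazy dog near a river bank today",
        "d3": "plagiarism analysis compares entire books against large document collections"}
suspicious = [(a, b, round(similarity(docs[a], docs[b]), 2))
              for a in docs for b in docs
              if a < b and similarity(docs[a], docs[b]) > 0.5]
```

A production system would swap the MD5-over-buckets step for purpose-built fuzzy fingerprints and an index over them, so that comparing one document against a large collection stays sub-linear.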
89ddf0267bababbcb596bafe7d4d8f64
Spatiotemporal urbanization processes in the megacity of Mumbai, India: A Markov chains-cellular automata urban growth model
[ { "docid": "09fa7cf836fb2a7559667c6061533177", "text": "This research analyses the suburban expansion in the metropolitan area of Tehran, Iran. A hybrid model consisting of logistic regression model, Markov chain (MC), and cellular automata (CA) was designed to improve the performance of the standard logistic regression model. Environmental and socio-economic variables dealing with urban sprawl were operationalised to create a probability surface of spatiotemporal states of built-up land use for the years 2006, 2016, and 2026. For validation, the model was evaluated by means of relative operating characteristic values for different sets of variables. The approach was arkov chain ellular automata ehran calibrated for 2006 by cross comparing of actual and simulated land use maps. The achieved outcomes represent a match of 89% between simulated and actual maps of 2006, which was satisfactory to approve the calibration process. Thereafter, the calibrated hybrid approach was implemented for forthcoming years. Finally, future land use maps for 2016 and 2026 were predicted by means of this hybrid approach. The simulated maps illustrate a new wave of suburban development in the vicinity of Tehran at the tropo western border of the me", "title": "" }, { "docid": "23c2ea4422ec6057beb8fa0be12e57b3", "text": "This study applied logistic regression to model urban growth in the Atlanta Metropolitan Area of Georgia in a GIS environment and to discover the relationship between urban growth and the driving forces. Historical land use/cover data of Atlanta were extracted from the 1987 and 1997 Landsat TM images. Multi-resolution calibration of a series of logistic regression models was conducted from 50 m to 300 m at intervals of 25 m. A fractal analysis pointed to 225 m as the optimal resolution of modeling. The following two groups of factors were found to affect urban growth in different degrees as indicated by odd ratios: (1) population density, distances to nearest urban clusters, activity centers and roads, and high/low density urban uses (all with odds ratios < 1); and (2) distance to the CBD, number of urban cells within a 7 · 7 cell window, bare land, crop/grass land, forest, and UTM northing coordinate (all with odds ratios > 1). A map of urban growth probability was calculated and used to predict future urban patterns. Relative operating characteristic (ROC) value of 0.85 indicates that the probability map is valid. It was concluded that despite logistic regression’s lack of temporal dynamics, it was spatially explicit and suitable for multi-scale analysis, and most importantly, allowed much deeper understanding of the forces driving the growth and the formation of the urban spatial pattern. 2006 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "6eec78a8b8c58f2c9e28dfcb952a0e8f", "text": "Typical quadrotor aerial robots used in research weigh less than 3 kg and carry payloads measured in hundreds of grams. Several obstacles in design and control must be overcome to cater for expected industry demands that push the boundaries of existing quadrotor performance. The X-4 Flyer, a 4 kg quadrotor with a 1 kg payload, is intended to be prototypical of useful commercial quadrotors. The custom-built craft uses tuned plant dynamics with an onboard embedded attitude controller to stabilise flight. Independent linear SISO controllers were designed to regulate flyer attitude. The performance of the system is demonstrated in indoor and outdoor flight.", "title": "" }, { "docid": "3e442c589eb4b2501b6ed2a8f1774e73", "text": "Today, sensors are increasingly used for data collection. In the medical domain, for example, vital signs (e.g., pulse or oxygen saturation) of patients can be measured with sensors and used for further processing. In this paper, different types of applications will be discussed whether sensors might be used in the context of these applications and their suitability for applying external sensors to them. Furthermore, a system architecture for adding sensor technology to respective applications is presented. For this purpose, a real-world business application scenario in the field of well-being and fitness is presented. In particular, we integrated two different sensors in our fitness application. We report on the lessons learned from the implementation and use of this application, e.g., in respect to connection and data structure. They mainly deal with problems relating to the connection and communication between the smart mobile device and the external sensors, as well as the selection of the appropriate type of application. Finally, a robust sensor framework, arising from this fitness application is presented. This framework provides basic features for connecting sensors. Particularly, in the medical domain, it is crucial to provide an easy to use toolset to relieve medical staff.", "title": "" }, { "docid": "96344ccc2aac1a7e7fbab96c1355fa10", "text": "A highly sensitive field-effect sensor immune to environmental potential fluctuation is proposed. The sensor circuit consists of two sensors each with a charge sensing field effect transistor (FET) and an extended sensing gate (SG). By enlarging the sensing gate of an extended gate ISFET, a remarkable sensitivity of 130mV/pH is achieved, exceeding the conventional Nernst limit of 59mV/pH. The proposed differential sensing circuit consists of a pair of matching n-channel and p-channel ion sensitive sensors connected in parallel and biased at a matched transconductance bias point. Potential fluctuations in the electrolyte appear as common mode signal to the differential pair and are cancelled by the matched transistors. This novel differential measurement technique eliminates the need for a true reference electrode such as the bulky Ag/AgCl reference electrode and enables the use of the sensor for autonomous and implantable applications.", "title": "" }, { "docid": "8ad32ea5af499e42f5c49b7736ff23c2", "text": "Skin cancer is a major public health problem, as is the most common type of cancer and represents more than half of cancer diagnoses worldwide. Early detection influences the outcome of the disease and motivates our work. 
We obtain the state of the art results for the ISBI 2016 Melanoma Classification Challenge (named Skin Lesion Analysis towards Melanoma Detection) facing the peculiarities of dealing with such a small, unbalanced, biological database. For that, we explore committees of Convolutional Neural Networks trained over the ISBI challenge training dataset artificially augmented by both classical image processing transforms and image warping guided by specialist knowledge about the lesion axis and improve the final classifier invariance to common melanoma variations.", "title": "" }, { "docid": "f1bbfe878069fcea9b7c7c24ee2e5b2d", "text": "We show that the MEMS gyroscopes found on modern smart phones are sufficiently sensitive to measure acoustic signals in the vicinity of the phone. The resulting signals contain only very low-frequency information (<200Hz). Nevertheless we show, using signal processing and machine learning, that this information is sufficient to identify speaker information and even parse speech. Since iOS and Android require no special permissions to access the gyro, our results show that apps and active web content that cannot access the microphone can nevertheless eavesdrop on speech in the vicinity of the phone.", "title": "" }, { "docid": "c1eb39f2c823a9c40041268b78a75e86", "text": "Distamycin binds the minor groove of duplex DNA at AT-rich regions and has been a valuable probe of protein interactions with double-stranded DNA. We ®nd that distamycin can also inhibit protein interactions with G-quadruplex (G4) DNA, a stable fourstranded structure in which the repeating unit is a G-quartet. Using NMR, we show that distamycin binds speci®cally to G4 DNA, stacking on the terminal G-quartets and contacting the ̄anking bases. These results demonstrate the utility of distamycin as a probe of G4 DNA±protein interactions and show that there are (at least) two distinct modes of protein±G4 DNA recognition which can be distinguished by sensitivity to distamycin.", "title": "" }, { "docid": "0687e28b42ca1acff99dc4917b920127", "text": "Advanced Synchronization Facility (ASF) is an AMD64 hardware extension for lock-free data structures and transactional memory. It provides a speculative region that atomically executes speculative accesses in the region. Five new instructions are added to demarcate the region, use speculative accesses selectively, and control the speculative hardware context. Programmers can use speculative regions to build flexible multi-word atomic primitives with no additional software support by relying on the minimum guarantee of available ASF hardware resources for lock-free programming. Transactional programs with high-level TM language constructs can either be compiled directly to the ASF code or be linked to software TM systems that use ASF to accelerate transactional execution. In this paper we develop an out-of-order hardware design to implement ASF on a future AMD processor and evaluate it with an in-house simulator. The experimental results show that the combined use of the L1 cache and the LS unit is very helpful for the performance robustness of ASF-based lock free data structures, and that the selective use of speculative accesses enables transactional programs to scale with limited ASF hardware resources.", "title": "" }, { "docid": "36b609f1c748154f0f6193c6578acec9", "text": "Effective supply chain design calls for robust analytical models and design tools. 
Previous works in this area are mostly Operation Research oriented without considering manufacturing aspects. Recently, researchers have begun to realize that the decision and integration effort in supply chain design should be driven by the manufactured product, specifically, product characteristics and product life cycle. In addition, decision-making processes should be guided by a comprehensive set of performance metrics. In this paper, we relate product characteristics to supply chain strategy and adopt supply chain operations reference (SCOR) model level I performance metrics as the decision criteria. An integrated analytic hierarchy process (AHP) and preemptive goal programming (PGP) based multi-criteria decision-making methodology is then developed to take into account both qualitative and quantitative factors in supplier selection. While the AHP process matches product characteristics with supplier characteristics (using supplier ratings derived from pairwise comparisons) to qualitatively determine supply chain strategy, PGP mathematically determines the optimal order quantity from the chosen suppliers. Since PGP uses AHP ratings as input, the variations of pairwise comparisons in AHP will influence the final order quantity. Therefore, users of this methodology should put greater emphasis on the AHP progress to ensure the accuracy of supplier ratings. r 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "ff163abbdfa5db81f54fc42aa52ab0c3", "text": "Drawing on the self-system model, this study conceptualized school engagement as a multidimensional construct, including behavioral, emotional, and cognitive engagement, and examined whether changes in the three types of school engagement related to changes in problem behaviors from 7th through 11th grades (approximately ages 12-17). In addition, a transactional model of reciprocal relations between school engagement and problem behaviors was tested to predict school dropout. Data were collected on 1,272 youth from an ethnically and economically diverse county (58% African American, 36% European American; 51% females). Results indicated that adolescents who had declines in behavioral and emotional engagement with school tended to have increased delinquency and substance use over time. There were bidirectional associations between behavioral and emotional engagement in school and youth problem behaviors over time. Finally, lower behavioral and emotional engagement and greater problem behaviors predicted greater likelihood of dropping out of school.", "title": "" }, { "docid": "33dedeabc83271223a1b3fb50bfb1824", "text": "Quantum computers can be used to address electronic-structure problems and problems in materials science and condensed matter physics that can be formulated as interacting fermionic problems, problems which stretch the limits of existing high-performance computers. Finding exact solutions to such problems numerically has a computational cost that scales exponentially with the size of the system, and Monte Carlo methods are unsuitable owing to the fermionic sign problem. These limitations of classical computational methods have made solving even few-atom electronic-structure problems interesting for implementation using medium-sized quantum computers. Yet experimental implementations have so far been restricted to molecules involving only hydrogen and helium. 
Here we demonstrate the experimental optimization of Hamiltonian problems with up to six qubits and more than one hundred Pauli terms, determining the ground-state energy for molecules of increasing size, up to BeH2. We achieve this result by using a variational quantum eigenvalue solver (eigensolver) with efficiently prepared trial states that are tailored specifically to the interactions that are available in our quantum processor, combined with a compact encoding of fermionic Hamiltonians and a robust stochastic optimization routine. We demonstrate the flexibility of our approach by applying it to a problem of quantum magnetism, an antiferromagnetic Heisenberg model in an external magnetic field. In all cases, we find agreement between our experiments and numerical simulations using a model of the device with noise. Our results help to elucidate the requirements for scaling the method to larger systems and for bridging the gap between key problems in high-performance computing and their implementation on quantum hardware.", "title": "" }, { "docid": "bfa5d103730825ee82f7efdc8c135004", "text": "The 'default network' is defined as a set of areas, encompassing posterior-cingulate/precuneus, anterior cingulate/mesiofrontal cortex and temporo-parietal junctions, that show more activity at rest than during attention-demanding tasks. Recent studies have shown that it is possible to reliably identify this network in the absence of any task, by resting state functional magnetic resonance imaging connectivity analyses in healthy volunteers. However, the functional significance of these spontaneous brain activity fluctuations remains unclear. The aim of this study was to test if the integrity of this resting-state connectivity pattern in the default network would differ in different pathological alterations of consciousness. Fourteen non-communicative brain-damaged patients and 14 healthy controls participated in the study. Connectivity was investigated using probabilistic independent component analysis, and an automated template-matching component selection approach. Connectivity in all default network areas was found to be negatively correlated with the degree of clinical consciousness impairment, ranging from healthy controls and locked-in syndrome to minimally conscious, vegetative then coma patients. Furthermore, precuneus connectivity was found to be significantly stronger in minimally conscious patients as compared with unconscious patients. Locked-in syndrome patient's default network connectivity was not significantly different from controls. Our results show that default network connectivity is decreased in severely brain-damaged patients, in proportion to their degree of consciousness impairment. Future prospective studies in a larger patient population are needed in order to evaluate the prognostic value of the presented methodology.", "title": "" }, { "docid": "280c39aea4584e6f722607df68ee28dc", "text": "Statistical parametric speech synthesis (SPSS) using deep neural networks (DNNs) has shown its potential to produce naturally-sounding synthesized speech. However, there are limitations in the current implementation of DNN-based acoustic modeling for speech synthesis, such as the unimodal nature of its objective function and its lack of ability to predict variances. To address these limitations, this paper investigates the use of a mixture density output layer. 
It can estimate full probability density functions over real-valued output features conditioned on the corresponding input features. Experimental results in objective and subjective evaluations show that the use of the mixture density output layer improves the prediction accuracy of acoustic features and the naturalness of the synthesized speech.", "title": "" }, { "docid": "1c8ae6e8d46e95897a9bd76e09fd28aa", "text": "Skin diseases are very common in our daily life. Due to the similar appearance of skin diseases, automatic classification through lesion images is quite a challenging task. In this paper, a novel multi-classification method based on convolutional neural network (CNN) is proposed for dermoscopy images. A CNN network with nested residual structure is designed first, which can learn more information than the original residual structure. Then, the designed network are trained through transfer learning. With the trained network, 6 kinds of lesion diseases are classified, including nevus, seborrheic keratosis, psoriasis, seborrheic dermatitis, eczema and basal cell carcinoma. The experiments are conducted on six-classification and two-classification tasks, and with the accuracies of 65.8% and 90% respectively, our method greatly outperforms other 4 state-of-the-art networks and the average of 149 professional dermatologists.", "title": "" }, { "docid": "262f1e965b311bf866ef5b924b6085a7", "text": "By considering the amount of uncertainty perceived and the willingness to bear uncertainty concomitantly, we provide a more complete conceptual model of entrepreneurial action that allows for examination of entrepreneurial action at the individual level of analysis while remaining consistent with a rich legacy of system-level theories of the entrepreneur. Our model not only exposes limitations of existing theories of entrepreneurial action but also contributes to a deeper understanding of important conceptual issues, such as the nature of opportunity and the potential for philosophical reconciliation among entrepreneurship scholars.", "title": "" }, { "docid": "7010278254ee0fadb7b59cb05169578a", "text": "INTRODUCTION\nLumbar disc herniation (LDH) is a common condition in adults and can impose a heavy burden on both the individual and society. It is defined as displacement of disc components beyond the intervertebral disc space. Various conservative treatments have been recommended for the treatment of LDH and physical therapy plays a major role in the management of patients. Therapeutic exercise is effective for relieving pain and improving function in individuals with symptomatic LDH. The aim of this systematic review is to evaluate the effectiveness of motor control exercise (MCE) for symptomatic LDH.\n\n\nMETHODS AND ANALYSIS\nWe will include all clinical trial studies with a concurrent control group which evaluated the effect of MCEs in patients with symptomatic LDH. We will search PubMed, SCOPUS, PEDro, SPORTDiscus, CINAHL, CENTRAL and EMBASE with no restriction of language. Primary outcomes of this systematic review are pain intensity and functional disability and secondary outcomes are functional tests, muscle thickness, quality of life, return to work, muscle endurance and adverse events. Study selection and data extraction will be performed by two independent reviewers. The assessment of risk of bias will be implemented using the PEDro scale. Publication bias will be assessed by funnel plots, Begg's and Egger's tests. 
Heterogeneity will be evaluated using the I2 statistic and the χ2 test. In addition, subgroup analyses will be conducted for population and the secondary outcomes. All meta-analyses will be performed using Stata V.12 software.\n\n\nETHICS AND DISSEMINATION\nNo ethical concerns are predicted. The systematic review findings will be published in a peer-reviewed journal and will also be presented at national/international academic and clinical conferences.\n\n\nTRIAL REGISTRATION NUMBER\nCRD42016038166.", "title": "" }, { "docid": "4408d5fa31a64d54fbe4b4d70b18182b", "text": "Using microarray analysis, this study showed up-regulation of toll-like receptors 1, 2, 4, 7, 8, NF-κB, TNF, p38-MAPK, and MHC molecules in human peripheral blood mononuclear cells following infection with Plasmodium falciparum. This analysis reports herein further studies based on time-course microarray analysis with focus on malaria-induced host immune response. The results show that in early malaria, selected immune response-related genes were up-regulated including α β and γ interferon-related genes, as well as genes of IL-15, CD36, chemokines (CXCL10, CCL2, S100A8/9, CXCL9, and CXCL11), TRAIL and IgG Fc receptors. During acute febrile malaria, up-regulated genes included α β and γ interferon-related genes, IL-8, IL-1b IL-10 downstream genes, TGFB1, oncostatin-M, chemokines, IgG Fc receptors, ADCC signalling, complement-related genes, granzymes, NK cell killer/inhibitory receptors and Fas antigen. During recovery, genes for NK receptorsand granzymes/perforin were up-regulated. When viewed in terms of immune response type, malaria infection appeared to induce a mixed TH1 response, in which α and β interferon-driven responses appear to predominate over the more classic IL-12 driven pathway. In addition, TH17 pathway also appears to play a significant role in the immune response to P. falciparum. Gene markers of TH17 (neutrophil-related genes, TGFB1 and IL-6 family (oncostatin-M)) and THαβ (IFN-γ and NK cytotoxicity and ADCC gene) immune response were up-regulated. Initiation of THαβ immune response was associated with an IFN-αβ response, which ultimately resulted in moderate-mild IFN-γ achieved via a pathway different from the more classic IL-12 TH1 pattern. Based on these observations, this study speculates that in P. falciparum infection, THαβ/TH17 immune response may predominate over ideal TH1 response.", "title": "" }, { "docid": "c9e47fe895b3f3f1f65a66c05ff95224", "text": "Facebook has increasingly incorporated graphical means of communication such as emoticons, emoji, stickers, GIFs, images, and videos (‘graphicons’) into comment threads. Adapting methods of computer‐ mediated discourse analysis, we analyze the frequency and pragmatic functions of each graphicon type in threads sampled from public graphicon-focused Facebook groups. Six main functions emerged from the data: mention, reaction, tone modification, riffing, action, and narrative sequence. Reaction was most common, and emoji expressed the widest array of functions. We propose structural, social, and technical explanations for variation in graphicon use, and suggest improvements for the design of conversational graphical elements in social media systems.", "title": "" }, { "docid": "b342443400c85277d4f980a39198ded0", "text": "We present several optimizations to SPHINCS, a stateless hash-based signature scheme proposed by Bernstein et al. 
in 2015: PORS, a more secure variant of the HORS few-time signature scheme used in SPHINCS; secret key caching, to speed-up signing and reduce signature size; batch signing, to amortize signature time and reduce signature size when signing multiple messages at once; mask-less constructions to reduce the key size and simplify the scheme; and Octopus, a technique to eliminate redundancies from authentication paths in Merkle trees. Based on a refined analysis of the subset resilience problem, we show that SPHINCS’ parameters can be modified to reduce the signature size while retaining a similar security level and computation time. We then propose Gravity-SPHINCS, our variant of SPHINCS embodying the aforementioned tricks. Gravity-SPHINCS has shorter keys (32 and 64 bytes instead of ≈ 1 KB), shorter signatures (≈ 30 KB instead of 41 KB), and faster signing and verification for the same security level as SPHINCS.", "title": "" }, { "docid": "67f73a57040f6d2a5ea79d7ad2693f2f", "text": "This protocol details a method to immunostain organotypic slice cultures from mouse hippocampus. The cultures are based on the interface method, which does not require special equipment, is easy to execute and yields slice cultures that can be imaged repeatedly, from the time of isolation at postnatal day 6–9 up to 6 months in vitro. The preserved tissue architecture facilitates the analysis of defined hippocampal synapses, cells and entire projections. Time-lapse imaging is based on transgenes expressed in the mice or on constructs introduced through transfection or viral vectors; it can reveal processes that develop over periods ranging from seconds to months. Subsequent to imaging, the slices can be processed for immunocytochemistry to collect further information about the imaged structures. This protocol can be completed in 3 d.", "title": "" }, { "docid": "2eebc7477084b471f9e9872ba8751359", "text": "Despite significant progress in the development of human action detection datasets and algorithms, no current dataset is representative of real-world aerial view scenarios. We present Okutama-Action, a new video dataset for aerial view concurrent human action detection. It consists of 43 minute-long fully-annotated sequences with 12 action classes. Okutama-Action features many challenges missing in current datasets, including dynamic transition of actions, significant changes in scale and aspect ratio, abrupt camera movement, as well as multi-labeled actors. As a result, our dataset is more challenging than existing ones, and will help push the field forward to enable real-world applications.", "title": "" } ]
scidocsrr
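The Markov chain–cellular automata modelling named in the query above, and calibrated with logistic regression in the Tehran and Atlanta passages, reduces to two steps: a transition matrix estimated from two historical land-cover maps fixes how much land converts, and a neighbourhood rule decides where the converted cells go. The sketch below is a minimal illustration of that loop; the plain urban-neighbour count stands in for the calibrated suitability surface those studies use, and the binary class coding is an assumption.

```python
import numpy as np

def transition_matrix(lc_t1, lc_t2, n_classes=2):
    """Row-normalised Markov transition matrix from two co-registered land-cover maps."""
    m = np.zeros((n_classes, n_classes))
    for a, b in zip(lc_t1.ravel(), lc_t2.ravel()):
        m[a, b] += 1
    rows = m.sum(axis=1, keepdims=True)
    return np.divide(m, rows, out=np.zeros_like(m), where=rows > 0)

def urban_neighbours(lc, urban=1):
    """Count of urban cells in each cell's 3x3 neighbourhood (zero-padded edges)."""
    p = np.pad((lc == urban).astype(int), 1)
    return sum(np.roll(np.roll(p, dr, 0), dc, 1)[1:-1, 1:-1]
               for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0))

def project_one_step(lc, trans, urban=1, non_urban=0):
    """Allocate the Markov-expected number of new urban cells to the
    non-urban cells with the most urban neighbours (the CA allocation rule)."""
    n_new = int(round(trans[non_urban, urban] * np.sum(lc == non_urban)))
    score = np.where(lc == non_urban, urban_neighbours(lc, urban), -1).ravel()
    out = lc.ravel().copy()
    out[np.argsort(score)[::-1][:n_new]] = urban
    return out.reshape(lc.shape)

# toy maps: 0 = non-urban, 1 = urban (two observed dates, one projected step)
t1 = np.array([[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]])
t2 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 1, 1, 1], [0, 0, 0, 0]])
t3 = project_one_step(t2, transition_matrix(t1, t2))
```

Replacing the neighbour count with a probability surface fitted by logistic regression on drivers such as population density, distance to roads and distance to existing urban clusters recovers the hybrid design described in those passages.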
c76af97834c292d78b4827a3b4889320
Fuzzy Krill Herd (FKH): An improved optimization algorithm
[ { "docid": "09f19a5e4751dc3ee4aa38817aafd3cf", "text": "Article history: Received 10 September 2012 Received in revised form 12 March 2013 Accepted 24 March 2013 Available online 23 April 2013", "title": "" } ]
[ { "docid": "43b6ec88be3f245047c344bf4366f4b4", "text": "In this item, we approached one of the new theories of the economic development, the theory of competitive advantage. The theory of competitive advantage was created by Michael E. Porter, starting from the actual economic reality which could no longer be explained on the basis of the model of comparative advantages, elaborated by David Ricardo. In order to conceive this theory, Porter analyzed four years, ten countries with important share in international commerce (Denmark, Germany, Italy, Japan, South Korea, Singapore, Sweden, Switzerland, Great Britain and USA), establishing the system of the determinants which determine the obtaining of the competitive advantage. Starting from describing the system of the determinants, the so-called “diamond”, we analyzed detailed these determinants of the diamond: the factorial ones, the demand ones, upstream and downstream industries and the domestic competition, and also the chance and the governmental policy. After analyzing and classifying the structure of these determinants, we approach their interaction the dynamics of the diamond by identifying the stages of the development which a country goes through and the features of each stage. In the last part of the article, we enumerated the causes that might lead to loosing the competitive advantage and the position as a leader on the market, and a few critics brought to this new theory. 1. The presentation of the theory of Porter The theory of the competitive advantage starts from the principle that the only important concept at the national level is the national productivity (Fota Constantin, 2004). In the elaboration of his theory, Porter starts from the following premises (Porter Michael, 1990): the nature of the competition and the sources of competitive advantage are very different among industries and even among the segments of the same industry, and a certain country can influence the obtaining of the competitive advantage within a certain sector of industry; the globalisation of the competition and the appearance of the trans-national companies do not eliminate the influence of a certain country for getting the competitive advantage ; a country can offer different compatitive advantages for a company, depending if it is an origin country or a host country; the competitivity has a dynamic character (Schumpeter); the innovations have a role of leading force in this permanent change and determine the companies to invest on order not to be eliminated from the market (Negriţoiu Mişu, 1997 ). Starting from these premises, Porter identifies a system of determinants which is the basis for getting competitive advantages by the nations. 2. The system of determinants The theory is based on the system of determinants, called by Porter „diamond”, which consists of:", "title": "" }, { "docid": "4941250a228f9494480d8dd175490671", "text": "In machine learning often a tradeoff must be made between accuracy and intelligibility. More accurate models such as boosted trees, random forests, and neural nets usually are not intelligible, but more intelligible models such as logistic regression, naive-Bayes, and single decision trees often have significantly worse accuracy. This tradeoff sometimes limits the accuracy of models that can be applied in mission-critical applications such as healthcare where being able to understand, validate, edit, and trust a learned model is important. 
We present two case studies where high-performance generalized additive models with pairwise interactions (GA2Ms) are applied to real healthcare problems yielding intelligible models with state-of-the-art accuracy. In the pneumonia risk prediction case study, the intelligible model uncovers surprising patterns in the data that previously had prevented complex learned models from being fielded in this domain, but because it is intelligible and modular allows these patterns to be recognized and removed. In the 30-day hospital readmission case study, we show that the same methods scale to large datasets containing hundreds of thousands of patients and thousands of attributes while remaining intelligible and providing accuracy comparable to the best (unintelligible) machine learning methods.", "title": "" }, { "docid": "6c007a6e1a40f5f798d619fed9e9d5c9", "text": "The physical unclonable function (PUF) has emerged as a popular and widely studied security primitive based on the randomness of the underlying physical medium. To date, most of the research emphasis has been placed on finding new ways to measure randomness, hardware realization and analysis of a few initially proposed structures, and conventional secret-key based protocols. In this work, we present our subjective analysis of the emerging and future trends in this area that aim to change the scope, widen the application domain, and make a lasting impact. We emphasize on the development of new PUF-based primitives and paradigms, robust protocols, public-key protocols, digital PUFs, new technologies, implementations, metrics and tests for evaluation/validation, as well as relevant attacks and countermeasures.", "title": "" }, { "docid": "985e8fae88a81a2eec2ca9cc73740a0f", "text": "Negative symptoms account for much of the functional disability associated with schizophrenia and often persist despite pharmacological treatment. Cognitive behavioral therapy (CBT) is a promising adjunctive psychotherapy for negative symptoms. The treatment is based on a cognitive formulation in which negative symptoms arise and are maintained by dysfunctional beliefs that are a reaction to the neurocognitive impairment and discouraging life events frequently experienced by individuals with schizophrenia. This article outlines recent innovations in tailoring CBT for negative symptoms and functioning, including the use of a strong goal-oriented recovery approach, in-session exercises designed to disconfirm dysfunctional beliefs, and adaptations to circumvent neurocognitive and engagement difficulties. A case illustration is provided.", "title": "" }, { "docid": "81f6c52bb579645e5919eac629c90f6d", "text": "A DEA-based stochastic estimation framework is presented to evaluate contextual variables affecting productivity. Conditions are identified under which a two-stage procedure consisting of DEA followed by regression analysis yields consistent estimators of the impact of contextual variables. Conditions are also identified under which DEA in the first stage followed by maximum likelihood estimation in the second stage yields consistent estimators of the impact of contextual variables. Monte Carlo simulations are carried out to compare the performance of our two-stage approach with one-stage and two-stage parametric approaches. Simulation results suggest that DEA-based procedures perform as well as the best parametric method in the estimation of the impact of contextual variables on productivity. 
Simulation results also indicate that DEA-based procedures perform better than parametric methods in the estimation of individual decision making unit (DMU) productivity. (", "title": "" }, { "docid": "37c6bfe47aebda45109eaa24b48d4c01", "text": "Accumulated pharyngo-laryngeal secretions have been associated with aspiration and pneumonia. While traditional secretion scales evaluate location and amount, the eight-point New Zealand Secretion Scale (NZSS) uniquely encompasses a responsiveness subcomponent. This prospective observational study investigated the predictive value of NZSS for aspiration and pneumonia. Consecutive inpatients (N:180) referred for flexible endoscopic evaluation of swallowing (FEES) were recruited (neurological 49%, critical care 31%, structural 15%, other 5% etiologies). Mean age was 63 years (range 18–95 years, S.D. 18). A standardized protocol was completed on 264 FEES (180 first FEES, 84 repeat FEES). Penetration-aspiration scale (PAS) (ICC = .89) and NZSS (ICC = .91) were independently scored by two raters. Aspiration of food and/or fluids occurred in 36% of FEES; 24% silently. Median NZSS was 3 (range 0–7); with silent aspiration of secretions in 33% of FEES. There was a significant correlation between NZSS and PAS (R = .37, p < .001). Incidence of pneumonia during admission was 46% and was significantly associated with PAS (p < .001), NZSS (p < .001), age (p < .001), and tracheostomy (p < .001). Of those who developed pneumonia, 33% had both high PAS (>5) and high NZSS (>4). Eleven percent of those who developed pneumonia had an elevated NZSS (>4) in the absence of aspiration (PAS < 6). This large study reports the significant relationship between accumulated secretions, airway responsiveness, and pneumonia. This comprehensive scale is a useful tool when carrying out endoscopic evaluation and has the potential to predict pneumonia in patients irrespective of their aspiration status.", "title": "" }, { "docid": "1e9f334f077536ed161f60ca8f58637f", "text": "It is well known that clinicians experience distress and grief in response to their patients' suffering. Oncologists and palliative care specialists are no exception since they commonly experience patient loss and are often affected by unprocessed grief. These emotions can compromise clinicians' personal well-being, since unexamined emotions may lead to burnout, moral distress, compassion fatigue, and poor clinical decisions which adversely affect patient care. One approach to mitigate this harm is self-care, defined as a cadre of activities performed independently by an individual to promote and maintain personal well-being throughout life. This article emphasizes the importance of having a self-care and self-awareness plan when caring for patients with life-limiting cancer and discusses validated methods to increase self-care, enhance self-awareness and improve patient care.", "title": "" }, { "docid": "dbd06c81892bc0535e2648ee21cb00b4", "text": "This paper examines the causes of conflict in Burundi and discusses strategies for building peace. The analysis of the complex relationships between distribution and group dynamics reveals that these relationships are reciprocal, implying that distribution and group dynamics are endogenous. The nature of endogenously generated group dynamics determines the type of preferences (altruistic or exclusionist), which in turn determines the type of allocative institutions and policies that prevail in the political and economic system. 
While unequal distribution of resources may be socially inefficient, it nonetheless can be rational from the perspective of the ruling elite, especially because inequality perpetuates dominance. However, as the unequal distribution of resources generates conflict, maintaining a system based on inequality is difficult because it requires ever increasing investments in repression. It is therefore clear that if the new Burundian leadership is serious about building peace, it must engineer institutions that uproot the legacy of discrimination and promote equal opportunity for social mobility for all members of ethnic groups and regions.", "title": "" }, { "docid": "93d06eafb15063a7d17ec9a7429075f0", "text": "Non-orthogonal multiple access (NOMA) is emerging as a promising, yet challenging, multiple access technology to improve spectrum utilization for the fifth generation (5G) wireless networks. In this paper, the application of NOMA to multicast cognitive radio networks (termed as MCR-NOMA) is investigated. A dynamic cooperative MCR-NOMA scheme is proposed, where the multicast secondary users serve as relays to improve the performance of both primary and secondary networks. Based on the available channel state information (CSI), three different secondary user scheduling strategies for the cooperative MCR-NOMA scheme are presented. To evaluate the system performance, we derive the closed-form expressions of the outage probability and diversity order for both networks. Furthermore, we introduce a new metric, referred to as mutual outage probability to characterize the cooperation benefit compared to non-cooperative MCR-NOMA scheme. Simulation results demonstrate significant performance gains are obtained for both networks, thanks to the use of our proposed cooperative MCR-NOMA scheme. It is also demonstrated that higher spatial diversity order can be achieved by opportunistically utilizing the CSI available for the secondary user scheduling.", "title": "" }, { "docid": "7410cc6d6335d7bfc4b720ac429d0e85", "text": "This paper provides examples from the last fifty years of scientific and technological innovations that provide relatively easy, quick and affordable means of addressing key water management issues. Scientific knowledge and technological innovation can help open up previously closed decision-making systems. Four of these tools are discussed in this paper: a) the opportunities afforded by virtual water trade; b) the silent revolution for beneficial use of groundwater; c) salt water desalination; and finally, d) the use of remote sensing and geographic information systems (GIS). Together these advances are changing the options available to address water and food security that have been predominant for centuries in the minds of most water decision-makers.", "title": "" }, { "docid": "bfae60b46b97cf2491d6b1136c60f6a6", "text": "Educational data mining concerns with developing methods for discovering knowledge from data that come from educational domain. In this paper we used educational data mining to improve graduate students’ performance, and overcome the problem of low grades of graduate students. In our case study we try to extract useful knowledge from graduate students data collected from the college of Science and Technology – Khanyounis. The data include fifteen years period [1993-2007]. After preprocessing the data, we applied data mining techniques to discover association, classification, clustering and outlier detection rules. 
In each of these four tasks, we present the extracted knowledge and describe its importance in the educational domain.", "title": "" }, { "docid": "902dcff3ac210e00e205bf219756a944", "text": "This paper proposes a new iterative receiver for single-carrier multiple-input–multiple-output (SC-MIMO) underwater acoustic (UWA) communications, which utilizes frequency-domain turbo equalization (FDTE) and iterative channel estimation. Soft-decision symbols are not only fed back to the equalizer to cancel the intersymbol interference (ISI) and cochannel interference (CCI), but also used as training signals in the channel estimator to update the estimated channel state information (CSI) after each turbo iteration. This iterative channel estimation scheme helps to combat the problem commonly suffered by block-processing receivers in fast time-varying channels. Compared with time-domain turbo equalization, FDTE achieves comparable performance with significantly reduced computational complexity. Using soft-decision symbols to reestimate the time-varying channels, iterative channel estimation further improves the accuracy of the estimated CSI. The proposed iterative receiver has been verified through undersea experimental data collected in the Surface Processes and Acoustic Communications Experiment 2008 (SPACE08).", "title": "" }, { "docid": "6dbfefb384a3dbd28beee2d0daebae52", "text": "Many NLP applications require disambiguating polysemous words. Existing methods that learn polysemous word vector representations involve first detecting various senses and optimizing the sense-specific embeddings separately, which are invariably more involved than single-sense learning methods such as word2vec. Evaluating these methods is also problematic, as rigorous quantitative evaluations in this space are limited, especially when compared with single-sense embeddings. In this paper, we propose a simple method to learn a word representation, given any context. Our method only requires learning the usual single-sense representation, and coefficients that can be learnt via a single pass over the data. We propose several new test sets for evaluating word sense induction, relevance detection, and contextual word similarity, significantly supplementing the currently available tests. Results on these and other tests show that while our method is embarrassingly simple, it achieves excellent results when compared to the state-of-the-art models for unsupervised polysemous word representation learning. Our code and data are at https://github.com/dingwc/", "title": "" }, { "docid": "989a16f498eaaa62d5578cc1bcc8bc04", "text": "The UML activity diagram is widely used to describe the behavior of software systems. Unfortunately, there is still no practical tool to verify UML diagrams automatically. This paper proposes an alternative that translates UML activity diagrams into colored Petri nets with inscriptions. Model translation rules are proposed to guide the automatic translation of an activity diagram with atomic actions into a CPN model. Moreover, the relevant basic arc inscriptions are generated without manual elaboration. The resulting CPN with inscriptions is verified correctly, as expected.", "title": "" }, { "docid": "54477e35cf5cfcfc61e4dc675449a068", "text": "Nowadays, the amount of data generated every day is increasing at a high rate across various sectors. In fact, this volume and diversity of data push us to think carefully about better solutions for storing, processing and analyzing it in the right way.
Considering the healthcare industry, there is great benefit in using the concept of big data, owing to the diversity, extent, and velocity of the data involved, which leads us to think about how to provide the best care for patients. In this paper, we present a new architecture model for health data. The framework supports the storage and management of unstructured medical data in a distributed environment based on a multi-agent paradigm. Integrating the mobile agent model into the Hadoop ecosystem enables an instant communication process between multiple health repositories.", "title": "" }, { "docid": "f9fcaf54f908a11e165173c96334fb5e", "text": "Axial flux-segmented rotor-switched reluctance motor (SSRM) topology could be a potential candidate for in-wheel electric vehicle application. This topology has the advantage of the increased active surface area for the torque production as compared to the radial flux SSRM for a given volume. To improve the performance of axial flux SSRM (AFSSRM), various stator slot/rotor segment combinations and winding polarities are studied. It is observed that the torque ripple is high for the designed three-phase, 12/8 pole AFSSRM. Therefore, the influence of the stator pole and rotor segment arc angles on the average torque and the torque ripple is studied. In addition, the adjacent rotor segments are displaced with respect to the stator, to reduce the torque dips in the phase commutation region. The proposed arrangement is analyzed using the quasi-3-D finite-element method-based simulation study and it is found that the torque ripple can be reduced by 38%. Furthermore, the low-frequency harmonic content in the torque output is analyzed and compared. The variation of the axial electromagnetic attractive force with displaced rotor segments is discussed. The effectiveness of the proposed technique is verified experimentally.", "title": "" }, { "docid": "5aaba72970d1d055768e981f7e8e3684", "text": "A hash table is a fundamental data structure in computer science that can offer rapid storage and retrieval of data. A leading implementation for string keys is the cache-conscious array hash table. Although fast with strings, there is currently no information in the research literature on its performance with integer keys. More importantly, we do not know how efficient an integer-based array hash table is compared to other hash tables that are designed for integers, such as bucketized cuckoo hashing. In this paper, we explain how to efficiently implement an array hash table for integers. We then demonstrate, through careful experimental evaluations, which hash table, whether it be a bucketized cuckoo hash table, an array hash table, or alternative hash table schemes such as linear probing, offers the best performance—with respect to time and space—for maintaining a large dictionary of integers in-memory, on a current cache-oriented processor.", "title": "" }, { "docid": "9d29198002d601cc3d84f3c159c0b975", "text": "Avatar is a system that leverages cloud resources to support fast, scalable, reliable, and energy-efficient distributed computing over mobile devices. An avatar is a per-user software entity in the cloud that runs apps on behalf of the user's mobile devices. The avatars are instantiated as virtual machines in the cloud that run the same operating system as the mobile devices.
In this way, avatars provide resource isolation and execute unmodified app components, which simplifies technology adoption. Avatar apps execute over distributed and synchronized (mobile device, avatar) pairs to achieve a global goal. The three main challenges that must be overcome by the Avatar system are: creating a high-level programming model and a middleware that enable effective execution of distributed applications on a combination of mobile devices and avatars, re-designing the cloud architecture and protocols to support billions of mobile users and mobile apps with very different characteristics from the current cloud workloads, and explore new approaches that balance privacy guarantees with app efficiency/usability. We have built a basic Avatar prototype on Android devices and Android x86 virtual machines. An application that searches for a lost child by analyzing the photos taken by people at a crowded public event runs on top of this prototype.", "title": "" }, { "docid": "d951213566aad8dc219669f915ca8612", "text": "This study examines the relationship between demographic diversity on boards of directors with firm financial performance. This relationship is examined using 1993 and 1998 financial performance data (return on asset and investment) and the percentage of women and minorities on boards of directors for 127 large US companies. Correlation and regression analyses indicate board diversity is positively associated with these financial indicators of firm performance. Implications for both strategic human resource management and future research are discussed.", "title": "" }, { "docid": "1a46e6f4a907abbab02ea2d63a4f00a8", "text": "A design decomposition-integration model, named COPE, is proposed in which Axiomatic Design Matrices (DM) map Functional Requirements to Design Parameters while Design Structure Matrices (DSM) provide structured representation of the system development context. In COPE, the DM and the DSM co-evolve. Traversing between the two types of matrices allows for some control in the application of the system knowledge which surrounds the decision making process and the definition of the system architecture. It is argued that this approach describes better the design process of complex products which is constrained by the need to utilise existing manufacturing processes, to apply discrete technological innovations and to accommodate work-share and supply chain agreements. Presented is an industrial case study which demonstrated the feasibility of the model. © 2004 Wiley Periodicals, Inc. Syst Eng 8: 29–40, 2005", "title": "" } ]
scidocsrr
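For the fuzzy krill herd query above, the sketch below shows only the general shape of such a population-based optimiser: each candidate moves under a herding term toward the best individual, a foraging term toward a fitness-weighted food centre, and a decaying random diffusion. It is a deliberately stripped-down stand-in, not the FKH algorithm of the cited work — the fuzzy adaptation of the step sizes and the full krill-herd motion formulas are omitted, and the coefficients are assumptions.

```python
import numpy as np

def krill_style_minimise(objective, bounds, n_krill=25, n_iter=200, seed=0,
                         n_max=0.01, f_max=0.02, d_max=0.005):
    """Simplified krill-herd-style loop (minimisation); coefficients are illustrative."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    pos = rng.uniform(lo, hi, size=(n_krill, lo.size))
    fit = np.apply_along_axis(objective, 1, pos)
    best_pos, best_fit = pos[fit.argmin()].copy(), fit.min()

    for t in range(n_iter):
        # fitness-weighted "food" centre: better krill contribute more
        w = 1.0 / (fit - fit.min() + 1e-12)
        food = (w[:, None] * pos).sum(axis=0) / w.sum()

        induced = n_max * (best_pos - pos)                      # herding toward the best krill
        foraging = f_max * (food - pos)                         # attraction toward the food centre
        diffusion = d_max * (1 - t / n_iter) * rng.uniform(-1, 1, pos.shape)

        pos = np.clip(pos + induced + foraging + diffusion, lo, hi)
        fit = np.apply_along_axis(objective, 1, pos)
        if fit.min() < best_fit:
            best_fit, best_pos = fit.min(), pos[fit.argmin()].copy()

    return best_pos, best_fit

# toy usage on the sphere function
best_x, best_f = krill_style_minimise(lambda x: float(np.sum(x**2)),
                                      bounds=([-5.0] * 3, [5.0] * 3))
```

A fuzzy variant would typically adapt coefficients such as n_max and f_max during the run (for example from iteration progress and population diversity) rather than keeping them fixed as this sketch does.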
00bb15f562b879cc49aea842de415b59
Patterns of Emergent Leadership in Virtual Teams
[ { "docid": "e4d38d8ef673438e9ab231126acfda99", "text": "The trend toward physically dispersed work groups has necessitated a fresh inquiry into the role and nature of team leadership in virtual settings. To accomplish this, we assembled thirteen culturally diverse global teams from locations in Europe, Mexico, and the United States, assigning each team a project leader and task to complete. The findings suggest that effective team leaders demonstrate the capability to deal with paradox and contradiction by performing multiple leadership roles simultaneously (behavioral complexity). Specifically, we discovered that highly effective virtual team leaders act in a mentoring role and exhibit a high degree of understanding (empathy) toward other team members. At the same time, effective leaders are also able to assert their authority without being perceived as overbearing or inflexible. Finally, effective leaders are found to be extremely effective at providing regular, detailed, and prompt communication with their peers and in articulating role relationships (responsibilities) among the virtual team members. This study provides useful insights for managers interested in developing global virtual teams, as well as for academics interested in pursuing virtual team research. 8 KAYWORTH AND LEIDNER", "title": "" } ]
[ { "docid": "e603b32746560887bdd6dbcfdc2e1c28", "text": "A systematic review of self-report family assessment measures was conducted with reference to their psychometric properties, clinical utility and theoretical underpinnings. Eight instruments were reviewed: The McMaster Family Assessment Device (FAD); Circumplex Model Family Adaptability and Cohesion Evaluation Scales (FACES); Beavers Systems Model Self-Report Family Inventory (SFI); Family Assessment Measure III (FAM III); Family Environment Scale (FES); Family Relations Scale (FRS); and Systemic Therapy Inventory of Change (STIC); and the Systemic Clinical Outcome Routine Evaluation (SCORE). Results indicated that five family assessment measures are suitable for clinical use (FAD, FACES-IV, SFI, FAM III, SCORE), two are not (FES, FRS), and one is a new system currently under-going validation (STIC).", "title": "" }, { "docid": "931c7ce54ed22a838a5b2b44c9182a4c", "text": "This is the second-part paper of the survey on fault diagnosis and fault-tolerant techniques, where fault diagnosis methods and applications are overviewed, respectively, from the knowledge-based and hybrid/active viewpoints. With the aid of the first-part survey paper, the second-part review paper completes a whole overview on fault diagnosis techniques and their applications. Comments on the advantages and constraints of various diagnosis techniques, including model-based, signal-based, knowledge-based, and hybrid/active diagnosis techniques, are also given. An overlook on the future development of fault diagnosis is presented.", "title": "" }, { "docid": "ff3867a1c0ee1d3f1e61cb306af37bb1", "text": "Introduction: The mucocele is one of the most common benign soft tissue masses that occur in the oral cavity. Mucoceles (mucus and coele cavity), by definition, are cavities filled with mucus. Two types of mucoceles can appear – extravasation type and retention type. Diagnosis is mostly based on clinical findings. The common location of the extravasation mucocele is the lower lip and the treatment of choice is surgical removal. This paper gives an insight into the phenomenon and a case report has been presented. Case report: Twenty five year old femalepatient reported with chief complaint of small swelling on the left side of the lower lip since 2 months. The swelling was diagnosed as extravasation mucocele after history and clinical examination. The treatment involved surgical excision of tissue and regular follow up was done to check for recurrence. Conclusion: The treatment of lesion such as mucocele must be planned taking into consideration the various clinical parameters and any oral habits as these lesions have a propensity of recurrence.", "title": "" }, { "docid": "bc6bc98e683fe4bbd7978d59ecd91a7a", "text": "The explosion of enhanced applications such as live video streaming, video gaming and Virtual Reality calls for efforts to optimize transport protocols to manage the increasing amount of data traffic on future 5G networks. Through bandwidth aggregation over multiple paths, the Multi-Path Transmission Control Protocol (MPTCP) can enhance the performance of network applications. MPTCP can split a large multimedia flow into subflows and apply a congestion control mechanism on each subflow. Segment Routing (SR), a promising source routing approach, has emerged to provide advanced packet forwarding over 5G networks. 
In this paper, we explore the utilization of MPTCP and SR in SDN-based networks to improve network resources utilization and end-user's QoE for delivering multimedia services over 5G networks. We propose a novel QoE-aware, SDN-based MPTCP/SR approach for service delivery. In order to demonstrate the feasibility of our approach, we implemented an intelligent QoE-centric Multipath Routing Algorithm (QoMRA) on an SDN source routing platform using Mininet and POX controller. We carried out experiments on Dynamic Adaptive video Streaming over HTTP (DASH) applications over various network conditions. The preliminary results show that our QoE-aware SDN-based MPTCP/SR scheme performs better compared to the conventional TCP approach in terms of throughput, link utilization and the end-user's QoE.", "title": "" }, { "docid": "7e3df0603a924b7e2641293e880c1f70", "text": "Automatic pain intensity estimation from facial images is challenging mainly because of high variability in subject-specific pain expressiveness. This heterogeneity in the subjects causes their facial appearance to vary significantly when experiencing the same pain level. The standard classification methods (e.g., SVMs) do not provide a principled way of accounting for this heterogeneity. To this end, we propose the heteroscedastic Conditional Ordinal Random Field (CORF) model for automatic estimation of pain intensity. This model generalizes the CORF framework for modeling sequences of ordinal variables, by adapting it for heteroscedasticity. This is attained by allowing the variance in the ordinal probit model in the CORF to change depending on the input features, resulting in the model able to adapt to the pain expressiveness level specific to each subject. Our experimental results on the UNBC Shoulder Pain Database show that modeling heterogeneity in the subjects with the framework of CORFs improves the pain intensity estimation attained by the standard CORF model, and the other commonly used classification models.", "title": "" }, { "docid": "11b20602fc9d6e97a5bcc857da7902d0", "text": "This research investigates the Quality of Service (QoS) interaction at the edge of differentiated service (DiffServ) domain, denoted by video gateway (VG). VG is responsible for coordinating the QoS mapping between video applications and DiffServ enabled network. To accomplish the goal of achieving economical and high-quality end-to-end video streaming, which utilizes its awareness of relative service differentiation, the proposed QoS control framework includes the following three components: 1) the relative priority based indexing and categorization of streaming video content at sender, 2) the differentiated QoS levels with load variation in DiffServ networks, and 3) the feedforward and feedback mechanisms assisting QoS mapping of categorized index to DS level at the proposed VG. Especially, we focus on building a framework for dynamic QoS mapping, which intends to overcome both the QoS demand variations of CM applications (e.g., varying priorities from aggregated/categorized packets) and the QoS supply variations of DiffServ network (e.g., varying loss/delay due to fluctuating network loads).
Thus, with the proposed QoS controls in both feedforward and feedback fashion, enhanced quality provisioning for CM applications (especially video streaming) is investigated under the given pricing model (e.g., DS level differentiated price/packet).", "title": "" }, { "docid": "b4a4b39d3f4c643249d467d9c2a8eeb1", "text": "The purpose of this study was to use the computational fluid dynamics software Fluent to numerically investigate the pressure and heat flow characteristics within the flow channel device of a flat radiator with a fan. A piezoelectric fan was placed in front of the flow channel, and cold air flow was introduced to the radiator cooling fan, thus generating vortex flow in the flow path around the radiator to enhance the mixing of hot and cold fluid. The parameters varied in the numerical simulation included the distance from the front end of the piezoelectric fan to the front end of the radiator (Lg), the height from the piezoelectric fan to the bottom of the rectangular channel (Hw), the number of fins (n), and, for dual piezoelectric fans, operation either in the same direction or with a 180° phase delay. The results showed that dual piezoelectric fans installed in the flow channel can effectively enhance the Nusselt value of the plate-type radiator. The best result for installation location was obtained when the height (Hw) of the single and dual piezoelectric fans was 15 mm, the distance to the front of the radiator (Lg) was 5 mm, and the dual-piezoelectric fan spacing (a) was 20 mm. For fin numbers of 10 and 14, 10 fins were superior to 14 because the larger channel space gave better thermal performance.", "title": "" }, { "docid": "e39ef24a7fbf9cb3749665bbbd9260c1", "text": "Purpose: The application of self-expanding metallic endoprostheses (stents) to treat symptomatic pelvic venous spurs as an alternative to surgery. Methods: Wallstents with a diameter from 14 to 16 mm and one Cragg stent were placed in the left common iliac vein of eight patients (seven women, one man; mean age 42 years) with a symptomatic pelvic venous spur (left deep venous thrombosis or post-thrombotic leg swelling). Four patients had surgical thrombectomy prior to stent placement. Results: Technical success with immediate reduction of left leg circumference was achieved in all eight patients. A primary patency rate of 100% was observed during an average follow-up of 3 years (range 10–121 months). There were no procedural or stent-related complications. Conclusion: The percutaneous transfemoral placement of self-expanding metallic stents is an effective minimally invasive alternative to surgery in the treatment of symptomatic pelvic venous spur.", "title": "" }, { "docid": "7f5d0d1c61f4eaaddb2a073a6575908e", "text": "Depth image-based rendering (DIBR) is generally used to synthesize virtual view images in free viewpoint television (FTV) and three-dimensional (3-D) video. One of the main problems in DIBR is how to fill the holes caused by disocclusion regions and inaccurate depth values. In this paper, we propose a new hole filling method using a depth based in-painting technique. Experimental results show that the proposed hole filling method provides improved rendering quality both objectively and subjectively.", "title": "" }, { "docid": "05e4168615c39071bb9640bd5aa6f3d9", "text": "The intestinal microbiome plays an important role in the metabolism of chemical compounds found within food.
Bacterial metabolites are different from those that can be generated by human enzymes because bacterial processes occur under anaerobic conditions and are based mainly on reactions of reduction and/or hydrolysis. In most cases, bacterial metabolism reduces the activity of dietary compounds; however, sometimes a specific product of bacterial transformation exhibits enhanced properties. Studies on the metabolism of polyphenols by the intestinal microbiota are crucial for understanding the role of these compounds and their impact on our health. This review article presents possible pathways of polyphenol metabolism by intestinal bacteria and describes the diet-derived bioactive metabolites produced by gut microbiota, with a particular emphasis on polyphenols and their potential impact on human health. Because the etiology of many diseases is largely correlated with the intestinal microbiome, a balance between the host immune system and the commensal gut microbiota is crucial for maintaining health. Diet-related and age-related changes in the human intestinal microbiome and their consequences are summarized in the paper.", "title": "" }, { "docid": "5f45659c16ca98f991a31d62fd70cdab", "text": "Iris recognition has legendary resistance to false matches, and the tools of information theory can help to explain why. The concept of entropy is fundamental to understanding biometric collision avoidance. This paper analyses the bit sequences of IrisCodes computed both from real iris images and from synthetic white noise iris images, whose pixel values are random and uncorrelated. The capacity of the IrisCode as a channel is found to be 0.566 bits per bit encoded, of which 0.469 bits of entropy per bit is encoded from natural iris images. The difference between these two rates reflects the existence of anatomical correlations within a natural iris, and the remaining gap from one full bit of entropy per bit encoded reflects the correlations in both phase and amplitude introduced by the Gabor wavelets underlying the IrisCode. A simple two-state hidden Markov model is shown to emulate exactly the statistics of bit sequences generated both from natural and white noise iris images, including their imposter distributions, and may be useful for generating large synthetic IrisCode databases.", "title": "" }, { "docid": "853ef57bfa4af5edf4ee3c8a46e4b4f4", "text": "Hidden properties of social media users, such as their ethnicity, gender, and location, are often reflected in their observed attributes, such as their first and last names. Furthermore, users who communicate with each other often have similar hidden properties. We propose an algorithm that exploits these insights to cluster the observed attributes of hundreds of millions of Twitter users. Attributes such as user names are grouped together if users with those names communicate with other similar users. We separately cluster millions of unique first names, last names, and userprovided locations. The efficacy of these clusters is then evaluated on a diverse set of classification tasks that predict hidden users properties such as ethnicity, geographic location, gender, language, and race, using only profile names and locations when appropriate. 
Our readily-replicable approach and publiclyreleased clusters are shown to be remarkably effective and versatile, substantially outperforming state-of-the-art approaches and human accuracy on each of the tasks studied.", "title": "" }, { "docid": "1ca64eac6aaa34e114f5fb7d20b986b4", "text": "Circumstances that led to the development of the Theory: The SCT has its origins in the discipline of psychology, with its early foundation being laid by behavioral and social psychologists. The SLT evolved under the umbrella of behaviorism, which is a cluster of psychological theories intended to explain why people and animals behave the way that they do. Behaviorism, introduced by John Watson in 1913, took an extremely mechanistic approach to understanding human behavior. According to Watson, behavior could be explained in terms of observable acts that could be described by stimulus-response sequences (Crosbie-Brunett and Lewis, 1993; Thomas, 1990). Also central to behaviorist study was the notion that contiguity between stimulus and response determined the likelihood that learning would occur.", "title": "" }, { "docid": "4e0ac68997acd5fdc7276ba80ae04fe3", "text": "In this work, substrate integrated waveguide (SIW) bandpass filters were designed and fabricated using LTCC process. The proposed scheme consists of SIW cavities with coupling slots and coplanar waveguide (CPW) transitions. To reduce the component size and surface occupation, three SIW cavities are laminated vertically. Both horizontal transition and vertical transition are used between the two CPW transmission lines. Based on the SICCAS-K70D LTCC material (εr = 66, tanδ = 0.002 @3.5 GHz), an S-band bandpass filter with a center frequency of 2.59 GHz was designed and fabricated using the in-house developed LTCC material and process.", "title": "" }, { "docid": "418e29af01be9655c06df63918f41092", "text": "A major goal of unsupervised learning is to discover data representations that are useful for subsequent tasks, without access to supervised labels during training. Typically, this goal is approached by minimizing a surrogate objective, such as the negative log likelihood of a generative model, with the hope that representations useful for subsequent tasks will arise as a side effect. In this work, we propose instead to directly target a later desired task by meta-learning an unsupervised learning rule, which leads to representations useful for that task. Here, our desired task (meta-objective) is the performance of the representation on semi-supervised classification, and we meta-learn an algorithm – an unsupervised weight update rule – that produces representations that perform well under this meta-objective. Additionally, we constrain our unsupervised update rule to a be a biologically-motivated, neuron-local function, which enables it to generalize to novel neural network architectures. We show that the meta-learned update rule produces useful features and sometimes outperforms existing unsupervised learning techniques. We show that the metalearned unsupervised update rule generalizes to train networks with different widths, depths, and nonlinearities. It also generalizes to train on data with randomly permuted input dimensions and even generalizes from image datasets to a text task.", "title": "" }, { "docid": "2dd8b7004f45ae374a72e2c7d40b0892", "text": "In this letter, a multifeed tightly coupled patch array antenna capable of broadband operation is analyzed and designed. 
First, an antenna array composed of infinite elements with each element excited by a feed is proposed. To produce specific polarized radiation efficiently, a new patch element is proposed, and its characteristics are studied based on a 2-port network model. Full-wave simulation results show that the infinite antenna array exhibits both a high efficiency and desirable radiation pattern in a wide frequency band (10 dB bandwidth) from 1.91 to 5.35 GHz (94.8%). Second, to validate its outstanding performance, a realistic finite 4 × 4 antenna prototype is designed, fabricated, and measured in our laboratory. The experimental results agree well with simulated ones, where the frequency bandwidth (VSWR < 2) is from 2.5 to 3.8 GHz (41.3%). The inherent compact size, light weight, broad bandwidth, and good radiation characteristics make this array antenna a promising candidate for future communication and advanced sensing systems.", "title": "" }, { "docid": "95fe3badecc7fa92af6b6aa49b6ff3b2", "text": "As low-resolution position sensors, a high placement accuracy of Hall-effect sensors is hard to achieve. Accordingly, a commutation angle error is generated. The commutation angle error will inevitably increase the loss of the low inductance motor and even cause serious consequence, which is the abnormal conduction of a freewheeling diode in the unexcited phase especially at high speed. In this paper, the influence of the commutation angle error on the power loss for the high-speed brushless dc motor with low inductance and nonideal back electromotive force in a magnetically suspended control moment gyro (MSCMG) is analyzed in detail. In order to achieve low steady-state loss of an MSCMG for space application, a straightforward method of self-compensation of commutation angle based on dc-link current is proposed. Both simulation and experimental results confirm the feasibility and effectiveness of the proposed method.", "title": "" }, { "docid": "8925f16c563e3f7ab666efe58076ee59", "text": "An incomplete method for solving the propositional satisfiability problem (or a general constraint satisfaction problem) is one that does not provide the guarantee that it will eventually either report a satisfying assignment or declare that the given formula is unsatisfiable. In practice, most such methods are biased towards the satisfiable side: they are typically run with a pre-set resource limit, after which they either produce a valid solution or report failure; they never declare the formula to be unsatisfiable. These are the kind of algorithms we will discuss in this chapter. In complexity theory terms, such algorithms are referred to as having one-sided error. In principle, an incomplete algorithm could instead be biased towards the unsatisfiable side, always providing proofs of unsatisfiability but failing to find solutions to some satisfiable instances, or be incomplete with respect to both satisfiable and unsatisfiable instances (and thus have two-sided error). Unlike systematic solvers often based on an exhaustive branching and backtracking search, incomplete methods are generally based on stochastic local search, sometimes referred to as SLS. On problems from a variety of domains, such incomplete methods for SAT can significantly outperform DPLL-based methods. Since the early 1990’s, there has been a tremendous amount of research on designing, understanding, and improving local search methods for SAT. 
There have also been attempts at hybrid approaches that explore combining ideas from DPLL methods and local search techniques [e.g. 39, 68, 84, 88]. We cannot do justice to all recent research in local search solvers for SAT, and will instead try to provide a brief overview and touch upon some interesting details. The interested reader is encouraged to further explore the area through some of the nearly a hundred publications we cite along the way. We begin the chapter by discussing two methods that played a key role in the success of local search for satisfiability, namely GSAT [98] and Walksat [95]. We will then discuss some extensions of these ideas, in particular clause weighting", "title": "" }, { "docid": "1981aa894ee84501115a31f1a602e236", "text": "Introduction: Vascular abnormalities are relatively uncommon lesions, but the head and neck is a common region for vascular malformations, which are classified as benign tumors. In this paper, the authors report a rare presentation of vascular malformation in the tongue and its management. Case Report: An 18-month-old child presented with a giant mass of the tongue which caused functional and aesthetic problems. The rapid growth pattern of the cavernous hemangioma was refractory to corticosteroid. The lesion was excised without any complication. Since the mass was so huge that it not only filled the entire oral cavity but also protruded outside, airway management was a great challenge for the anesthesia plan and at the same time the surgical technique was difficult to select. Conclusion: Despite different recommended modalities in managing hemangiomas of the tongue, in cases of huge malformations, surgery could be the mainstay treatment and, provided that critical care measures are taken into account, could be performed very safely.", "title": "" }, { "docid": "166bb3d8e2cd538e694f1b90054a5e97", "text": "Recently, low-shot learning has been proposed for handling the lack of training data in machine learning. Despite the importance of this issue, relatively few efforts have been made to study this problem. In this paper, we aim to increase the size of the training dataset in various ways to improve the accuracy and robustness of face recognition. In detail, we adapt a generator from the Generative Adversarial Network (GAN) to increase the size of the training dataset, which includes a base set, a widely available dataset, and a novel set, a given limited dataset, while adopting transfer learning as a backend. Based on an extensive experimental study, we conduct the analysis on various data augmentation methods, observing how each affects the identification accuracy. Finally, we conclude that the proposed algorithm for generating faces is effective in improving the identification accuracy and coverage at the precision of 99% using both the base and novel set.", "title": "" } ]
scidocsrr
c935c17cbf376a9c344e6c71deade676
Robustness of Federated Averaging for Non-IID Data
[ { "docid": "244b583ff4ac48127edfce77bc39e768", "text": "We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimizing the number of rounds of communication is the principal goal. A motivating example arises when we keep the training data locally on users’ mobile devices instead of logging it to a data center for training. In federated optimization, the devices are used as compute nodes performing computation on their local data in order to update a global model. We suppose that we have extremely large number of devices in the network — as many as the number of users of a given service, each of which has only a tiny fraction of the total data available. In particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, it is reasonable to assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results for sparse convex problems. This work also sets a path for future research needed in the context of federated optimization.", "title": "" } ]
[ { "docid": "93dd7ecb1707f7b404e79d79dac0a7ba", "text": "Information quality has received great attention from both academics and practitioners since it plays an important role in decision-making process. The need of high information quality in organization is increase in order to reach business excellent. Total Information Quality Management (TIQM) offers solution to solve information quality problems through a method for building an effective information quality management (IQM) with continuous improvement in whole process. However, TIQM does not have a standard measure in determining the process maturity level. Thus causes TIQM process maturity level cannot be determined exactly so that the assessment and improvement process will be difficult to be done. The contribution of this research is the process maturity indicators and measures based on TIQM process and Capability Maturity Model (CMM) concepts. It have been validated through an Expert Judgment using the Delphi method and implemented through a case study.", "title": "" }, { "docid": "c25144cf41462c58820fdcd3652e9fec", "text": "0957-4174/$ see front matter 2009 Elsevier Ltd. A doi:10.1016/j.eswa.2009.02.043 * Corresponding author. Tel.: +3", "title": "" }, { "docid": "6f7332494ffc384eaae308b2116cab6a", "text": "Investigations of the relationship between pain conditions and psychopathology have largely focused on depression and have been limited by the use of non-representative samples (e.g. clinical samples). The present study utilized data from the Midlife Development in the United States Survey (MIDUS) to investigate associations between three pain conditions and three common psychiatric disorders in a large sample (N = 3,032) representative of adults aged 25-74 in the United States population. MIDUS participants provided reports regarding medical conditions experienced over the past year including arthritis, migraine, and back pain. Participants also completed several diagnostic-specific measures from the Composite International Diagnostic Interview-Short Form [Int. J. Methods Psychiatr. Res. 7 (1998) 171], which was based on the revised third edition of the Diagnostic and Statistical Manual of Mental Disorders [American Psychiatric Association 1987]. The diagnoses included were depression, panic attacks, and generalized anxiety disorder. Logistic regression analyses revealed significant positive associations between each pain condition and the psychiatric disorders (Odds Ratios ranged from 1.48 to 3.86). The majority of these associations remained statistically significant after adjusting for demographic variables, the other pain conditions, and other medical conditions. Given the emphasis on depression in the pain literature, it was noteworthy that the associations between the pain conditions and the anxiety disorders were generally larger than those between the pain conditions and depression. These findings add to a growing body of evidence indicating that anxiety disorders warrant further attention in relation to pain. The clinical and research implications of these findings are discussed.", "title": "" }, { "docid": "8af81ca6334ad51856ac523fecd65cc5", "text": "Few studies have examined how changes in materialism relate to changes in well-being; fewer have experimentally manipulated materialism to change wellbeing. 
Studies 1, 2, and 3 examined how changes in materialistic aspirations related to changes in well-being, using varying time frames (12 years, 2 years, and 6 months), samples (US young adults and Icelandic adults), and measures of materialism and well-being. Across all three studies, results supported the hypothesis that people’s well-being improves as they place relatively less importance on materialistic goals and values, whereas orienting toward materialistic goals relatively more is associated with decreases in well-being over time. Study 2 additionally demonstrated that this association was mediated by changes in psychological need satisfaction. A fourth, experimental study showed that highly materialistic US adolescents who received an intervention that decreased materialism also experienced increases in self-esteem over the next several months, relative to a control group. Thus, well-being changes as people change their relative focus on materialistic goals.", "title": "" }, { "docid": "537793712e4e62d66e35b3c9114706f2", "text": "Database indices provide a non-discriminative navigational infrastructure to localize tuples of interest. Their maintenance cost is taken during database updates. In this work we study the complementary approach, addressing index maintenance as part of query processing using continuous physical reorganization, i.e., cracking the database into manageable pieces. Each query is interpreted not only as a request for a particular result set, but also as an advice to crack the physical database store into smaller pieces. Each piece is described by a query, all of which are assembled in a cracker index to speedup future search. The cracker index replaces the non-discriminative indices (e.g., B-trees and hash tables) with a discriminative index. Only database portions of past interest are easily localized. The remainder is unexplored territory and remains non-indexed until a query becomes interested. The cracker index is fully self-organized and adapts to changing query workloads. With cracking, the way data is physically stored self-organizes according to query workload. Even with a huge data set, only tuples of interest are touched, leading to significant gains in query performance. In case the focus shifts to a different part of the data, the cracker index will automatically adjust to that. We report on our design and implementation of cracking in the context of a full fledged relational system. It led to a limited enhancement to its relational algebra kernel, such that cracking could be piggy-backed without incurring too much processing overhead. Furthermore, we illustrate the ripple effect of dynamic reorganization on the query plans derived by the SQL optimizer. The experiences and results obtained are indicative of a significant reduction in system complexity with clear performance benefits. ∗Stratos Idreos is the contact author (Stratos.Idreos@cwi.nl) and a Ph.D student at CWI", "title": "" }, { "docid": "b4880ddb59730f465f585f3686d1d2b1", "text": "The authors study the effect of word-of-mouth (WOM) marketing on member growth at an Internet social networking site and compare it with traditional marketing vehicles. Because social network sites record the electronic invitations sent out by existing members, outbound WOM may be precisely tracked. WOM, along with traditional marketing, can then be linked to the number of new members subsequently joining the site (signups). 
Due to the endogeneity among WOM, new signups, and traditional marketing activity, the authors employ a Vector Autoregression (VAR) modeling approach. Estimates from the VAR model show that word-ofmouth referrals have substantially longer carryover effects than traditional marketing actions. The long-run elasticity of signups with respect to WOM is estimated to be 0.53 (substantially larger than the average advertising elasticities reported in the literature) and the WOM elasticity is about 20 times higher than the elasticity for marketing events, and 30 times that of media appearances. Based on revenue from advertising impressions served to a new member, the monetary value of a WOM referral can be calculated; this yields an upper bound estimate for the financial incentives the firm might offer to stimulate word-of-mouth.", "title": "" }, { "docid": "4d7e876d61060061ba6419869d00675e", "text": "Context-aware recommender systems (CARS) take context into consideration when modeling user preferences. There are two general ways to integrate context with recommendation: contextual filtering and contextual modeling. Currently, the most effective context-aware recommendation algorithms are based on a contextual modeling approach that estimate deviations in ratings across different contexts. In this paper, we propose context similarity as an alternative contextual modeling approach and examine different ways to represent context similarity and incorporate it into recommendation. More specifically, we show how context similarity can be integrated into the sparse linear method and matrix factorization algorithms. Our experimental results demonstrate that learning context similarity is a more effective approach to contextaware recommendation than modeling contextual rating deviations.", "title": "" }, { "docid": "077e4307caf9ac3c1f9185f0eaf58524", "text": "Many text mining tools cannot be applied directly to documents available on web pages. There are tools for fetching and preprocessing of textual data, but combining them in one working tool chain can be time consuming. The preprocessing task is even more labor-intensive if documents are located on multiple remote sources with different storage formats. In this paper we propose the simplification of data preparation process for cases when data come from wide range of web resources. We developed an open-sourced tool, called Kayur, that greatly minimizes time and effort required for routine data preprocessing steps, allowing to quickly proceed to the main task of data analysis. The datasets generated by the tool are ready to be loaded into a data mining workbench, such as WEKA or Carrot2, to perform classification, feature prediction, and other data mining tasks.", "title": "" }, { "docid": "eaec7fb5490ccabd52ef7b4b5abd25f6", "text": "Automatic and reliable segmentation of the prostate is an important but difficult task for various clinical applications such as prostate cancer radiotherapy. The main challenges for accurate MR prostate localization lie in two aspects: (1) inhomogeneous and inconsistent appearance around prostate boundary, and (2) the large shape variation across different patients. To tackle these two problems, we propose a new deformable MR prostate segmentation method by unifying deep feature learning with the sparse patch matching. First, instead of directly using handcrafted features, we propose to learn the latent feature representation from prostate MR images by the stacked sparse auto-encoder (SSAE). 
Since the deep learning algorithm learns the feature hierarchy from the data, the learned features are often more concise and effective than the handcrafted features in describing the underlying data. To improve the discriminability of learned features, we further refine the feature representation in a supervised fashion. Second, based on the learned features, a sparse patch matching method is proposed to infer a prostate likelihood map by transferring the prostate labels from multiple atlases to the new prostate MR image. Finally, a deformable segmentation is used to integrate a sparse shape model with the prostate likelihood map for achieving the final segmentation. The proposed method has been extensively evaluated on the dataset that contains 66 T2-weighted prostate MR images. Experimental results show that the deep-learned features are more effective than the handcrafted features in guiding MR prostate segmentation. Moreover, our method shows superior performance to other state-of-the-art segmentation methods.", "title": "" }, { "docid": "ed3044439e2ca81cbe57a6d4d2e7707a", "text": "ness. Second, every attribute specified for a concept is shared by more than one instance of the concept. Thus, the information contained in a concept is an abstraction across instances of the concept. The overlapping networks of shared attributes thus formed hold conceptual categories together. In this respect, the family resemblance view is like the classical view: Both maintain that the instances of a concept cohere because they are similar to one another by virtue of sharing certain attributes. Weighted attributes. An object that shares attributes with many members of a category bears greater family resemblance to that category than an object that shares attributes with few members. This suggests that attributes that are shared by many members confer a greater degree of family resemblance than those that are shared by a few. A third characteristic of the family resemblance view is that it assumes that concept attributes are \"weighted\" according to their relevance for conferring family resemblance to the category. In general, that relevance is taken to be a function of the number of category instances (and perhaps noninstances) that share the attribute. Presumably, if the combined relevance weights of the attributes of some novel object exceed a certain level (what might be called the membership threshold or criterion), that object will be considered an instance of the category (Medin, 1983; Rosch & Mervis, 1975; E. E. Smith & Medin, 1981). [Footnote 2: Here and throughout, I use relevance to include both relevance and salience as used by Ortony, Vondruska, Foss, and Jones (1985).] The greater the degree to which the combined relevance weights exceed the threshold, the more typical an instance it is (see also Shafir, Smith, & Osherson, 1990). By this measure, an object must have a large number of heavily weighted attributes to be judged highly typical of a given category. Because such heavily weighted attributes are probably shared by many category instances and relatively few noninstances, an object highly typical of a category is likely to lie near the central tendencies of the category (see Retention of Central Tendencies, below), and is not likely to be typical of or lie near the central tendencies of any other category. Independence and additive combination of weights: Linear separability. Attribute weights can be combined using a variety of methods (cf.
Medin & Schaffer, 1978; Reed, 1972). In the method typically associated with the family resemblance view (adapted from Tversky's, 1977, contrast model of similarity), attribute weights are assumed to be independent and combined by adding (Rosch & Mervis, 1975; E. E. Smith & Medin, 1981). This leads to a fourth characteristic of the (modal) family resemblance view: It predicts that instances and noninstances of a concept can be perfectly partitioned by a linear discriminant function (i.e., if one was to plot a set of objects by the combined weights of their attributes, all instances would fall to one side of a line, and all noninstances would fall on the other side; Medin & SchafFer, 1978; Medin & Schwanenflugel, 1981; Nakamura, 1985; Wattenmaker, Dewey, Murphy, & Medin, 1986). Thus the (modal) family resemblance view predicts that concepts are \"linearly separable.\" Retention of central tendencies. The phrase family resemblance is used in two ways. In the sense that I have focused on until now, the family resemblance of an object to a category increases as the similarity between that object and all other members of the category increases and the similarity between that object and all nonmembers of the category decreases. This use of family resemblance (probably the use more reflective of Wittgenstein's, 1953, original ideas) has an extensional emphasis: It describes a relationship among objects and makes no assumptions about how the category of objects is represented mentally (i.e., about the intension of the word or what I have been calling the concept). In the second sense, family resemblance increases as the similarity between an object and the central tendencies of the category increases (Hampton, 1979). This use of family resemblance has an intentional emphasis: It describes a relationship between objects and a mental representation (of the central tendencies of a category). Although these two ways of thinking about family resemblance, average similarity to all instances and similarity to a central tendency, are different (cf. Reed, 1972), Barsalou (1985, 1987) points out that they typically yield roughly the same outcome, much as the average difference between a number and a set of other numbers is roughly the same as the difference between that number and the average of that set of other numbers. (For example, consider the number 2 and the set of numbers 3, 5, and 8. The average difference between 2 and 3, 5, and 8 is 3.33, and the difference between 2 and the average of 3,5, and 8 is 3.33.) Barsalou argues that although for most purposes the two ways of thinking about family resemblance are equivalent (one of the reasons the exemplar and family resemblance views are often difficult to distinguish empirically; see below), computation in terms of central tendencies may be more plausible psychologically (because fewer comparisons are involved in comparing an object with the central tendencies of a concept than with every instance and noninstance of the concept; see also Barresi, Robbins, & Shain, 1975). This suggests a fifth characteristic of the family resemblance view: A concept provides a summary of a category in terms of the central tendencies of the members of that category rather than in terms of the representations of individual instances. 
Economy, Informativeness, Coherence, and Naturalness Both the classical and the family resemblance views explain conceptual coherence in terms of the attributes shared by the members of a category (i.e., the similarity among the instances of a concept). The critical difference between the two views lies in the constraints placed on the attributes shared. In the classical view, all instances are similar in that they share a set of necessary and sufficient attributes (i.e., the definition). The family resemblance view relaxes this constraint and requires only that every attribute specified by the concept be shared by more than one instance. Although this requirement confers a certain amount of economy to the family resemblance view (every piece of information applies to several instances), removing the definitional constraint allows family resemblance representations to include nondefinitional information. In particular, concepts are likely to specify information beyond that true of all instances or beyond that strictly needed to understand what Medin and Smith (1984) call linguistic meaning (the different kinds of relations that hold among words such as synonymy, antonymy, hyponymy, anomaly, and contradiction as usually understood; cf. Katz, 1972; Katz & Fodor, 1963) to include information about how the objects referred to may relate to one another and to the world. It is not clear whether this loss in economy results in a concomitant increase in informativeness: Although in the family resemblance view more information may be associated with a concept than in the classical, not all of that information applies to every instance of the concept. In the family resemblance view, attributes can be inferred to inhere in different instances only with some level of probability. Thus the informativeness of the individual attributes specified is somewhat compromised. [Footnote 3: There are several different ways to approach the representation of the central tendencies of a category. E. E. Smith and Medin (1981), for example, identified three approaches to what they called the probabilistic view: the featural, the dimensional, and the holistic. E. E. Smith and Medin provided ample evidence for rejecting the holistic approach on both empirical and theoretical grounds (see also McNamara & Miller, 1989). They also argued that the similarities between the featural and dimensional approaches suggest that they might profitably be combined into a single position that could be called the \"component\" approach (E. E. Smith & Medin, 1981, p. 164) and concluded that the component approach is the only viable variant.] With no a priori constraint on the nature (or level) of similarity shared by the instances of a concept, the family resemblance view has difficulty specifying which similarities count and which do not when it comes to setting the boundaries between concepts. A Great Dane and a Bedlington terrier appear to share few similarities, but they share enough so that both are dogs. But a Bedlington terrier seems to share as many similarities with a lamb as it does with a Great Dane. Why is a Bedlington terrier a dog and not a lamb? Presumably, the family resemblance view would predict that the summed weights of Bedlington terrier attributes lead to its being more similar to other dogs than to lambs and result in its being categorized as a dog rather than a lamb. But to determine those weights, we need to know how common those attributes are among dogs and lambs.
This implies that the categorization of Bedlington terriers must be preceded by the partitioning of the world into dog and lamb. Without that prior partitioning, the dog versus lamb weights of Bedlington terrier attributes cannot be determined. To answer the question of what privileges the categorization of a Bedlington terrier with the Great Dane rather than the lamb requires answering what privileges the partitioning of the world into dogs and lambs. Rosch (Rosch, 1978; Rosch & Mervis, 1975) argues that certain partitionings of the world (including, presumably, into dogs and lambs) are privileged, more immediate or direct, and arise naturally from the interaction of our perceptual apparatus and the environment. Thus whereas the classical view", "title": "" }, { "docid": "4f0b28ded91c48913a13bde141a3637f", "text": "This paper presents our work in mapping the design space of techniques for temporal graph visualisation. We identify two independent dimensions upon which the techniques can be classified: graph structural encoding and temporal encoding. Based on these dimensions, we create a matrix into which we organise existing techniques. We identify gaps in this design space which may prove interesting opportunities for the development of novel techniques. We also consider additional dimensions upon which further useful classification could be made. In organising the disparate existing approaches from a wide range of domains, our classification will assist those new to the research area, and designers and evaluators developing systems for temporal graph data by raising awareness of the range of possible approaches available, and highlighting possible directions for further research.", "title": "" }, { "docid": "a6fbd3f79105fd5c9edfc4a0292a3729", "text": "The widespread use of templates on the Web is considered harmful for two main reasons. Not only do they compromise the relevance judgment of many web IR and web mining methods such as clustering and classification, but they also negatively impact the performance and resource usage of tools that process web pages. In this paper we present a new method that efficiently and accurately removes templates found in collections of web pages. Our method works in two steps. First, the costly process of template detection is performed over a small set of sample pages. Then, the derived template is removed from the remaining pages in the collection. This leads to substantial performance gains when compared to previous approaches that combine template detection and removal. We show, through an experimental evaluation, that our approach is effective for identifying terms occurring in templates - obtaining F-measure values around 0.9, and that it also boosts the accuracy of web page clustering and classification methods.", "title": "" }, { "docid": "7120d5acf58f8ec623d65b4f41bef97d", "text": "BACKGROUND\nThis study analyzes the problems and consequences associated with prolonged use of laparoscopic instruments (dissector and needle holder) and equipments.\n\n\nMETHODS\nA total of 390 questionnaires were sent to the laparoscopic surgeons of the Spanish Health System. Questions were structured on the basis of 4 categories: demographics, assessment of laparoscopic dissector, assessment of needle holder, and other informations.\n\n\nRESULTS\nA response rate of 30.26% was obtained. Among them, handle shape of laparoscopic instruments was identified as the main element that needed to be improved. 
Furthermore, the type of instrument, electrocautery pedals and height of the operating table were identified as major causes of forced positions during the use of both surgical instruments.\n\n\nCONCLUSIONS\nAs far as we know, this is the largest Spanish survey conducted on this topic. From this survey, some ergonomic drawbacks have been identified in: (a) the instruments' design, (b) the operating tables, and (c) the posture of the surgeons.", "title": "" }, { "docid": "2ad8723c9fce1a6264672f41824963f8", "text": "Psychologists have repeatedly shown that a single statistical factor--often called \"general intelligence\"--emerges from the correlations among people's performance on a wide variety of cognitive tasks. But no one has systematically examined whether a similar kind of \"collective intelligence\" exists for groups of people. In two studies with 699 people, working in groups of two to five, we find converging evidence of a general collective intelligence factor that explains a group's performance on a wide variety of tasks. This \"c factor\" is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.", "title": "" }, { "docid": "b2d256cd40e67e3eadd3f5d613ad32fa", "text": "Due to the wide spread of cloud computing, arises actual question about architecture, design and implementation of cloud applications. The microservice model describes the design and development of loosely coupled cloud applications when computing resources are provided on the basis of automated IaaS and PaaS cloud platforms. Such applications consist of hundreds and thousands of service instances, so automated validation and testing of cloud applications developed on the basis of microservice model is a pressing issue. There are constantly developing new methods of testing both individual microservices and cloud applications at a whole. This article presents our vision of a framework for the validation of the microservice cloud applications, providing an integrated approach for the implementation of various testing methods of such applications, from basic unit tests to continuous stability testing.", "title": "" }, { "docid": "753eb03a060a5e5999eee478d6d164f9", "text": "Recently reported results with distributed-vector word representations in natural language processing make them appealing for incorporation into a general cognitive architecture like Sigma. This paper describes a new algorithm for learning such word representations from large, shallow information resources, and how this algorithm can be implemented via small modifications to Sigma. The effectiveness and speed of the algorithm are evaluated via a comparison of an external simulation of it with state-of-the-art algorithms. The results from more limited experiments with Sigma are also promising, but more work is required for it to reach the effectiveness and speed of the simulation.", "title": "" }, { "docid": "c5118bfd338ed2879477023b69fff911", "text": "The paper describes a study and an experimental verification of remedial strategies against failures occurring in the inverter power devices of a permanent-magnet synchronous motor drive. The basic idea of this design consists in incorporating a fourth inverter pole, with the same topology and capabilities of the other conventional three poles. 
This minimal redundant hardware, appropriately connected and controlled, allows the drive to face a variety of power device fault conditions while maintaining a smooth torque production. The achieved results also show the industrial feasibility of the proposed fault-tolerant control, that could fit many practical applications.", "title": "" }, { "docid": "98b78340925729e580f888f9ab2d8453", "text": "This paper describes the Jensen-Shannon divergence (JSD) and Hilbert space embedding. With natural definitions making these considerations precise, one finds that the general Jensen-Shannon divergence related to the mixture is the minimum redundancy, which can be achieved by the observer. The set of distributions with the metric √JSD can even be embedded isometrically into Hilbert space and the embedding can be identified.", "title": "" }, { "docid": "e2630765e2fa4b203a4250cb5b69b9e9", "text": "Thirteen years have passed since Karl Sims published his work on evolving virtual creatures. Since then, several novel approaches to neural network evolution and genetic algorithms have been proposed. The aim of our work is to apply recent results in these areas to the virtual creatures proposed by Karl Sims, leading to creatures capable of solving more complex tasks. This paper presents our success in reaching the first milestone - a new and complete implementation of the original virtual creatures. All morphological and control properties of the original creatures were implemented. Laws of physics are simulated using ODE library. Distributed computation is used for CPU-intensive tasks, such as fitness evaluation. Experiments have shown that our system is capable of evolving both morphology and control of the creatures resulting in a variety of non-trivial swimming and walking strategies.", "title": "" }, { "docid": "7892a17a84d54bb6975cb7b8229242a9", "text": "The way people conceptualize space is an important consideration for the design of geographic information systems, because a better match with people's thinking is expected to lead to easier-to-use information systems. Everyday space, the basis to geographic information systems (GISs), has been characterized in the literature as being either small-scale (from table-top to room-size spaces) or large-scale (inside-of-building spaces to city-size space). While this dichotomy of space is grounded in the view from psychology that people's perception of space, spatial cognition, and spatial behavior are experience-based, it is in contrast to current GISs, which enable us to interact with large-scale spaces as though they were small-scale or manipulable. We analyze different approaches to characterizing spaces and propose a unified view in which space is based on the physical properties of manipulability, locomotion, and size of space. Within the structure of our framework, we distinguish six types of spaces: manipulable object space (smaller than the human body), non-manipulable object space (greater than the human body, but less than the size of a building), environmental space (from inside building spaces to city-size spaces), geographic space (state, country, and continent-size spaces), panoramic space (spaces perceived via scanning the landscape), and map space.
Such a categorization is an important part of Naive Geography, a set of theories of how people intuitively or spontaneously conceptualize geographic space and time, because it has implications for various theoretical and methodological questions concerning the design and use of spatial information tools. Of particular concern is the design of effective spatial information tools that lead to better communication.", "title": "" } ]
scidocsrr
b6a058c977a98d7999ccfd7681813218
3D Segmentation with Exponential Logarithmic Loss for Highly Unbalanced Object Sizes
[ { "docid": "7730b770c0be4a86a926cbae902c1416", "text": "In this paper, we propose an end-to-end trainable Convolutional Neural Network (CNN) architecture called the M-net, for segmenting deep (human) brain structures from Magnetic Resonance Images (MRI). A novel scheme is used to learn to combine and represent 3D context information of a given slice in a 2D slice. Consequently, the M-net utilizes only 2D convolution though it operates on 3D data, which makes M-net memory efficient. The segmentation method is evaluated on two publicly available datasets and is compared against publicly available model based segmentation algorithms as well as other classification based algorithms such as Random Forrest and 2D CNN based approaches. Experiment results show that the M-net outperforms all these methods in terms of dice coefficient and is at least 3 times faster than other methods in segmenting a new volume which is attractive for clinical use.", "title": "" }, { "docid": "76ad212ccd103c93d45c1ffa0e208b45", "text": "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors.", "title": "" } ]
[ { "docid": "60710c3339b68892f39c3421737006c0", "text": "The impact of television (TV) advertisements (commercials) on children's eating behaviour and health is of critical interest. In a preliminary study we examined lean, over weight and obese children's ability to recognise eight food and eight non-food related adverts in a repeated measures design. Their consumption of sweet and savoury, high and low fat snack foods were measured after both sessions. Whilst there was no significant difference in the number of non-food adverts recognised between the lean and obese children, the obese children did recognise significantly more of the food adverts. The ability to recognise the food adverts significantly correlated with the amount of food eaten after exposure to them. The overall snack food intake of the obese and overweight children was significantly higher than the lean children in the control (non-food advert) condition. The consumption of all the food offered increased post food advert with the exception of the low-fat savoury snack. These data demonstrate obese children's heightened alertness to food related cues. Moreover, exposure to such cues induce increased food intake in all children. As suggested the relationship between TV viewing and childhood obesity appears not merely a matter of excessive sedentary activity. Exposure to food adverts promotes consumption.", "title": "" }, { "docid": "742dbd75ad995d5c51c4cbce0cc7f8cc", "text": "Grasping objects under uncertainty remains an open problem in robotics research. This uncertainty is often due to noisy or partial observations of the object pose or shape. To enable a robot to react appropriately to unforeseen effects, it is crucial that it continuously takes sensor feedback into account. While visual feedback is important for inferring a grasp pose and reaching for an object, contact feedback offers valuable information during manipulation and grasp acquisition. In this paper, we use model-free deep reinforcement learning to synthesize control policies that exploit contact sensing to generate robust grasping under uncertainty. We demonstrate our approach on a multi-fingered hand that exhibits more complex finger coordination than the commonly used twofingered grippers. We conduct extensive experiments in order to assess the performance of the learned policies, with and without contact sensing. While it is possible to learn grasping policies without contact sensing, our results suggest that contact feedback allows for a significant improvement of grasping robustness under object pose uncertainty and for objects with a complex shape.", "title": "" }, { "docid": "e8157c3ae710bda46da6928e736ffc35", "text": "Music has been an inherent part of human life when it comes to recreation; entertainment and much recently, even as a therapeutic medium. The way music is composed, played and listened to has witnessed an enormous transition from the age of magnetic tape recorders to the recent age of digital music players streaming music from the cloud. What has remained intact is the special relation that music shares with human emotions. We most often choose to listen to a song or music which best fits our mood at that instant. In spite of this strong correlation, most of the music softwares present today are still devoid of providing the facility of mood-aware play-list generation. 
This increases the time music listeners take in manually choosing a list of songs suiting a particular mood or occasion, which can be avoided by annotating songs with the relevant emotion category they convey. The problem, however, lies in the overhead of manual annotation of music with its corresponding mood and the challenge is to identify this aspect automatically and intelligently. The study of mood recognition in the field of music has gained a lot of momentum in the recent years with machine learning and data mining techniques contributing considerably to analyze and identify the relation of mood with music. We take the same inspiration forward and contribute by making an effort to build a system for automatic identification of mood underlying the audio songs by mining their spectral, temporal audio features. Our focus is specifically on Indian Popular Hindi songs. We have analyzed various data classification algorithms in order to learn, train and test the model representing the moods of these audio songs and developed an open source framework for the same. We have been successful to achieve a satisfactory precision of 70% to 75% in identifying the mood underlying the Indian popular music by introducing the bagging (ensemble) of random forest approach experimented over a list of 4600 audio clips.", "title": "" }, { "docid": "d131f4f22826a2083d35dfa96bf2206b", "text": "The ranking of n objects based on pairwise comparisons is a core machine learning problem, arising in recommender systems, ad placement, player ranking, biological applications and others. In many practical situations the true pairwise comparisons cannot be actively measured, but a subset of all n(n−1)/2 comparisons is passively and noisily observed. Optimization algorithms (e.g., the SVM) could be used to predict a ranking with fixed expected Kendall tau distance, while achieving an Ω(n) lower bound on the corresponding sample complexity. However, due to their centralized structure they are difficult to extend to online or distributed settings. In this paper we show that much simpler algorithms can match the same Ω(n) lower bound in expectation. Furthermore, if an average of O(n log(n)) binary comparisons are measured, then one algorithm recovers the true ranking in a uniform sense, while the other predicts the ranking more accurately near the top than the bottom. We discuss extensions to online and distributed ranking, with benefits over traditional alternatives.", "title": "" }, { "docid": "ebb27d659246af9010248371fa22e733", "text": "Business Intelligence (BI) solutions require the design and implementation of complex processes (denoted ETL) that extract, transform, and load data from the sources to a common repository. New applications, like for example, real-time data warehousing, require agile and flexible tools that allow BI users to take timely decisions based on extremely up-to-date data. This calls for new ETL tools able to adapt to constant changes and quickly produce and modify executable code. A way to achieve this is to make ETL processes become aware of the business processes in the organization, in order to easily identify which data are required, and when and how to load them in the data warehouse. Therefore, we propose to model ETL processes using the standard representation mechanism denoted BPMN (Business Process Modeling and Notation). In this paper we present a BPMN-based metamodel for conceptual modeling of ETL processes. 
This metamodel is based on a classification of ETL objects resulting from a study of the most used commercial and open source ETL tools.", "title": "" }, { "docid": "2281d739c6858d35eb5f3650d2d03474", "text": "We discuss an implementation of the RRT* optimal motion planning algorithm for the half-car dynamical model to enable autonomous high-speed driving. To develop fast solutions of the associated local steering problem, we observe that the motion of a special point (namely, the front center of oscillation) can be modeled as a double integrator augmented with fictitious inputs. We first map the constraints on tire friction forces to constraints on these augmented inputs, which provides instantaneous, state-dependent bounds on the curvature of geometric paths feasibly traversable by the front center of oscillation. Next, we map the vehicle's actual inputs to the augmented inputs. The local steering problem for the half-car dynamical model can then be transformed to a simpler steering problem for the front center of oscillation, which we solve efficiently by first constructing a curvature-bounded geometric path and then imposing a suitable speed profile on this geometric path. Finally, we demonstrate the efficacy of the proposed motion planner via numerical simulation results.", "title": "" }, { "docid": "e4b6aaa3f2548fa0f59973c317298f5e", "text": "Perceived discrimination has been studied with regard to its impact on several types of health effects. This meta-analysis provides a comprehensive account of the relationships between multiple forms of perceived discrimination and both mental and physical health outcomes. In addition, this meta-analysis examines potential mechanisms by which perceiving discrimination may affect health, including through psychological and physiological stress responses and health behaviors. Analysis of 134 samples suggests that when weighting each study's contribution by sample size, perceived discrimination has a significant negative effect on both mental and physical health. Perceived discrimination also produces significantly heightened stress responses and is related to participation in unhealthy and nonparticipation in healthy behaviors. These findings suggest potential pathways linking perceived discrimination to negative health outcomes.", "title": "" }, { "docid": "1cefbe0177c56d92e34c4b5a88a29099", "text": "Typical tasks of future service robots involve grasping and manipulating a large variety of objects differing in size and shape. Generating stable grasps on 3D objects is considered to be a hard problem, since many parameters such as hand kinematics, object geometry, material properties and forces have to be taken into account. This results in a high-dimensional space of possible grasps that cannot be searched exhaustively. We believe that the key to find stable grasps in an efficient manner is to use a special representation of the object geometry that can be easily analyzed. In this paper, we present a novel grasp planning method that evaluates local symmetry properties of objects to generate only candidate grasps that are likely to be of good quality. We achieve this by computing the medial axis which represents a 3D object as a union of balls. We analyze the symmetry information contained in the medial axis and use a set of heuristics to generate geometrically and kinematically reasonable candidate grasps. These candidate grasps are tested for force-closure. 
We present the algorithm and show experimental results on various object models using an anthropomorphic hand of a humanoid robot in simulation.", "title": "" }, { "docid": "1fcc1acdd4b7b170693af3d7da40f7f4", "text": "The intended purpose of this monograph is to provide a general overview of allergy diagnostics for health care professionals who care for patients with allergic disease. For a more comprehensive review of allergy diagnostic testing, readers can refer to the Allergy Diagnostic Practice Parameters. A key message is that a positive allergy test result (skin or blood) indicates only the presence of allergen specific IgE (called sensitization). It does not necessarily mean clinical allergy (ie, allergic symptoms with exposure). It is important for this reason that the allergy evaluation be based on the patient's history and directed by a health care professional with sufficient understanding of allergy diagnostic testing to use the information obtained from his/her evaluation of the patient to determine (1) what allergy diagnostic tests to order, (2) how to interpret the allergy diagnostic test results, and (3) how to use the information obtained from the allergy evaluation to develop an appropriate therapeutic treatment plan.", "title": "" }, { "docid": "1e69c1aef1b194a27d150e45607abd5a", "text": "Methods of semantic relatedness are essential for a wide range of tasks such as information retrieval and text mining. This paper, concerned with these automated methods, attempts to improve the Gloss Vector semantic relatedness measure for more reliable estimation of relatedness between two input concepts. Generally, this measure applies a frequency cut-off to bigrams in order to remove low- and high-frequency words which usually do not end up being significant features. However, this naive cutting approach can lead to loss of valuable information. By employing pointwise mutual information (PMI) as a measure of association between features, we will try to enforce the foregoing elimination step in a statistical fashion. Applying both approaches to the biomedical domain, using MEDLINE as corpus, MeSH as thesaurus, and an available reference standard of 311 concept pairs manually rated for semantic relatedness, we will show that PMI is a more effective approach than frequency cut-off for removing insignificant features.", "title": "" }, { "docid": "50de4c8fec2194dbaccd7de460e95b5e", "text": "This paper presents a novel hardware architecture using FPGA-based reconfigurable computing (RC) for accurate calculation of dense disparity maps in real-time, stereo-vision systems. Recent stereo-vision hardware solutions have proposed local-area approaches. Although parallelism can be easily exploited using local methods by replicating the window-based image elaborations, accuracy is limited because the disparity result is optimized by locally searching for the minimum value of a cost function. Global methods improve the quality of the stereo-vision disparity maps at the expense of increasing computational complexity, thus making real-time application not viable for conventional computing. This problem becomes even more evident when stereo vision is a single step integrated into a more complete image elaboration flow, where the depth maps are used for further detection, recognition, stereo reconstruction, or 3D enhancement processing. 
Our approach exploits a parallel and fully pipelined architecture to implement a global method for the calculation of dense disparity maps based on the dynamic programming optimization of the Hamming distance of the Census-transform cost function. The resulting stereovision core produces results that are significantly more accurate than existing hardware solutions using FPGAs that are based upon local approaches. The design was implemented and evaluated on an Altera Stratix-III E260 FPGA in a GiDEL PROCStar-III board. Tests were performed on 640×480 stereo images, with a Census transform window size = 3, correlation window size = 5, and disparity ranges of 30 and 50. Our hardware architecture achieved a speedup of about 319 and 512 respectively for the two disparity ranges, when compared to an optimized C++ implementation executed on a 2.26 GHz Xeon E5520 core. High accuracy in the output disparity map, together with high performance in terms of frames per second, make the proposed architecture an ideal solution for 3D robot-assisted medical systems, tracking, and autonomous navigation systems, where accuracy and speed constraints are very stringent.", "title": "" }, { "docid": "e89acdeb493d156390851a2a57231baf", "text": "Several approaches have recently been proposed for learning decentralized deep multiagent policies that coordinate via a differentiable communication channel. While these policies are effective for many tasks, interpretation of their induced communication strategies has remained a challenge. Here we propose to interpret agents’ messages by translating them. Unlike in typical machine translation problems, we have no parallel data to learn from. Instead we develop a translation model based on the insight that agent messages and natural language strings mean the same thing if they induce the same belief about the world in a listener. We present theoretical guarantees and empirical evidence that our approach preserves both the semantics and pragmatics of messages by ensuring that players communicating through a translation layer do not suffer a substantial loss in reward relative to players with a common language.1", "title": "" }, { "docid": "0860b29f52d403a0ff728a3e356ec071", "text": "Neuroanatomy has entered a new era, culminating in the search for the connectome, otherwise known as the brain's wiring diagram. While this approach has led to landmark discoveries in neuroscience, potential neurosurgical applications and collaborations have been lagging. In this article, the authors describe the ideas and concepts behind the connectome and its analysis with graph theory. Following this they then describe how to form a connectome using resting state functional MRI data as an example. Next they highlight selected insights into healthy brain function that have been derived from connectome analysis and illustrate how studies into normal development, cognitive function, and the effects of synthetic lesioning can be relevant to neurosurgery. Finally, they provide a précis of early applications of the connectome and related techniques to traumatic brain injury, functional neurosurgery, and neurooncology.", "title": "" }, { "docid": "4306cc9072c5b53f6fc7b79574dac117", "text": "It is popular to use real-world data to evaluate data mining techniques. However, there are some disadvantages to use real-world data for such purposes. Firstly, real-world data in most domains is difficult to obtain for several reasons, such as budget, technical or ethical. 
Secondly, the use of many of the real-world data is restricted, those data sets do either not contain specific patterns that are easy to mine or the data needs special preparation and the algorithm needs very specific settings in order to find patterns in it. The solution to this could be the generation of synthetic, \"meaningful data\" (data with intrinsic patterns). This paper presents a novel approach for generating synthetic data by developing a tool, including novel algorithms for specific data mining patterns, and a user-friendly interface, which is able to create large data sets with predefined classification rules, multilinear regression patterns. A preliminary run of the prototype proves that the generation of large amounts of such \"meaningful data\" is possible. Also the proposed approach could be extended to a further development for generating synthetic data with other intrinsic patterns.", "title": "" }, { "docid": "fd0c32b1b4e52f397d0adee5de7e381c", "text": "Context. Electroencephalography (EEG) is a complex signal and can require several years of training, as well as advanced signal processing and feature extraction methodologies to be correctly interpreted. Recently, deep learning (DL) has shown great promise in helping make sense of EEG signals due to its capacity to learn good feature representations from raw data. Whether DL truly presents advantages as compared to more traditional EEG processing approaches, however, remains an open question. Objective. In this work, we review 156 papers that apply DL to EEG, published between January 2010 and July 2018, and spanning different application domains such as epilepsy, sleep, braincomputer interfacing, and cognitive and affective monitoring. We extract trends and highlight interesting approaches from this large body of literature in order to inform future research and formulate recommendations. Methods. Major databases spanning the fields of science and engineering were queried to identify relevant studies published in scientific journals, conferences, and electronic preprint repositories. Various data items were extracted for each study pertaining to 1) the data, 2) the preprocessing methodology, 3) the DL design choices, 4) the results, and 5) the reproducibility of the experiments. These items were then analyzed one by one to uncover trends. Results. Our analysis reveals that the amount of EEG data used across studies varies from less than ten minutes to thousands of hours, while the number of samples seen during training by a network varies from a few dozens to several millions, depending on how epochs are extracted. Interestingly, we saw that more than half the studies used publicly available data and that there has also been a clear shift from intra-subject to inter-subject approaches over the last few years. About 40% of the studies used convolutional neural networks (CNNs), while 14% used recurrent neural networks (RNNs), most often with a total of 3 to 10 layers. Moreover, almost one-half of the studies trained their models on raw or preprocessed EEG time series. Finally, the median gain in accuracy of DL approaches over traditional baselines was 5.4% across all relevant studies. More importantly, however, we noticed studies often suffer from poor reproducibility: a majority of papers would be hard or impossible to reproduce given the unavailability of their data and code. ∗The first two authors contributed equally to this work. Significance. 
To help the community progress and share work more effectively, we provide a list of recommendations for future studies. We also make our summary table of DL and EEG papers available and invite authors of published work to contribute to it directly.", "title": "" }, { "docid": "de7331c328ba54b7ddd8a542aec3b19f", "text": "Predicting the next location a user tends to visit is an important task for applications like location-based advertising, traffic planning, and tour recommendation. We consider the next location prediction problem for semantic trajectory data, wherein each GPS record is attached with a text message that describes the user's activity. In semantic trajectories, the confluence of spatiotemporal transitions and textual messages indicates user intents at a fine granularity and has great potential in improving location prediction accuracies. Nevertheless, existing methods designed for GPS trajectories fall short in capturing latent user intents for such semantics-enriched trajectory data. We propose a method named semantics-enriched recurrent model (SERM). SERM jointly learns the embeddings of multiple factors (user, location, time, keyword) and the transition parameters of a recurrent neural network in a unified framework. Therefore, it effectively captures semantics-aware spatiotemporal transition regularities to improve location prediction accuracies. Our experiments on two real-life semantic trajectory datasets show that SERM achieves significant improvements over state-of-the-art methods.", "title": "" }, { "docid": "473968c14db4b189af126936fd5486ca", "text": "Disclaimer/Complaints regulations If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please let the Library know, stating your reasons. In case of a legitimate complaint, the Library will make the material inaccessible and/or remove it from the website. Please Ask the Library: http://uba.uva.nl/en/contact, or a letter to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam, The Netherlands. You will be contacted as soon as possible.", "title": "" }, { "docid": "1d12470ab31735721a1f50ac48ac65bd", "text": "In this work, we investigate the role of relational bonds in keeping students engaged in online courses. Specifically, we quantify the manner in which students who demonstrate similar behavior patterns influence each other’s commitment to the course through their interaction with them either explicitly or implicitly. To this end, we design five alternative operationalizations of relationship bonds, which together allow us to infer a scaled measure of relationship between pairs of students. Using this, we construct three variables, namely number of significant bonds, number of significant bonds with people who have dropped out in the previous week, and number of such bonds with people who have dropped in the current week. Using a survival analysis, we are able to measure the prediction strength of these variables with respect to dropout at each time point. Results indicate that higher numbers of significant bonds predicts lower rates of dropout; while loss of significant bonds is associated with higher rates of dropout.", "title": "" }, { "docid": "7342475811dd69ef812e2b2f91c283ba", "text": "Detecting pedestrians in cluttered scenes is a challenging problem in computer vision. The difficulty is added when several pedestrians overlap in images and occlude each other. 
We observe, however, that the occlusion/visibility statuses of overlapping pedestrians provide useful mutual relationship for visibility estimation—the visibility estimation of one pedestrian facilitates the visibility estimation of another. In this paper, we propose a mutual visibility deep model that jointly estimates the visibility statuses of overlapping pedestrians. The visibility relationship among pedestrians is learned from the deep model for recognizing co-existing pedestrians. Then the evidence of co-existing pedestrians is used for improving the single pedestrian detection results. Compared with existing image-based pedestrian detection approaches, our approach has the lowest average miss rate on the Caltech-Train dataset and the ETH dataset. Experimental results show that the mutual visibility deep model effectively improves the pedestrian detection results. The mutual visibility deep model leads to 6–15 % improvements on multiple benchmark datasets.", "title": "" }, { "docid": "b5831795da97befd3241b9d7d085a20f", "text": "Want to learn more about the background and concepts of Internet congestion control? This indispensable text draws a sketch of the future in an easily comprehensible fashion. Special attention is placed on explaining the how and why of congestion control mechanisms complex issues so far hardly understood outside the congestion control research community. A chapter on Internet Traffic Management from the perspective of an Internet Service Provider demonstrates how the theory of congestion control impacts on the practicalities of service delivery.", "title": "" } ]
scidocsrr
d07943ef2af3f48482c50d0e507a2967
Introducing the Webb Spam Corpus: Using Email Spam to Identify Web Spam Automatically
[ { "docid": "4d56abf003caaa11e5bef74a14bd44e0", "text": "The increasing importance of search engines to commercial web sites has given rise to a phenomenon we call \"web spam\", that is, web pages that exist only to mislead search engines into (mis)leading users to certain web sites. Web spam is a nuisance to users as well as search engines: users have a harder time finding the information they need, and search engines have to cope with an inflated corpus, which in turn causes their cost per query to increase. Therefore, search engines have a strong incentive to weed out spam web pages from their index.We propose that some spam web pages can be identified through statistical analysis: Certain classes of spam pages, in particular those that are machine-generated, diverge in some of their properties from the properties of web pages at large. We have examined a variety of such properties, including linkage structure, page content, and page evolution, and have found that outliers in the statistical distribution of these properties are highly likely to be caused by web spam.This paper describes the properties we have examined, gives the statistical distributions we have observed, and shows which kinds of outliers are highly correlated with web spam.", "title": "" } ]
[ { "docid": "396dd0517369d892d249bb64fa410128", "text": "Within the philosophy of language, pragmatics has tended to be seen as an adjunct to, and a means of solving problems in, semantics. A cognitive-scientific conception of pragmatics as a mental processing system responsible for interpreting ostensive communicative stimuli (specifically, verbal utterances) has effected a transformation in the pragmatic issues pursued and the kinds of explanation offered. Taking this latter perspective, I compare two distinct proposals on the kinds of processes, and the architecture of the system(s), responsible for the recovery of speaker meaning (both explicitly and implicitly communicated meaning). 1. Pragmatics as a Cognitive System 1.1 From Philosophy of Language to Cognitive Science Broadly speaking, there are two perspectives on pragmatics: the ‘philosophical’ and the ‘cognitive’. From the philosophical perspective, an interest in pragmatics has been largely motivated by problems and issues in semantics. A familiar instance of this was Grice’s concern to maintain a close semantic parallel between logical operators and their natural language counterparts, such as ‘not’, ‘and’, ‘or’, ‘if’, ‘every’, ‘a/some’, and ‘the’, in the face of what look like quite major divergences in the meaning of the linguistic elements (see Grice 1975, 1981). The explanation he provided was pragmatic, i.e. in terms of what occurs when the logical semantics of these terms is put to rational communicative use. Consider the case of ‘and’: (1) a. Mary went to a movie and Sam read a novel. b. She gave him her key and he opened the door. c. She insulted him and he left the room. While (a) seems to reflect the straightforward truth-functional symmetrical connection, (b) and (c) communicate a stronger asymmetric relation: temporal Many thanks to Richard Breheny, Sam Guttenplan, Corinne Iten, Deirdre Wilson and Vladimir Zegarac for helpful comments and support during the writing of this paper. Address for correspondence: Department of Phonetics & Linguistics, University College London, Gower Street, London WC1E 6BT, UK. Email: robyn linguistics.ucl.ac.uk Mind & Language, Vol. 17 Nos 1 and 2 February/April 2002, pp. 127–148.  Blackwell Publishers Ltd. 2002, 108 Cowley Road, Oxford, OX4 1JF, UK and 350 Main Street, Malden, MA 02148, USA.", "title": "" }, { "docid": "615d2f03b2ff975242e90103e98d70d3", "text": "The insurance industries consist of more than thousand companies in worldwide. And collect more than one trillions of dollars premiums in each year. When a person or entity make false insurance claims in order to obtain compensation or benefits to which they are not entitled is known as an insurance fraud. The total cost of an insurance fraud is estimated to be more than forty billions of dollars. So detection of an insurance fraud is a challenging problem for the insurance industry. The traditional approach for fraud detection is based on developing heuristics around fraud indicator. The auto\\vehicle insurance fraud is the most prominent type of insurance fraud, which can be done by fake accident claim. In this paper, focusing on detecting the auto\\vehicle fraud by using, machine learning technique. Also, the performance will be compared by calculation of confusion matrix. 
This can help to calculate accuracy, precision, and recall.", "title": "" }, { "docid": "d93bc6fa3822dac43949d72a82e5c047", "text": "In breast cancer, gene expression analyses have defined five tumor subtypes (luminal A, luminal B, HER2-enriched, basal-like and claudin-low), each of which has unique biologic and prognostic features. Here, we comprehensively characterize the recently identified claudin-low tumor subtype. The clinical, pathological and biological features of claudin-low tumors were compared to the other tumor subtypes using an updated human tumor database and multiple independent data sets. These main features of claudin-low tumors were also evaluated in a panel of breast cancer cell lines and genetically engineered mouse models. Claudin-low tumors are characterized by the low to absent expression of luminal differentiation markers, high enrichment for epithelial-to-mesenchymal transition markers, immune response genes and cancer stem cell-like features. Clinically, the majority of claudin-low tumors are poor prognosis estrogen receptor (ER)-negative, progesterone receptor (PR)-negative, and epidermal growth factor receptor 2 (HER2)-negative (triple negative) invasive ductal carcinomas with a high frequency of metaplastic and medullary differentiation. They also have a response rate to standard preoperative chemotherapy that is intermediate between that of basal-like and luminal tumors. Interestingly, we show that a group of highly utilized breast cancer cell lines, and several genetically engineered mouse models, express the claudin-low phenotype. Finally, we confirm that a prognostically relevant differentiation hierarchy exists across all breast cancers in which the claudin-low subtype most closely resembles the mammary epithelial stem cell. These results should help to improve our understanding of the biologic heterogeneity of breast cancer and provide tools for the further evaluation of the unique biology of claudin-low tumors and cell lines.", "title": "" }, { "docid": "e54a0387984553346cf718a6fbe72452", "text": "Learning distributed representations for relation instances is a central technique in downstream NLP applications. In order to address semantic modeling of relational patterns, this paper constructs a new dataset that provides multiple similarity ratings for every pair of relational patterns on the existing dataset (Zeichner et al., 2012). In addition, we conduct a comparative study of different encoders including additive composition, RNN, LSTM, and GRU for composing distributed representations of relational patterns. We also present Gated Additive Composition, which is an enhancement of additive composition with the gating mechanism. Experiments show that the new dataset does not only enable detailed analyses of the different encoders, but also provides a gauge to predict successes of distributed representations of relational patterns in the relation classification task.", "title": "" }, { "docid": "cebd2d1ae41ea1179256b885cbd13d3d", "text": "The unconstrained acquisition of facial data in real-world conditions may result in face images with significant pose variations, illumination changes, and occlusions, affecting the performance of facial landmark localization and recognition methods. In this paper, a novel method, robust to pose, illumination variations, and occlusions is proposed for joint face frontalization and landmark localization. 
Unlike the state-of-the-art methods for landmark localization and pose correction, where large amount of manually annotated images or 3D facial models are required, the proposed method relies on a small set of frontal images only. By observing that the frontal facial image of both humans and animals, is the one having the minimum rank of all different poses, a model which is able to jointly recover the frontalized version of the face as well as the facial landmarks is devised. To this end, a suitable optimization problem is solved, concerning minimization of the nuclear norm (convex surrogate of the rank function) and the matrix $$\\ell _1$$ ℓ 1 norm accounting for occlusions. The proposed method is assessed in frontal view reconstruction of human and animal faces, landmark localization, pose-invariant face recognition, face verification in unconstrained conditions, and video inpainting by conducting experiment on 9 databases. The experimental results demonstrate the effectiveness of the proposed method in comparison to the state-of-the-art methods for the target problems.", "title": "" }, { "docid": "20acbae6f76e3591c8b696481baffc90", "text": "A long-standing challenge in coreference resolution has been the incorporation of entity-level information – features defined over clusters of mentions instead of mention pairs. We present a neural network based coreference system that produces high-dimensional vector representations for pairs of coreference clusters. Using these representations, our system learns when combining clusters is desirable. We train the system with a learning-to-search algorithm that teaches it which local decisions (cluster merges) will lead to a high-scoring final coreference partition. The system substantially outperforms the current state-of-the-art on the English and Chinese portions of the CoNLL 2012 Shared Task dataset despite using few hand-engineered features.", "title": "" }, { "docid": "152e8e88e8f560737ec0c20ae9aa0335", "text": "UNLABELLED\nDysfunctional use of the mobile phone has often been conceptualized as a 'behavioural addiction' that shares most features with drug addictions. In the current article, we challenge the clinical utility of the addiction model as applied to mobile phone overuse. We describe the case of a woman who overuses her mobile phone from two distinct approaches: (1) a symptom-based categorical approach inspired from the addiction model of dysfunctional mobile phone use and (2) a process-based approach resulting from an idiosyncratic clinical case conceptualization. In the case depicted here, the addiction model was shown to lead to standardized and non-relevant treatment, whereas the clinical case conceptualization allowed identification of specific psychological processes that can be targeted with specific, empirically based psychological interventions. This finding highlights that conceptualizing excessive behaviours (e.g., gambling and sex) within the addiction model can be a simplification of an individual's psychological functioning, offering only limited clinical relevance.\n\n\nKEY PRACTITIONER MESSAGE\nThe addiction model, applied to excessive behaviours (e.g., gambling, sex and Internet-related activities) may lead to non-relevant standardized treatments. Clinical case conceptualization allowed identification of specific psychological processes that can be targeted with specific empirically based psychological interventions. 
The biomedical model might lead to the simplification of an individual's psychological functioning with limited clinical relevance.", "title": "" }, { "docid": "470db66b9bcff16a9a559810ce352dfa", "text": "Abstract The state of security on the Internet is poor and progress toward increased protection is slow. This has given rise to a class of action referred to as “Ethical Hacking”. Companies are releasing software with little or no testing and no formal verification and expecting consumers to debug their product for them. For dot.com companies time-to-market is vital, security is not perceived as a marketing advantage, and implementing a secure design process an expensive sunk expense such that there is no economic incentive to produce bug-free software. There are even legislative initiatives to release software manufacturers from legal responsibility to their defective software.", "title": "" }, { "docid": "90e7d54c908b308e6236846d99888792", "text": "Vehicles now include Electronic Control Units (ECUs) that communicate with each other via broadcast networks. Cyber-security professionals have shown that such embedded communication networks can be compromised. Very recently, it has been shown that embedded devices connected to commercial vehicle networks can be manipulated to perform unintended actions by injecting spoofed messages. Such attacks can be hard to detect as they can mimic safety critical actions performed by ECUs. We present a precedence graph-based anomaly detection technique to detect malicious message injections. Our approach can detect malicious message injections and is able to distinguish them from safety critical actions like hard braking.", "title": "" }, { "docid": "cf9fe52efd734c536d0a7daaf59a9bcd", "text": "Image-based sequence recognition has been a long-standing research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most of the existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences in arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performances in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts. Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies the generality of it.", "title": "" }, { "docid": "dc3417d01a998ee476aeafc0e9d11c74", "text": "We present an overview of techniques for quantizing convolutional neural networks for inference with integer weights and activations. 1. 
Per-channel quantization of weights and per-layer quantization of activations to 8-bits of precision post-training produces classification accuracies within 2% of floating point networks for a wide variety of CNN architectures (section 3.1). 2. Model sizes can be reduced by a factor of 4 by quantizing weights to 8bits, even when 8-bit arithmetic is not supported. This can be achieved with simple, post training quantization of weights (section 3.1). 3. We benchmark latencies of quantized networks on CPUs and DSPs and observe a speedup of 2x-3x for quantized implementations compared to floating point on CPUs. Speedups of up to 10x are observed on specialized processors with fixed point SIMD capabilities, like the Qualcomm QDSPs with HVX (section 6). 4. Quantization-aware training can provide further improvements, reducing the gap to floating point to 1% at 8-bit precision. Quantization-aware training also allows for reducing the precision of weights to four bits with accuracy losses ranging from 2% to 10%, with higher accuracy drop for smaller networks (section 3.2). 5. We introduce tools in TensorFlow and TensorFlowLite for quantizing convolutional networks (Section 3). 6. We review best practices for quantization-aware training to obtain high accuracy with quantized weights and activations (section 4). 7. We recommend that per-channel quantization of weights and per-layer quantization of activations be the preferred quantization scheme for hardware acceleration and kernel optimization. We also propose that future processors and hardware accelerators for optimized inference support precisions of 4, 8 and 16 bits (section 7).", "title": "" }, { "docid": "346ee5be7c74b28f7090c909861d66ac", "text": "This paper introduces a new framework to construct fast and efficient sensing matrices for practical compressive sensing, called Structurally Random Matrix (SRM). In the proposed framework, we prerandomize the sensing signal by scrambling its sample locations or flipping its sample signs and then fast-transform the randomized samples and finally, subsample the resulting transform coefficients to obtain the final sensing measurements. SRM is highly relevant for large-scale, real-time compressive sensing applications as it has fast computation and supports block-based processing. In addition, we can show that SRM has theoretical sensing performance comparable to that of completely random sensing matrices. Numerical simulation results verify the validity of the theory and illustrate the promising potentials of the proposed sensing framework.", "title": "" }, { "docid": "e440ad1afbbfbf5845724fd301051d92", "text": "The paper considers the conceptual approach for organization of the vertical hierarchical links between the scalable distributed computing paradigms: Cloud Computing, Fog Computing and Dew Computing. In this paper, the Dew Computing is described and recognized as a new structural layer in the existing distributed computing hierarchy. In the existing computing hierarchy, the Dew computing is positioned as the ground level for the Cloud and Fog computing paradigms. Vertical, complementary, hierarchical division from Cloud to Dew Computing satisfies the needs of highand low-end computing demands in everyday life and work. These new computing paradigms lower the cost and improve the performance, particularly for concepts and applications such as the Internet of Things (IoT) and the Internet of Everything (IoE). 
In addition, the Dew computing paradigm will require new programming models that will efficiently reduce the complexity and improve the productivity and usability of scalable distributed computing, following the principles of High-Productivity computing.", "title": "" }, { "docid": "d7a9465ac031cf7be6f3e74276805f0f", "text": "Half of American workers have a level of education that does not match the level of education required for their job. Of these, a majority are overeducated, i.e. have more schooling than necessary to perform their job (see, e.g., Leuven & Oosterbeek, 2011). In this paper, we use data from the National Longitudinal Survey of Youth 1979 (NLSY79) combined with the pooled 1989-1991 waves of the CPS to provide some of the first evidence regarding the dynamics of overeducation over the life cycle. Shedding light on this question is key to disentangle the role played by labor market frictions versus other factors such as selection on unobservables, compensating differentials or career mobility prospects. Overall, our results suggest that overeducation is a fairly persistent phenomenon, with 79% of workers remaining overeducated after one year. Initial overeducation also has an impact on wages much later in the career, which points to the existence of scarring effects. Finally, we find some evidence of duration dependence, with a 6.5 point decrease in the exit rate from overeducation after having spent five years overeducated. JEL Classification: J24; I21 ∗Duke University †University of North Carolina at Chapel Hill and IZA ‡Duke University and IZA.", "title": "" }, { "docid": "498eada57edb9120da164c5cb396198b", "text": "We propose a passive blackbox-based technique for determining the type of access point (AP) connected to a network. Essentially, a stimulant (i.e., packet train) that emulates normal data transmission is sent through the access point. Since access points from different vendors are architecturally heterogeneous (e.g., chipset, firmware, driver), each AP will act upon the packet train differently. By applying wavelet analysis to the resultant packet train, a distinct but reproducible pattern is extracted allowing a clear classification of different AP types. This has two important applications: (1) as a system administrator, this technique can be used to determine if a rogue access point has connected to the network; and (2) as an attacker, fingerprinting the access point is necessary to launch driver/firmware specific attacks. Extensive experiments were conducted (over 60GB of data was collected) to differentiate 6 APs. We show that this technique can classify APs with a high accuracy (in some cases, we can classify successfully 100% of the time) with as little as 100000 packets. Further, we illustrate that this technique is independent of the stimulant traffic type (e.g., TCP or UDP). Finally, we show that the AP profile is stable across multiple models of the same AP.", "title": "" }, { "docid": "c8233fcbc4d07dbd076a4d7a4fdf3b0c", "text": "A 15-b 1-Msample/s digitally self-calibrated pipeline analog-to-digital converter (ADC) is presented. A radix 1.93, 1 b per stage design is employed. The digital self-calibration accounts for capacitor mismatch, comparator offset, charge injection, finite op-amp gain, and capacitor nonlinearity contributing to DNL. A THD of –90 dB was measured with a 9.8756-kHz sine-wave input. The DNL was measured to be within +0.25 LSB at 15 b, and the INL was measured to be within +1.25 LSB at 15 b. 
The die area is 9.3 mm x 8.3 mm and operates on +4-V power supply with 1.8-W power dissipation. The ADC is fabricated in an 11-V, 4-GHz, 2.4-pm BiCMOS process.", "title": "" }, { "docid": "a5052a27ebbfb07b02fa18b3d6bff6fc", "text": "Popular techniques for domain adaptation such as the feature augmentation method of Daumé III (2009) have mostly been considered for sparse binary-valued features, but not for dense realvalued features such as those used in neural networks. In this paper, we describe simple neural extensions of these techniques. First, we propose a natural generalization of the feature augmentation method that uses K + 1 LSTMs where one model captures global patterns across all K domains and the remaining K models capture domain-specific information. Second, we propose a novel application of the framework for learning shared structures by Ando and Zhang (2005) to domain adaptation, and also provide a neural extension of their approach. In experiments on slot tagging over 17 domains, our methods give clear performance improvement over Daumé III (2009) applied on feature-rich CRFs.", "title": "" }, { "docid": "1b7a8725023d20e36ef929b427db51e5", "text": "Electronic Customer Relationship Management (eCRM) has become the latest paradigm in the world of Customer Relationship Management. Recent business surveys suggest that up to 50% of such implementations do not yield measurable returns on investment. A secondary analysis of 13 case studies suggests that many of these limited success implementations can be attributed to usability and resistance factors. The objective of this paper is to review the general usability and resistance principles in order build an integrative framework for analyzing eCRM case studies. The conclusions suggest that if organizations want to get the most from their eCRM implementations they need to revisit the general principles of usability and resistance and apply them.", "title": "" }, { "docid": "9c1d8f50bd46f7c7b6e98c3c61edc67d", "text": "This paper presents the implementation of a complete fingerprint biometric cryptosystem in a Field Programmable Gate Array (FPGA). This is possible thanks to the use of a novel fingerprint feature, named QFingerMap, which is binary, length-fixed, and ordered. Security of Authentication on FPGA is further improved because information stored is protected due to the design of a cryptosystem based on Fuzzy Commitment. Several samples of fingers as well as passwords can be fused at feature level with codewords of an error correcting code to generate non-sensitive data. System performance is illustrated with experimental results corresponding to 560 fingerprints acquired in live by an optical sensor and processed by the system in a Xilinx Virtex 6 FPGA. Depending on the realization, more or less accuracy is obtained, being possible a perfect authentication (zero Equal Error Rate), with the advantages of real-time operation, low power consumption, and a very small device.", "title": "" }, { "docid": "935a576ef026c6891f9ba77ac6dc2507", "text": "This is Part II of two papers evaluating the feasibility of providing all energy for all purposes (electric power, transportation, and heating/cooling), everywhere in the world, from wind, water, and the sun (WWS). 
In Part I, we described the prominent renewable energy plans that have been proposed and discussed the characteristics of WWS energy systems, the global demand for and availability of WWS energy, quantities and areas required for WWS infrastructure, and supplies of critical materials. Here, we discuss methods of addressing the variability of WWS energy to ensure that power supply reliably matches demand (including interconnecting geographically dispersed resources, using hydroelectricity, using demand-response management, storing electric power on site, over-sizing peak generation capacity and producing hydrogen with the excess, storing electric power in vehicle batteries, and forecasting weather to project energy supplies), the economics of WWS generation and transmission, the economics of WWS use in transportation, and policy measures needed to enhance the viability of a WWS system. We find that the cost of energy in a 100% WWS will be similar to the cost today. We conclude that barriers to a 100% conversion to WWS power worldwide are primarily social and political, not technological or even economic. © 2010 Elsevier Ltd. All rights reserved. 1. Variability and reliability in a 100% WWS energy system in all regions of the world One of the major concerns with the use of energy supplies, such as wind, solar, and wave power, which produce variable output is whether such supplies can provide reliable sources of electric power second-by-second, daily, seasonally, and yearly. A new WWS energy infrastructure must be able to provide energy on demand at least as reliably as does the current infrastructure (e.g., De Carolis and Keith, 2005). In general, any electricity system must be able to respond to changes in demand over seconds, minutes, hours, seasons, and years, and must be able to accommodate unanticipated changes in the availability of generation. With the current system, electricity-system operators use ‘‘automatic generation control’’ (AGC) (or frequency regulation) to respond to variation on the order of seconds to a few minutes; spinning reserves to respond to variation on the order of minutes to an hour; and peak-power generation to respond to hourly variation (De Carolis and Keith, 2005; Kempton and Tomic, 2005a; Electric Power Research Institute, 1997). AGC and spinning reserves have very low cost, typically less than 10% of the total cost of electricity (Kempton and Tomic, 2005a), and are likely to remain this inexpensive even with large amounts of wind power (EnerNex, 2010; DeCesaro et al., 2009), but peak-power generation can be very expensive. The main challenge for the current electricity system is that electric power demand varies during the day and during the year, while most supply (coal, nuclear, and geothermal) is constant during the day, which means that there is a difference to be made up by peak- and gap-filling resources such as natural gas and hydropower. Another challenge to the current system is that extreme events and unplanned maintenance can shut down plants unexpectedly. For example, unplanned maintenance can shut down coal plants, extreme heat waves can cause cooling water to warm sufficiently to shut down nuclear plants, supply disruptions can curtail the availability of natural gas, and droughts can reduce the availability of hydroelectricity. A WWS electricity system offers new challenges but also new opportunities with respect to reliably meeting energy demands. 
On the positive side, WWS technologies generally suffer less downtime than do current electric power technologies. For example, the average coal plant in the US from 2000 to 2004 was down 6.5% of the year for unscheduled maintenance and 6.0% of the year for scheduled maintenance (North American Electric Reliability Corporation, 2009a), but modern wind turbines have a down time of only 0–2% over land and 0–5% over the ocean (Dong Energy et al.,", "title": "" } ]
scidocsrr
40a1f02fd7c25fb9ef0a3d2e38136175
A brief introduction to the use of event-related potentials in studies of perception and attention.
[ { "docid": "3132ed8b0f2e257c3e9e8b0a716cd72c", "text": "Auditory evoked potentials were recorded from the vertex of subjects who listened selectively to a series of tone pips in one ear and ignored concurrent tone pips in the other ear. The negative component of the evoked potential peaking at 80 to 110 milliseconds was substantially larger for the attended tones. This negative component indexed a stimulus set mode of selective attention toward the tone pips in one ear. A late positive component peaking at 250 to 400 milliseconds reflected the response set established to recognize infrequent, higher pitched tone pips in the attended series.", "title": "" } ]
[ { "docid": "f0db74061a2befca317f9333a0712ab9", "text": "This paper tries to give a gentle introduction to deep learning in medical image processing, proceeding from theoretical foundations to applications. We first discuss general reasons for the popularity of deep learning, including several major breakthroughs in computer science. Next, we start reviewing the fundamental basics of the perceptron and neural networks, along with some fundamental theory that is often omitted. Doing so allows us to understand the reasons for the rise of deep learning in many application domains. Obviously medical image processing is one of these areas which has been largely affected by this rapid progress, in particular in image detection and recognition, image segmentation, image registration, and computer-aided diagnosis. There are also recent trends in physical simulation, modeling, and reconstruction that have led to astonishing results. Yet, some of these approaches neglect prior knowledge and hence bear the risk of producing implausible results. These apparent weaknesses highlight current limitations of deep ()learning. However, we also briefly discuss promising approaches that might be able to resolve these problems in the future.", "title": "" }, { "docid": "fef45863bc531960dbf2a7783995bfdb", "text": "The main goal of facial attribute recognition is to determine various attributes of human faces, e.g. facial expressions, shapes of mouth and nose, headwears, age and race, by extracting features from the images of human faces. Facial attribute recognition has a wide range of potential application, including security surveillance and social networking. The available approaches, however, fail to consider the correlations and heterogeneities between different attributes. This paper proposes that by utilizing these correlations properly, an improvement can be achieved on the recognition of different attributes. Therefore, we propose a facial attribute recognition approach based on the grouping of different facial attribute tasks and a multi-task CNN structure. Our approach can fully utilize the correlations between attributes, and achieve a satisfactory recognition result on a large number of attributes with limited amount of parameters. Several modifications to the traditional architecture have been tested in the paper, and experiments have been conducted to examine the effectiveness of our approach.", "title": "" }, { "docid": "75ef838f680c322baa3fbfff96f30fe8", "text": "Designing distributed real-time systems as being composed of communicating objects offers many advantages with respect to modularity and extensibility of these systems. However, distributed real-time applications exhibit communication patterns that significantly differ from the traditional object invocation style. The publisher/subscriber model for inter-object communication matches well with these patterns. Any implementation of that model must address the problems of binding subscribers to publishers, of routing and filtering of messages, as well as reliability, efficiency and latency of message delivery. In the context of real-time applications, all these issues must be subject to a rigid inspection with respect to meeting real-time requirements. We argue that for embedded control systems built around smart microcontroller-powered devices these requirements can only be met when exploiting the properties of the underlying network. 
The CAN-Bus (CAN: Controller Area Network) which is an emerging standard in the field of real-time embedded systems is particularly suited to implement a publisher/subscriber model of communication. In this paper, we present an implementation of the real-time publisher/subscriber model that exploits the underlying facilities of the CANBus. In particular, we introduce a novel addressing scheme for publisher/subscriber communication that makes efficient use of the CAN-Bus addressing method. We provide a detailed design and implementation details along with some preliminary performance estimations.", "title": "" }, { "docid": "e7d955c48e5bdd86ae21a61fcd130ae2", "text": "We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an unsupervised manner. DGI relies on maximizing mutual information between patch representations and corresponding high-level summaries of graphs—both derived using established graph convolutional network architectures. The learnt patch representations summarize subgraphs centered around nodes of interest, and can thus be reused for downstream node-wise learning tasks. In contrast to most prior approaches to unsupervised learning with GCNs, DGI does not rely on random walk objectives, and is readily applicable to both transductive and inductive learning setups. We demonstrate competitive performance on a variety of node classification benchmarks, which at times even exceeds the performance of supervised learning.", "title": "" }, { "docid": "35d3dcb77620a69388e90318085c744d", "text": "2-D face recognition in the presence of large pose variations presents a significant challenge. When comparing a frontal image of a face to a near profile image, one must cope with large occlusions, non-linear correspondences, and significant changes in appearance due to viewpoint. Stereo matching has been used to handle these problems, but performance of this approach degrades with large pose changes. We show that some of this difficulty is due to the effect that foreshortening of slanted surfaces has on window-based matching methods, which are needed to provide robustness to lighting change. We address this problem by designing a new, dynamic programming stereo algorithm that accounts for surface slant. We show that on the CMU PIE dataset this method results in significant improvements in recognition performance.", "title": "" }, { "docid": "fc5782aa3152ca914c6ca5cf1aef84eb", "text": "We introduce Learn++, an algorithm for incremental training of neural network (NN) pattern classifiers. The proposed algorithm enables supervised NN paradigms, such as the multilayer perceptron (MLP), to accommodate new data, including examples that correspond to previously unseen classes. Furthermore, the algorithm does not require access to previously used data during subsequent incremental learning sessions, yet at the same time, it does not forget previously acquired knowledge. Learn++ utilizes ensemble of classifiers by generating multiple hypotheses using training data sampled according to carefully tailored distributions. The outputs of the resulting classifiers are combined using a weighted majority voting procedure. We present simulation results on several benchmark datasets as well as a real-world classification task. Initial results indicate that the proposed algorithm works rather well in practice. 
A theoretical upper bound on the error of the classifiers constructed by Learn++ is also provided.", "title": "" }, { "docid": "799be9729a01234c236431f5c754de8f", "text": "This meta-analytic review of 42 studies covering 8,009 participants (ages 4-20) examines the relation of moral emotion attributions to prosocial and antisocial behavior. A significant association is found between moral emotion attributions and prosocial and antisocial behaviors (d = .26, 95% CI [.15, .38]; d = .39, 95% CI [.29, .49]). Effect sizes differ considerably across studies and this heterogeneity is attributed to moderator variables. Specifically, effect sizes for predicted antisocial behavior are larger for self-attributed moral emotions than for emotions attributed to hypothetical story characters. Effect sizes for prosocial and antisocial behaviors are associated with several other study characteristics. Results are discussed with respect to the potential significance of moral emotion attributions for the social behavior of children and adolescents.", "title": "" }, { "docid": "307dac4f0cc964a539160780abb1c123", "text": "One of the main current applications of intelligent systems is recommender systems (RS). RS can help users to find relevant items in huge information spaces in a personalized way. Several techniques have been investigated for the development of RS. One of them is evolutionary computational (EC) techniques, which is an emerging trend with various application areas. The increasing interest in using EC for web personalization, information retrieval and RS fostered the publication of survey papers on the subject. However, these surveys have analyzed only a small number of publications, around ten. This study provides a comprehensive review of more than 65 research publications focusing on five aspects we consider relevant for such: the recommendation technique used, the datasets and the evaluation methods adopted in their experimental parts, the baselines employed in the experimental comparison of proposed approaches and the reproducibility of the reported experiments. At the end of this review, we discuss negative and positive aspects of these papers, as well as point out opportunities, challenges and possible future research directions. To the best of our knowledge, this review is the most comprehensive review of various approaches using EC in RS. Thus, we believe this review will be a relevant material for researchers interested in EC and RS.", "title": "" }, { "docid": "e896b306c5282da3b0fd58aaf635c027", "text": "In June 2011 the U.S. Supreme Court ruled that video games enjoy full free speech protections and that the regulation of violent game sales to minors is unconstitutional. The Supreme Court also referred to psychological research on violent video games as \"unpersuasive\" and noted that such research contains many methodological flaws. Recent reviews in many scholarly journals have come to similar conclusions, although much debate continues. Given past statements by the American Psychological Association linking video game and media violence with aggression, the Supreme Court ruling, particularly its critique of the science, is likely to be shocking and disappointing to some psychologists. One possible outcome is that the psychological community may increase the conclusiveness of their statements linking violent games to harm as a form of defensive reaction. 
However, in this article the author argues that the psychological community would be better served by reflecting on this research and considering whether the scientific process failed by permitting and even encouraging statements about video game violence that exceeded the data or ignored conflicting data. Although it is likely that debates on this issue will continue, a move toward caution and conservatism as well as increased dialogue between scholars on opposing sides of this debate will be necessary to restore scientific credibility. The current article reviews the involvement of the psychological science community in the Brown v. Entertainment Merchants Association case and suggests that it might learn from some of the errors in this case for the future.", "title": "" }, { "docid": "e045619ede30efb3338e6278f23001d7", "text": "Particle filtering has become a standard tool for non-parametric estimation in computer vision tracking applications. It is an instance of stochastic search. Each particle represents a possible state of the system. Higher concentration of particles at any given region of the search space implies higher probabilities. One of its major drawbacks is the exponential growth in the number of particles for increasing dimensions in the search space. We present a graph based filtering framework for hierarchical model tracking that is capable of substantially alleviate this issue. The method relies on dividing the search space in subspaces that can be estimated separately. Low correlated subspaces may be estimated with parallel, or serial, filters and have their probability distributions combined by a special aggregator filter. We describe a new algorithm to extract parameter groups, which define the subspaces, from the system model. We validate our method with different graph structures within a simple hand tracking experiment with both synthetic and real data", "title": "" }, { "docid": "134df9e78f54ddf0e1bd3b70b01d08eb", "text": "We consider the problem of learning a record matching package (classifier) in an active learning setting. In active learning, the learning algorithm picks the set of examples to be labeled, unlike more traditional passive learning setting where a user selects the labeled examples. Active learning is important for record matching since manually identifying a suitable set of labeled examples is difficult. Previous algorithms that use active learning for record matching have serious limitations: The packages that they learn lack quality guarantees and the algorithms do not scale to large input sizes. We present new algorithms for this problem that overcome these limitations. Our algorithms are fundamentally different from traditional active learning approaches, and are designed ground up to exploit problem characteristics specific to record matching. We include a detailed experimental evaluation on realworld data demonstrating the effectiveness of our algorithms.", "title": "" }, { "docid": "3d15103ad837b29d48b05b62d1358a07", "text": "Background: With the rapid population ageing that is occurring world-wide, there is increasing interest in “smart home” technologies that can assist older adults to continue living at home with safety and independence. This systematic review and critical evaluation of the world wide literature assesses the effectiveness and feasibility of smart-home technologies for promoting independence, health, well-being and quality of life, in older adults. 
Methods: A total of 1877 “smart home” publications were identified by the initial search of peer reviewed journals. Of these, 21 met our inclusion criteria for the review and were subject to data extraction and quality assessment. Results: Smart-home technologies included different types of active and passive sensors, monitoring devices, robotics and environmental control systems. One study assessed effectiveness of a smart home technology. Sixteen reported on the feasibility of smart-home technology and four were observational studies. Conclusion: Older adults were reported to readily accept smart-home technologies, especially if they benefited physical activity, independence and function and if privacy concerns were addressed. Given the modest number of objective analyses, there is a need for further scientific analysis of a range of smart home technologies to promote community living. rather than being hospitalized or institutionalized [10]. Smart-home technologies can also promote independent living and safety. This has the potential to optimize quality of life and reduce the stress on aged-care facilities and other health resources [13]. The challenge with smart-home technologies is to create a home environment that is safe and secure to reduce falls, disability, stress, fear or social isolation [14]. Contemporary smart home technology systems are versatile in function and user friendly. Smart home technologies usually aim to perform functions without disturbing the user and without causing any pain, inconvenience or movement restrictions. Martin and colleagues performed a preliminary analysis of the acceptance of smart-home technologies [15]. The results from this review were limited as no studies met inclusion criteria [15]. Given however, the rapid progression of new smart home technologies, a new systematic review of the literature is required. This paper addresses that need by analysing the range of studies undertaken to assess the impact of these technologies on the quality of life experienced by an ageing population accessing these supports. The broader context incorporates consideration of the social and emotional well-being needs of this population. The current review aimed to answer the following research question: “What is the effectiveness of smart-home technologies for promoting independence, health, well-being and quality of life in older adults?”", "title": "" }, { "docid": "37651559403dca847dc0b4baed59d7d7", "text": "Reading strategies have been shown to improve comprehension levels, especially for readers lacking adequate prior knowledge. Just as the process of knowledge accumulation is time-consuming for human readers, it is resource-demanding to impart rich general domain knowledge into a language model via pre-training (Radford et al., 2018; Devlin et al., 2018).
Inspired by reading strategies identified in cognitive science, and given limited computational resources — just a pre-trained model and a fixed number of training instances — we therefore propose three simple domain-independent strategies aimed to improve non-extractive machine reading comprehension (MRC): (i) BACK AND FORTH READING that considers both the original and reverse order of an input sequence, (ii) HIGHLIGHTING, which adds a trainable embedding to the text embedding of tokens that are relevant to the question and candidate answers, and (iii) SELF-ASSESSMENT that generates practice questions and candidate answers directly from the text in an unsupervised manner. By fine-tuning a pre-trained language model (Radford et al., 2018) with our proposed strategies on the largest existing general domain multiple-choice MRC dataset RACE, we obtain a 5.8% absolute increase in accuracy over the previous best result achieved by the same pre-trained model fine-tuned on RACE without the use of strategies. We further fine-tune the resulting model on a target task, leading to new state-of-the-art results on six representative non-extractive MRC datasets from different domains (i.e., ARC, OpenBookQA, MCTest, MultiRC, SemEval-2018, and ROCStories). These results indicate the effectiveness of the proposed strategies and the versatility and general applicability of our fine-tuned models that incorporate the strategies.", "title": "" }, { "docid": "9d3ca4966c26c6691398157a22531a1d", "text": "Bipedal locomotion skills are challenging to develop. Control strategies often use local linearization of the dynamics in conjunction with reduced-order abstractions to yield tractable solutions. In these model-based control strategies, the controller is often not fully aware of many details, including torque limits, joint limits, and other non-linearities that are necessarily excluded from the control computations for simplicity. Deep reinforcement learning (DRL) offers a promising model-free approach for controlling bipedal locomotion which can more fully exploit the dynamics. However, current results in the machine learning literature are often based on ad-hoc simulation models that are not based on corresponding hardware. Thus it remains unclear how well DRL will succeed on realizable bipedal robots. In this paper, we demonstrate the effectiveness of DRL using a realistic model of Cassie, a bipedal robot. By formulating a feedback control problem as finding the optimal policy for a Markov Decision Process, we are able to learn robust walking controllers that imitate a reference motion with DRL. Controllers for different walking speeds are learned by imitating simple time-scaled versions of the original reference motion. Controller robustness is demonstrated through several challenging tests, including sensory delay, walking blindly on irregular terrain and unexpected pushes at the pelvis. We also show we can interpolate between individual policies and that robustness can be improved with an interpolated policy.", "title": "" }, { "docid": "471579f955f8b68a357c8780a7775cc9", "text": "In addition to practitioners who care for male patients, with the increased use of high-resolution anoscopy, practitioners who care for women are seeing more men in their practices as well. Some diseases affecting the penis can impact on their sexual partners. Many of the lesions and neoplasms of the penis occur on the vulva as well.
In addition, there are common and rare lesions unique to the penis. A review of the scope of penile lesions and neoplasms that may present in a primary care setting is presented to assist in developing a differential diagnosis if such a patient is encountered, as well as for practitioners who care for their sexual partners. A familiarity will assist with recognition, as well as when consultation is needed.", "title": "" }, { "docid": "1377bac68319fcc57fbafe6c21e89107", "text": "In recent years, robotics in agriculture sector with its implementation based on precision agriculture concept is the newly emerging technology. The main reason behind automation of farming processes are saving the time and energy required for performing repetitive farming tasks and increasing the productivity of yield by treating every crop individually using precision farming concept. Designing of such robots is modeled based on particular approach and certain considerations of agriculture environment in which it is going to work. These considerations and different approaches are discussed in this paper. Also, prototype of an autonomous Agriculture Robot is presented which is specifically designed for seed sowing task only. It is a four wheeled vehicle which is controlled by LPC2148 microcontroller. Its working is based on the precision agriculture which enables efficient seed sowing at optimal depth and at optimal distances between crops and their rows, specific for each crop type.", "title": "" }, { "docid": "c649d226448782ee972c620bea3e0ea3", "text": "Parents of children with developmental disabilities, particularly autism spectrum disorders (ASDs), are at risk for high levels of distress. The factors contributing to this are unclear. This study investigated how child characteristics influence maternal parenting stress and psychological distress. Participants consisted of mothers and developmental-age matched preschool-aged children with ASD (N = 51) and developmental delay without autism (DD) (N = 22). Evidence for higher levels of parenting stress and psychological distress was found in mothers in the ASD group compared to the DD group. Children's problem behavior was associated with increased parenting stress and psychological distress in mothers in the ASD and DD groups. This relationship was stronger in the DD group. Daily living skills were not related to parenting stress or psychological distress. Results suggest clinical services aiming to support parents should include a focus on reducing problem behaviors in children with developmental disabilities.", "title": "" }, { "docid": "b9ea38ab2c6c68af37a46d92c8501b68", "text": "In this paper we introduce a gamification model for encouraging sustainable multi-modal urban travel in modern European cities. Our aim is to provide a mechanism that encourages users to reflect on their current travel behaviours and to engage in more environmentally friendly activities that lead to the formation of sustainable, long-term travel behaviours. To achieve this our users track their own behaviours, set goals, manage their progress towards those goals, and respond to challenges. Our approach uses a point accumulation and level achievement metaphor to abstract from the underlying specifics of individual behaviours and goals.", "title": "" }, { "docid": "8921cffb633b0ea350b88a57ef0d4437", "text": "This paper addresses the problem of identifying likely topics of texts by their position in the text. It describes the automated training and evaluation of an Optimal Position Policy, a method of locating the likely positions of topic-bearing sentences based on genre-specific regularities of discourse structure. This method can be used in applications such as information retrieval, routing, and text summarization.", "title": "" }, { "docid": "71c067065a5d3ada7f789798e0cf3424", "text": "Fog computing paradigm extends the storage, networking, and computing facilities of the cloud computing toward the edge of the networks while offloading the cloud data centers and reducing service latency to the end users. However, the characteristics of fog computing arise new security and privacy challenges. The existing security and privacy measurements for cloud computing cannot be directly applied to the fog computing due to its features, such as mobility, heterogeneity, and large-scale geo-distribution. This paper provides an overview of existing security and privacy concerns, particularly for the fog computing. Afterward, this survey highlights ongoing research effort, open challenges, and research trends in privacy and security issues for fog computing.", "title": "" } ]
scidocsrr
093a7deb32dc7c57d680d7c4a24818fd
Unsupervised Cross-Domain Transfer in Policy Gradient Reinforcement Learning via Manifold Alignment
[ { "docid": "217e76cc7d8a7d680b40d5c658460513", "text": "The reinforcement learning paradigm is a popular way to addr ess problems that have only limited environmental feedback, rather than correctly labeled exa mples, as is common in other machine learning contexts. While significant progress has been made t o improve learning in a single task, the idea oftransfer learninghas only recently been applied to reinforcement learning ta sks. The core idea of transfer is that experience gained in learning t o perform one task can help improve learning performance in a related, but different, task. In t his article we present a framework that classifies transfer learning methods in terms of their capab ilities and goals, and then use it to survey the existing literature, as well as to suggest future direct ions for transfer learning work.", "title": "" } ]
[ { "docid": "296da9be6a4b3c6d111f875157e196c8", "text": "Histopathology image analysis is a gold standard for cancer recognition and diagnosis. Automatic analysis of histopathology images can help pathologists diagnose tumor and cancer subtypes, alleviating the workload of pathologists. There are two basic types of tasks in digital histopathology image analysis: image classification and image segmentation. Typical problems with histopathology images that hamper automatic analysis include complex clinical representations, limited quantities of training images in a dataset, and the extremely large size of singular images (usually up to gigapixels). The property of extremely large size for a single image also makes a histopathology image dataset be considered large-scale, even if the number of images in the dataset is limited. In this paper, we propose leveraging deep convolutional neural network (CNN) activation features to perform classification, segmentation and visualization in large-scale tissue histopathology images. Our framework transfers features extracted from CNNs trained by a large natural image database, ImageNet, to histopathology images. We also explore the characteristics of CNN features by visualizing the response of individual neuron components in the last hidden layer. Some of these characteristics reveal biological insights that have been verified by pathologists. According to our experiments, the framework proposed has shown state-of-the-art performance on a brain tumor dataset from the MICCAI 2014 Brain Tumor Digital Pathology Challenge and a colon cancer histopathology image dataset. The framework proposed is a simple, efficient and effective system for histopathology image automatic analysis. We successfully transfer ImageNet knowledge as deep convolutional activation features to the classification and segmentation of histopathology images with little training data. CNN features are significantly more powerful than expert-designed features.", "title": "" }, { "docid": "d80580490ac7d968ff08c2a9ee159028", "text": "Statistical relational AI (StarAI) aims at reasoning and learning in noisy domains described in terms of objects and relationships by combining probability with first-order logic. With huge advances in deep learning in the current years, combining deep networks with first-order logic has been the focus of several recent studies. Many of the existing attempts, however, only focus on relations and ignore object properties. The attempts that do consider object properties are limited in terms of modelling power or scalability. In this paper, we develop relational neural networks (RelNNs) by adding hidden layers to relational logistic regression (the relational counterpart of logistic regression). We learn latent properties for objects both directly and through general rules. Back-propagation is used for training these models. A modular, layer-wise architecture facilitates utilizing the techniques developed within deep learning community to our architecture. Initial experiments on eight tasks over three real-world datasets show that RelNNs are promising models for relational learning.", "title": "" }, { "docid": "d2a205f2a6c6deff5d9560af8cf8ff7f", "text": "MIDI files, when paired with corresponding audio recordings, can be used as ground truth for many music information retrieval tasks. We present a system which can efficiently match and align MIDI files to entries in a large corpus of audio content based solely on content, i.e., without using any metadata. 
The core of our approach is a convolutional network-based cross-modality hashing scheme which transforms feature matrices into sequences of vectors in a common Hamming space. Once represented in this way, we can efficiently perform large-scale dynamic time warping searches to match MIDI data to audio recordings. We evaluate our approach on the task of matching a huge corpus of MIDI files to the Million Song Dataset. 1. TRAINING DATA FOR MIR Central to the task of content-based Music Information Retrieval (MIR) is the curation of ground-truth data for tasks of interest (e.g. timestamped chord labels for automatic chord estimation, beat positions for beat tracking, prominent melody time series for melody extraction, etc.). The quantity and quality of this ground-truth is often instrumental in the success of MIR systems which utilize it as training data. Creating appropriate labels for a recording of a given song by hand typically requires person-hours on the order of the duration of the data, and so training data availability is a frequent bottleneck in content-based MIR tasks. MIDI files that are time-aligned to matching audio can provide ground-truth information [8,25] and can be utilized in score-informed source separation systems [9, 10]. A MIDI file can serve as a timed sequence of note annotations (a “piano roll”). It is much easier to estimate information such as beat locations, chord labels, or predominant melody from these representations than from an audio signal. A number of tools have been developed for inferring this kind of information from MIDI files [6, 7, 17, 19]. Halevy et al. [11] argue that some of the biggest successes in machine learning came about because “...a large training set of the input-output behavior that we seek to automate is available to us in the wild.”", "title": "" }, { "docid": "3559528779be843cfbf1adc27cd795d8", "text": "Although monoclonal in origin, most tumors appear to contain a heterogeneous population of cancer cells. This observation is traditionally explained by postulating variations in tumor microenvironment and coexistence of multiple genetic subclones, created by progressive and divergent accumulation of independent somatic mutations. An additional explanation, however, envisages human tumors not as mere monoclonal expansions of transformed cells, but rather as complex tridimensional tissues where cancer cells become functionally heterogeneous as a result of differentiation. According to this second scenario, tumors act as caricatures of their corresponding normal tissues and are sustained in their growth by a pathological counterpart of normal adult stem cells, cancer stem cells. This model, first developed in human myeloid leukemias, is today being extended to solid tumors, such as breast and brain cancer. We review the biological basis and the therapeutic implications of the stem cell model of cancer.", "title": "" }, { "docid": "064505e942f5f8fd5f7e2db5359c7fe8", "text": "The hopping of kangaroos is reminiscent of a bouncing ball or the action of a pogo stick. This suggests a significant storage and recovery of energy in elastic elements.
One might surmise that the kangaroo's first hop would require a large amount of energy whereas subsequent hops could rely extensively on elastic rebound. If this were the case, then the kangaroo's unusual saltatory mode of locomotion should be an energetically inexpensive way to move.", "title": "" }, { "docid": "57823e3df6cf778c68eaac638a44a290", "text": "Understanding humans from photographs has always been a fundamental goal of computer vision. Early works focused on simple tasks such as detecting the location of individuals by means of bounding boxes. As the field progressed, harder and more higher level tasks have been undertaken. For example, from human detection came the 2D and 3D human pose estimation in which the task consisted of identifying the location in the image or space of all different body parts, e.g., head, torso, knees, arms, etc. Human attributes also became a great source of interest as they allow recognizing individuals and other properties such as gender or age. Later, the attention turned to the recognition of the action being performed. This, in general, relies on the previous works on pose estimation and attribute classification. Currently, even higher level tasks are being conducted such as predicting the motivations of human behaviour or identifying the fashionability of an individual from a photograph. In this thesis we have developed a hierarchy of tools that cover all these range of problems, from low level feature point descriptors to high level fashion-aware conditional random fields models, all with the objective of understanding humans from monocular RGB images. In order to build these high level models it is paramount to have a battery of robust and reliable low and mid level cues. Along these lines, we have proposed two low-level keypoint descriptors: one based on the theory of the heat diffusion on images, and the other that uses a convolutional neural network to learn discriminative image patch representations. We also introduce distinct low-level generative models for representing human pose: in particular we present a discrete model based on a directed acyclic graph and a continuous model that consists of poses clustered on a Riemannian manifold. As mid level cues we propose two 3D human pose estimation algorithms: one that estimates the 3D pose given a noisy 2D estimation, and an approach that simultaneously estimates both the 2D and 3D pose. Finally, we formulate higher level models built upon low and mid level cues for understanding humans from single images. Concretely, we focus on two different tasks in the context of fashion: semantic segmentation of clothing, and predicting the fashionability from images with metadata to ultimately provide fashion advice to the user. In summary, to robustly extract knowledge from images with the presence of humans it is necessary to build high level models that integrate low and mid level cues. In general, using and understanding strong features is critical for obtaining reliable performance. 
The main contribution of this thesis is in proposing a variety of low, mid and high level algorithms for human-centric images that can be integrated into higher level models for comprehending humans from photographs, as well as tackling novel fashion-oriented problems.", "title": "" }, { "docid": "a3256a02981c661f47bb498487bf601c", "text": "Normative theorists of the public sphere, such as Jürgen Habermas, have been very critical of the ‘old’ mass media, which were seen as unable to promote free and plural societal communication. The advent of the internet, in contrast, gave rise to hopes that it would make previously marginalized actors and arguments more visible to a broader public. To assess these claims, this article compares the internet and mass media communication. It distinguishes three levels of both the offline and the online public sphere, which differ in their structural prerequisites, in their openness for participation and in their influence on the wider society. Using this model, the article compares the levels that are most strongly structured and most influential for the wider society: the mass media and communication as organized by search engines. Using human genome research and analysing Germany and the USA, the study looks at which actors, evaluations and frames are present in the print mass media and on websites, and finds that internet communication does not differ significantly from the offline debate in the print media.", "title": "" }, { "docid": "06c32ac33dd1d8fa7fef8788caae36b9", "text": "Context-aware Systems (CASs) are becoming increasingly popular and can be found in the areas of wearable computing, mobile computing, robotics, adaptive and intelligent user interfaces. Sensors are the corner stone of context capturing however, sensed context data are commonly prone to imperfection due to the technical limitations of sensors, their availability, dysfunction, and highly dynamic nature of environment. Consequently, sensed context data might be imprecise, erroneous, conflicting, or simply missing. To limit the impact of context imperfection on the behavior of a context-aware system, a notion of Quality of Context (QoC) is used to measure quality of any information that is used as context information. Adaptation is performed only if the context data used in the decision-making has an appropriate quality level. This paper reports an analytical review of the state of the art on quality of context in context-aware systems and points to future research directions.", "title": "" }, { "docid": "b7062e40643ff1b879247a3f4ec3b07f", "text": "The question of whether there are different patterns of autonomic nervous system responses for different emotions is examined. Relevant conceptual issues concerning both the nature of emotion and the structure of the autonomic nervous system are discussed in the context of the development of research methods appropriate for studying this question. Are different emotional states associated with distinct patterns of autonomic nervous system (ANS) activity? This is an old question that is currently enjoying a modest revival in psychology. In the 1950s autonomic specificity was a key item on the agenda of the newly emerging discipline of psychophysiology, which saw as its mission the scientific exploration of the mind-body relationship using the tools of electrophysiological measurement.
But the field of psychophysiology had the misfortune of coming of age during a period in which psychology drifted away from its physiological roots, a period in which psychology was dominated by learning, behaviourism, personality theory and later by cognition. Psychophysiology in the period between 1960 and 1980 reflected these broader trends in psychology by focusing on such issues as autonomic markers of perceptual states (e.g. orienting, stimulus processing), the interplay between personality factors and ANS responsivity, operant conditioning of autonomic functions, and finally, electrophysiological markers of cognitive states. Research on autonomic specificity in emotion became increasingly rare. Perhaps as a result of these historical trends in psychology, or perhaps because research on emotion and physiology is so difficult to do well, there exists only a small body of studies on ANS specificity. Although almost all of these studies report some evidence for the existence of specificity, the prevailing zeitgeist has been that specificity has not been empirically established. At this point in time a review of the existing literature would not be very informative, for it would inevitably dissolve into a critique of methods. Instead, what I hope to accomplish in this chapter is to provide a new framework for thinking about ANS specificity, and to propose guidelines for carrying out research on this issue that will be cognizant of the recent methodological and theoretical advances that have been made both in psychophysiology and in research on emotion. Emotion as organization From the outset, the definition of emotion that underlies this chapter should be made explicit. For me the essential function of emotion is organization. The selection of emotion for preservation across time and species is based on the need for an efficient mechanism that can mobilize and organize disparate response systems to deal with environmental events that pose a threat to survival. In this view the prototypical context for human emotions is those situations in which a multi-system response must be organized quickly, where time is not available for the lengthy processes of deliberation, reformulation, planning and rehearsal; where a fine degree of co-ordination is required among systems as disparate as the muscles of the face and the organs of the viscera; and where adaptive behaviours that normally reside near the bottom of behavioural hierarchies must be instantaneously shifted to the top. Specificity versus undifferentiated arousal In this model of emotion as organization it is assumed that each component system is capable of a number of different responses, and that the emotion will guide the selection of responses from each system. Component systems differ in terms of the number of response possibilities. Thus, in the facial expressive system a selection must be made among a limited set of prototypic emotional expressions (which are but a subset of the enormous number of expressions the face is capable of assuming). A motor behaviour must also be selected from a similarly reduced set of responses consisting of fighting, fleeing, freezing, hiding, etc. All major theories of emotion would accept the proposition that activation of the ANS is one of the changes that occur during emotion. But theories differ as to how many different ANS patterns constitute the set of selection possibilities.
At one extreme are those who would argue that there are only two ANS patterns: 'off' and 'on'. The 'on' ANS pattern, according to this view, consists of a high-level, global, diffuse ANS activation, mediated primarily by the sympathetic branch of the ANS. The manifestations of this pattern – rapid and forceful contractions of the heart, rapid and deep breathing, increased systolic blood pressure, sweating, dry mouth, redirection of blood flow to large skeletal muscles, peripheral vasoconstriction, release of large amounts of epinephrine and norepinephrine from the adrenal medulla, and the resultant release of glucose from the liver – are well known. Cannon (1927) described this pattern in some detail, arguing that this kind of high-intensity, undifferentiated arousal accompanied all emotions. Among contemporary theories the notion of undifferentiated arousal is most clearly found in Mandler's theory (Mandler, 1975). However, undifferentiated arousal also played a major role in the extraordinarily influential cognitive/physiological theory of Schachter and Singer (1962). According to this theory, undifferentiated arousal is a necessary precondition for emotion – an extremely plastic medium to be moulded by cognitive processes working in concert with the available cues from the social environment. At the other extreme are those who argue that there are a large number of patterns of ANS activation, each associated with a different emotion (or subset of emotions). This is the traditional specificity position. Its classic statement is often attributed to James (1884), although Alexander (1950) provided an even more radical version. The specificity position fuelled a number of experimental studies in the 1950s and 1960s, all attempting to identify some of these autonomic patterns (e.g. Averill, 1969; Ax, 1953; Funkenstein, King and Drolette, 1954; Schachter, 1957; Sternbach, 1962). Despite these studies, all of which reported support for ANS specificity, the undifferentiated arousal theory, especially as formulated by Schachter and Singer (1962) and their followers, has been dominant for a great many years. Is the ANS capable of specific action? No matter how appealing the notion of ANS specificity might be in the abstract, there would be little reason to pursue it in the laboratory if the ANS were only capable of producing one pattern of arousal. There is no question that the pattern of high-level sympathetic arousal described earlier is one pattern that the ANS can produce. Cannon's arguments notwithstanding, I believe there now is quite ample evidence that the ANS is capable of a number of different patterns of activation. Whether these patterns are reliably associated with different emotions remains an empirical question, but the potential is surely there. A case in support of this potential for specificity can be based on: (a) the neural structure of the ANS; (b) the stimulation neurochemistry of the ANS; and (c) empirical findings.", "title": "" }, { "docid": "f3c1f43cd345669a6cb5a7ba6f1ca94c", "text": "Uric acid (UA) is the end product of purine metabolism and can reportedly act as an antioxidant. However, recently, numerous clinical and basic research approaches have revealed close associations of hyperuricemia with several disorders, particularly those comprising the metabolic syndrome.
In this review, we first outline the two molecular mechanisms underlying inflammation occurrence in relation to UA metabolism; one is inflammasome activation by UA crystallization and the other involves superoxide free radicals generated by xanthine oxidase (XO). Importantly, recent studies have demonstrated the therapeutic or preventive effects of XO inhibitors against atherosclerosis and nonalcoholic steatohepatitis, which were not previously considered to be related, at least not directly, to hyperuricemia. Such beneficial effects of XO inhibitors have been reported for other organs including the kidneys and the heart. Thus, a major portion of this review focuses on the relationships between UA metabolism and the development of atherosclerosis, nonalcoholic steatohepatitis, and related disorders. Although further studies are necessary, XO inhibitors are a potentially novel strategy for reducing the risk of many forms of organ failure characteristic of the metabolic syndrome.", "title": "" }, { "docid": "4100daf390502bf3e6fe5aa3c313afb8", "text": "Visual information retrieval (VIR) is an active and vibrant research area, which attempts at providing means for organizing, indexing, annotating, and retrieving visual information (images and videos) form large, unstructured repositories. The goal of VIR is to retrieve the highest number of relevant matches to a given query (often expressed as an example image and/or a series of keywords). In its early years (1995-2000) the research efforts were dominated by content-based approaches contributed primarily by the image and video processing community. During the past decade, it was widely recognized that the challenges imposed by the semantic gap (the lack of coincidence between an image's visual contents and its semantic interpretation) required a clever use of textual metadata (in addition to information extracted from the image's pixel contents) to make image and video retrieval solutions efficient and effective. The need to bridge (or at least narrow) the semantic gap has been one of the driving forces behind current VIR research. Additionally, other related research problems and market opportunities have started to emerge, offering a broad range of exciting problems for computer scientists and engineers to work on. In this tutorial, we present an overview of visual information retrieval (VIR) concepts, techniques, algorithms, and applications. Several topics are supported by examples written in Java, using Lucene (an open-source Java-based indexing and search implementation) and LIRE (Lucene Image REtrieval), an open-source Java-based library for content-based image retrieval (CBIR) written by Mathias Lux.\n After motivating the topic, we briefly review the fundamentals of information retrieval, present the most relevant and effective visual descriptors currently used in VIR, the most common indexing approaches for visual descriptors, the most prominent machine learning techniques used in connection with contemporary VIR solutions, as well as the challenges associated with building real-world, large scale VIR solutions, including a brief overview of publicly available datasets used in worldwide challenges, contests, and benchmarks. Throughout the tutorial, we integrate examples using LIRE, whose main features and design principles are also discussed. 
Finally, we conclude the tutorial with suggestions for deepening the knowledge in the topic, including a brief discussion of the most relevant advances, open challenges, and promising opportunities in VIR and related areas.\n The tutorial is primarily targeted at experienced Information Retrieval researchers and practitioners interested in extending their knowledge of document-based IR to equivalent concepts, techniques, and challenges in VIR. The acquired knowledge should allow participants to derive insightful conclusions and promising avenues for further investigation.", "title": "" }, { "docid": "0a9047c6dfe8dc7819e4d3772b823117", "text": "An increasing number of wireless applications rely on GPS signals for localization, navigation, and time synchronization. However, civilian GPS signals are known to be susceptible to spoofing attacks which make GPS receivers in range believe that they reside at locations different than their real physical locations. In this paper, we investigate the requirements for successful GPS spoofing attacks on individuals and groups of victims with civilian or military GPS receivers. In particular, we are interested in identifying from which locations and with which precision the attacker needs to generate its signals in order to successfully spoof the receivers. We will show, for example, that any number of receivers can easily be spoofed to one arbitrary location; however, the attacker is restricted to only few transmission locations when spoofing a group of receivers while preserving their constellation. In addition, we investigate the practical aspects of a satellite-lock takeover, in which a victim receives spoofed signals after first being locked on to legitimate GPS signals. Using a civilian GPS signal generator, we perform a set of experiments and find the minimal precision of the attacker's spoofing signals required for covert satellite-lock takeover.", "title": "" }, { "docid": "999ead7b9f02e4d2f9e3e81f61f37152", "text": "Successful long-term settlements on the Moon will need a supply of resources such as oxygen and water, yet the process of regularly transporting these resources from Earth would be prohibitively costly and dangerous. One alternative would be an approach using heterogeneous, autonomous robotic teams, which could collect and extract these resources from the surrounding environment (In-Situ Resource Utilization). The Whegs™ robotic platform, with its demonstrated capability to negotiate obstacles and traverse irregular terrain, is a good candidate for a lunar rover concept. In this research, Lunar Whegs™ is constructed as a proof-of-concept rover that would be able to navigate the surface of the moon, collect a quantity of regolith, and transport it back to a central processing station. The robot incorporates an actuated scoop, specialized feet for locomotion on loose substrates, Light Detection and Ranging (LIDAR) obstacle sensing and avoidance, and sealing and durability features for operation in an abrasive environment.", "title": "" }, { "docid": "4ba4930befdc19c32c4fb73abe35d141", "text": "Us enhance usab adaptivity and designers mod hindering thus level, increasin possibility of e aims at creat concepts and p literature, app user context an to create a ge This ontology alleviate the a download, is ex areas, person visualization.", "title": "" }, { "docid": "f86a64373a8a4bb510b92f5c38ed403e", "text": "In recent years, in-memory key-value storage systems have become more and more popular in solving real-time and interactive tasks. 
Compared with disks, memories have much higher throughput and lower latency which enables them to process data requests with much higher performance. However, since memories have much smaller capacity than disks, how to expand the capacity of in-memory storage system while maintain its high performance become a crucial problem. At the same time, since data in memories are non-persistent, the data may be lost when the system is down. In this paper, we make a case study with Redis, which is one popular in-memory key-value storage system. We find that although the latest release of Redis support clustering so that data can be stored in distributed nodes to support a larger storage capacity, its performance is limited by its decentralized design that clients usually need two connections to get their request served. To make the system more scalable, we propose a Clientside Key-to-Node Caching method that can help direct request to the right service node. Experimental results show that by applying this technique, it can significantly improve the system's performance by near 2 times. We also find that although Redis supports data replication on slave nodes to ensure data safety, it still gets a chance of losing a part of the data due to a weak consistency between master and slave nodes that its defective order of data replication and request reply may lead to losing data without notifying the client. To make it more reliable, we propose a Master-slave Semi Synchronization method which utilizes TCP protocol to ensure the order of data replication and request reply so that when a client receives an \"OK\" message, the corresponding data must have been replicated. With a significant improvement in data reliability, its performance overhead is limited within 5%.", "title": "" }, { "docid": "3fdd3c02460972f12bb12b7cf30e2af4", "text": "A small but growing North American trend is the publication of maps of crime on the Internet. A number of web sites allow observers to view the spatial distribution of crime in various American cities, often to a considerable resolution, and increasingly in an interactive format. The use of Geographical Information Systems (GIS) technology to map crime is a rapidly expanding field that is, as this paper will explain, still in a developmental stage, and a number of technical and ethical issues remain to be resolved. The public right to information about local crime has to be balanced by a respect for the privacy of crime victims. Various techniques are being developed to assist crime mappers to aggregate spatial data, both to make their product easier to comprehend and to protect identification of the addresses of crime victims. These data aggregation techniques, while preventing identification of individuals, may also be inadvertently producing maps with the appearance of ‘greater risk’ in low crime areas. When some types of crime mapping have the potential to cause falling house prices, increasing insurance premiums or business abandonment, conflicts may exist between providing a public service and protecting the individual, leaving the cartographer vulnerable to litigation.", "title": "" }, { "docid": "7064b7bf9baf4e59a99f9a4641af8430", "text": "A smart home needs to be human-centric, where it tries to fulfill human needs given the devices it has. Various works are developed to provide homes with reasoning and planning capability to fulfill goals, but most do not support complex sequence of plans or require significant manual effort in devising subplans. 
This is further aggravated by the need to optimize conflicting personal goals. A solution is to solve the planning problem represented as constraint satisfaction problem (CSP). But CSP uses hard constraints and, thus, cannot handle optimization and partial goal fulfillment efficiently. This paper aims to extend this approach to weighted CSP. Knowledge representation to help in generating planning rules is also proposed, as well as methods to improve performances. Case studies show that the system can provide intelligent and complex plans from activities generated from semantic annotations of the devices, as well as optimization to maximize personal constraints’ fulfillment. Note to Practitioners—Smart home should maximize the fulfillment of personal goals that are often conflicting. For example, it should try to fulfill as much as possible the requests made by both the mother and daughter who wants to watch TV but both having different channel preferences. That said, every person has a set of goals or constraints that they hope the smart home can fulfill. Therefore, human-centric system that automates the loosely coupled devices of the smart home to optimize the goals or constraints of individuals in the home is developed. Automated planning is done using converted services extracted from devices, where conversion is done using existing tools and concepts from Web technologies. Weighted constraint satisfaction that provides the declarative approach to cover large problem domain to realize the automated planner with optimization capability is proposed. Details to speed up planning through search space reduction are also given. Real-time case studies are run in a prototype smart home to demonstrate its applicability and intelligence, where every planning is performed under a maximum of 10 s. The vision of this paper is to be able to implement such system in a community, where devices everywhere can cooperate to ensure the well-being of the community.", "title": "" }, { "docid": "9e90e23aee87a181ca32a494e5d620e0", "text": "BACKGROUND\nThe rapid growth in the use of mobile phone applications (apps) provides the opportunity to increase access to evidence-based mental health care.\n\n\nOBJECTIVE\nOur goal was to systematically review the research evidence supporting the efficacy of mental health apps for mobile devices (such as smartphones and tablets) for all ages.\n\n\nMETHODS\nA comprehensive literature search (2008-2013) in MEDLINE, Embase, the Cochrane Central Register of Controlled Trials, PsycINFO, PsycTESTS, Compendex, and Inspec was conducted. We included trials that examined the effects of mental health apps (for depression, anxiety, substance use, sleep disturbances, suicidal behavior, self-harm, psychotic disorders, eating disorders, stress, and gambling) delivered on mobile devices with a pre- to posttest design or compared with a control group. The control group could consist of wait list, treatment-as-usual, or another recognized treatment.\n\n\nRESULTS\nIn total, 5464 abstracts were identified. Of those, 8 papers describing 5 apps targeting depression, anxiety, and substance abuse met the inclusion criteria. Four apps provided support from a mental health professional. Results showed significant reductions in depression, stress, and substance use. 
Within-group and between-group intention-to-treat effect sizes ranged from 0.29-2.28 and 0.01-0.48 at posttest and follow-up, respectively.\n\n\nCONCLUSIONS\nMental health apps have the potential to be effective and may significantly improve treatment accessibility. However, the majority of apps that are currently available lack scientific evidence about their efficacy. The public needs to be educated on how to identify the few evidence-based mental health apps available in the public domain to date. Further rigorous research is required to develop and test evidence-based programs. Given the small number of studies and participants included in this review, the high risk of bias, and unknown efficacy of long-term follow-up, current findings should be interpreted with caution, pending replication. Two of the 5 evidence-based mental health apps are currently commercially available in app stores.", "title": "" }, { "docid": "290fad1d2f0778ecb1807a461f8e8c2c", "text": "We present a probabilistic model with discrete latent variables that control the computation time in deep learning models such as ResNets and LSTMs. A prior on the latent variables expresses the preference for faster computation. The amount of computation for an input is determined via amortized maximum a posteriori (MAP) inference. MAP inference is performed using a novel stochastic variational optimization method. The recently proposed Adaptive Computation Time mechanism can be seen as an ad-hoc relaxation of this model. We demonstrate training using the general-purpose Concrete relaxation of discrete variables. Evaluation on ResNet shows that our method matches the speed-accuracy trade-off of Adaptive Computation Time, while allowing for evaluation with a simple deterministic procedure that has a lower memory footprint.", "title": "" }, { "docid": "ce31be5bfeb05a30c5479a3192d20f93", "text": "Network embedding represents nodes in a continuous vector space and preserves structure information from the Network. Existing methods usually adopt a “one-size-fits-all” approach when concerning multi-scale structure information, such as first- and second-order proximity of nodes, ignoring the fact that different scales play different roles in the embedding learning. In this paper, we propose an Attention-based Adversarial Autoencoder Network Embedding (AAANE) framework, which promotes the collaboration of different scales and lets them vote for robust representations. The proposed AAANE consists of two components: 1) Attention-based autoencoder effectively capture the highly non-linear network structure, which can de-emphasize irrelevant scales during training. 2) An adversarial regularization guides the autoencoder learn robust representations by matching the posterior distribution of the latent embeddings to given prior distribution. This is the first attempt to introduce attention mechanisms to multi-scale network embedding. Experimental results on real-world networks show that our learned attention parameters are different for every network and the proposed approach outperforms existing state-of-the-art approaches for network embedding.", "title": "" } ]
scidocsrr
9181bb27ffb2f945a71bd2b68cd3f905
"...No one Can Hack My Mind": Comparing Expert and Non-Expert Security Practices
[ { "docid": "93df5e4d848158d82bd29a125e5f3c84", "text": "We empirically assess whether browser security warnings are as ineffective as suggested by popular opinion and previous literature. We used Mozilla Firefox and Google Chrome’s in-browser telemetry to observe over 25 million warning impressions in situ. During our field study, users continued through a tenth of Mozilla Firefox’s malware and phishing warnings, a quarter of Google Chrome’s malware and phishing warnings, and a third of Mozilla Firefox’s SSL warnings. This demonstrates that security warnings can be effective in practice; security experts and system architects should not dismiss the goal of communicating security information to end users. We also find that user behavior varies across warnings. In contrast to the other warnings, users continued through 70.2% of Google Chrome’s SSL warnings. This indicates that the user experience of a warning can have a significant impact on user behavior. Based on our findings, we make recommendations for warning designers and researchers.", "title": "" } ]
[ { "docid": "531ebcdbcfc606d315fac7ce7042c0b4", "text": "This paper reviews the potential for using trees for the phytoremediation of heavy metal-contaminated land. It considers the following aspects: metal tolerance in trees, heavy metal uptake by trees grown on contaminated substrates, heavy metal compartmentalisation within trees, phytoremediation using trees and the phytoremediation potential of willow (Salix spp.).", "title": "" }, { "docid": "261e2e70e33ed5284f802b37e4e2864a", "text": "Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based on only local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number ofpossibly redundantinputs, as shown in various empirical evaluations with up to 90 dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.", "title": "" }, { "docid": "2366ab0736d4d88cd61a578b9287f9f5", "text": "Scientific curiosity and fascination have played a key role in human research with psychedelics along with the hope that perceptual alterations and heightened insight could benefit well-being and play a role in the treatment of various neuropsychiatric disorders. These motivations need to be tempered by a realistic assessment of the hurdles to be cleared for therapeutic use. Development of a psychedelic drug for treatment of a serious psychiatric disorder presents substantial although not insurmountable challenges. While the varied psychedelic agents described in this chapter share some properties, they have a range of pharmacologic effects that are reflected in the gradation in intensity of hallucinogenic effects from the classical agents to DMT, MDMA, ketamine, dextromethorphan and new drugs with activity in the serotonergic system. The common link seems to be serotonergic effects modulated by NMDA and other neurotransmitter effects. The range of hallucinogens suggest that they are distinct pharmacologic agents and will not be equally safe or effective in therapeutic targets. Newly synthesized specific and selective agents modeled on the legacy agents may be worth considering. 
Defining therapeutic targets that represent unmet medical need, addressing market and commercial issues, and finding treatment settings to safely test and use such drugs make the human testing of psychedelics not only interesting but also very challenging. This article is part of the Special Issue entitled 'Psychedelics: New Doors, Altered Perceptions'.", "title": "" }, { "docid": "31049561dee81500d048641023bbb6dd", "text": "Today’s networks are filled with a massive and ever-growing variety of network functions that coupled with proprietary devices, which leads to network ossification and difficulty in network management and service provision. Network Function Virtualization (NFV) is a promising paradigm to change such situation by decoupling network functions from the underlying dedicated hardware and realizing them in the form of software, which are referred to as Virtual Network Functions (VNFs). Such decoupling introduces many benefits which include reduction of Capital Expenditure (CAPEX) and Operation Expense (OPEX), improved flexibility of service provision, etc. In this paper, we intend to present a comprehensive survey on NFV, which starts from the introduction of NFV motivations. Then, we explain the main concepts of NFV in terms of terminology, standardization and history, and how NFV differs from traditional middlebox based network. After that, the standard NFV architecture is introduced using a bottom up approach, based on which the corresponding use cases and solutions are also illustrated. In addition, due to the decoupling of network functionalities and hardware, people’s attention is gradually shifted to the VNFs. Next, we provide an extensive and in-depth discussion on state-of-the-art VNF algorithms including VNF placement, scheduling, migration, chaining and multicast. Finally, to accelerate the NFV deployment and avoid pitfalls as far as possible, we survey the challenges faced by NFV and the trend for future directions. In particular, the challenges are discussed from bottom up, which include hardware design, VNF deployment, VNF life cycle control, service chaining, performance evaluation, policy enforcement, energy efficiency, reliability and security, and the future directions are discussed around the current trend towards network softwarization. © 2018 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "d3452781bd547de100287c01b24663a6", "text": "We describe the design, fabrication, and characterization of a novel pneumatic artificial muscle actuator with embedded contraction sensing. The muscle is composed of three main components: elastomer air chamber, embedded Kevlar threads, and a helical microchannel filled with a liquid conductor. When the air chamber is inflated with compressed air, the constrained length of the Kevlar threads causes the muscle to contract in the axial direction. During this contraction, the microchannel can detect the shape change of the muscle by sensing, the expansion of the air chamber. This sensing capability increases the controllability of pneumatic muscles. A novel manufacturing method is proposed to embed Kevlar threads and a helical microchannel in an elastomer tube. Then, a liquid metal is injected into the microchannel to make a soft sensor that can detect the geometrical change of the muscle. 
The muscle prototype was characterized to demonstrate its actuation and sensing capability.", "title": "" }, { "docid": "b540fb20a265d315503543a5d752f486", "text": "Deep convolutional networks have witnessed unprecedented success in various machine learning applications. Formal understanding on what makes these networks so successful is gradually unfolding, but for the most part there are still significant mysteries to unravel. The inductive bias, which reflects prior knowledge embedded in the network architecture, is one of them. In this work, we establish a fundamental connection between the fields of quantum physics and deep learning. We use this connection for asserting novel theoretical observations regarding the role that the number of channels in each layer of the convolutional network fulfills in the overall inductive bias. Specifically, we show an equivalence between the function realized by a deep convolutional arithmetic circuit (ConvAC) and a quantum many-body wave function, which relies on their common underlying tensorial structure. This facilitates the use of quantum entanglement measures as welldefined quantifiers of a deep network’s expressive ability to model intricate correlation structures of its inputs. Most importantly, the construction of a deep convolutional arithmetic circuit in terms of a Tensor Network is made available. This description enables us to carry a graph-theoretic analysis of a convolutional network, tying its expressiveness to a min-cut in the graph which characterizes it. Thus, we demonstrate a direct control over the inductive bias of the designed deep convolutional network via its channel numbers, which we show to be related to this min-cut in the underlying graph. This result is relevant to any practitioner designing a convolutional network for a specific task. We theoretically analyze convolutional arithmetic circuits, and empirically validate our findings on more common convolutional networks which involve ReLU activations and max pooling. Beyond the results described above, the description of a deep convolutional network in well-defined graph-theoretic tools and the formal structural connection to quantum entanglement, are two interdisciplinary bridges that are brought forth by this work.", "title": "" }, { "docid": "a46133149a577cf9175563d49ec8940d", "text": "Soft shadows, depth of field, and diffuse global illumination are common distribution effects, usually rendered by Monte Carlo ray tracing. Physically correct, noise-free images can require hundreds or thousands of ray samples per pixel, and take a long time to compute. Recent approaches have exploited sparse sampling and filtering; the filtering is either fast (axis-aligned), but requires more input samples, or needs fewer input samples but is very slow (sheared). We present a new approach for fast sheared filtering on the GPU. Our algorithm factors the 4D sheared filter into four 1D filters. We derive complexity bounds for our method, showing that the per-pixel complexity is reduced from O(n2 l2) to O(nl), where n is the linear filter width (filter size is O(n2)) and l is the (usually very small) number of samples for each dimension of the light or lens per pixel (spp is l2). We thus reduce sheared filtering overhead dramatically. We demonstrate rendering of depth of field, soft shadows and diffuse global illumination at interactive speeds. 
We reduce the number of samples needed by 5-8×, compared to axis-aligned filtering, and framerates are 4× faster for equal quality.", "title": "" }, { "docid": "3f2aa3cde019d56240efba61d52592a4", "text": "Drivers like global competition, advances in technology, and new attractive market opportunities foster a process of servitization and thus the search for innovative service business models. To facilitate this process, different methods and tools for the development of new business models have emerged. Nevertheless, business model approaches are missing that enable the representation of cocreation as one of the most important service-characteristics. Rooted in a cumulative research design that seeks to advance extant business model representations, this goal is to be closed by the Service Business Model Canvas (SBMC). This contribution comprises the application of thinking-aloud protocols for the formative evaluation of the SBMC. With help of industry experts and academics with experience in the service sector and business models, the usability is tested and implications for its further development derived. Furthermore, this study provides empirically based insights for the design of service business model representation that can facilitate the development of future business models.", "title": "" }, { "docid": "ad9c5cbb46a83e2b517fb548baf83ce0", "text": "Single-carrier frequency division multiple access (SC-FDMA) has been selected as the uplink access scheme in the UTRA Long Term Evolution (LTE) due to its low peak-to-average power ratio properties compared to orthogonal frequency division multiple access. Nevertheless, in order to achieve such a benefit, it requires a localized allocation of the resource blocks, which naturally imposes a severe constraint on the scheduler design. In this paper, three new channel-aware scheduling algorithms for SC-FDMA are proposed and evaluated in both local and wide area scenarios. Whereas the first maximum expansion (FME) and the recursive maximum expansion (RME) are relative simple solutions to the above-mentioned problem, the minimum area-difference to the envelope (MADE) is a more computational expensive approach, which, on the other hand, performs closer to the optimal combinatorial solution. Simulation results show that adopting a proportional fair metric all the proposed algorithms quickly reach a high level of data-rate fairness. At the same time, they definitely outperform the round-robin scheduling in terms of cell spectral efficiency with gains up to 68.8% in wide area environments.", "title": "" }, { "docid": "80c1f7e845e21513fc8eaf644b11bdc5", "text": "We describe survey results from a representative sample of 1,075 U. S. social network users who use Facebook as their primary network. Our results show a strong association between low engagement and privacy concern. Specifically, users who report concerns around sharing control, comprehension of sharing practices or general Facebook privacy concern, also report consistently less time spent as well as less (self-reported) posting, commenting and \"Like\"ing of content. The limited evidence of other significant differences between engaged users and others suggests that privacy-related concerns may be an important gate to engagement. Indeed, privacy concern and network size are the only malleable attributes that we find to have significant association with engagement. 
We manually categorize the privacy concerns finding that many are nonspecific and not associated with negative personal experiences. Finally, we identify some education and utility issues associated with low social network activity, suggesting avenues for increasing engagement amongst current users.", "title": "" }, { "docid": "b255a513fe6140fc9534087563efb36e", "text": "Traditional decision tree classifiers work with data whose values are known and precise. We extend such classifiers to handle data with uncertain information. Value uncertainty arises in many applications during the data collection process. Example sources of uncertainty include measurement/quantization errors, data staleness, and multiple repeated measurements. With uncertainty, the value of a data item is often represented not by one single value, but by multiple values forming a probability distribution. Rather than abstracting uncertain data by statistical derivatives (such as mean and median), we discover that the accuracy of a decision tree classifier can be much improved if the \"complete information\" of a data item (taking into account the probability density function (pdf)) is utilized. We extend classical decision tree building algorithms to handle data tuples with uncertain values. Extensive experiments have been conducted which show that the resulting classifiers are more accurate than those using value averages. Since processing pdfs is computationally more costly than processing single values (e.g., averages), decision tree construction on uncertain data is more CPU demanding than that for certain data. To tackle this problem, we propose a series of pruning techniques that can greatly improve construction efficiency.", "title": "" }, { "docid": "639cccdcd0294c3c32714d0a6e01ef35", "text": "The Center of Remote Sensing of Ice Sheets (CReSIS) is studying the use of the TETwalker mobile robot developed by NASA/Goddard Space Flight Center for polar seismic data acquisition. This paper discusses the design process for deploying seismic sensors within the 4-TETwalker mobile robot architecture. The 4-TETwalkerpsilas center payload node was chosen as the deployment medium. An alternative method of deploying seismic sensors that rest on the surface is included. Detailed models were also developed to study robot mobility dynamics and the deployment process. Finally, potential power options of solar sheaths and harvesting vibration energy are proposed.", "title": "" }, { "docid": "d60c51cf9ca05e5b1b176494572baaf3", "text": "Graph visualizations encode relationships between objects. Abstracting the objects into group structures provides an overview of the data. Groups can be disjoint or overlapping, and might be organized hierarchically. However, the underlying graph still needs to be represented for analyzing the data in more depth. This work surveys research in visualizing group structures as part of graph diagrams. A particular focus is the explicit visual encoding of groups, rather than only using graph layout to indicate groups implicitly. We introduce a taxonomy of visualization techniques structuring the field into four main categories: visual node attributes vary properties of the node representation to encode the grouping, juxtaposed approaches use two separate visualizations, superimposed techniques work with two aligned visual layers, and embedded visualizations tightly integrate group and graph representation. 
The derived taxonomies for group structure and visualization types are also applied to group visualizations of edges. We survey group-only, group–node, group–link, and group–network tasks that are described in the literature as use cases of group visualizations. We discuss results from evaluations of existing visualization techniques as well as main areas of application. Finally, we report future challenges based on interviews we conducted with leading researchers of the field.", "title": "" }, { "docid": "7da0d66b512c79ebc00d676cac04eefc", "text": "Social psychologists have often followed other scientists in treating religiosity primarily as a set of beliefs held by individuals. But, beliefs are only one facet of this complex and multidimensional construct. The authors argue that social psychology can best contribute to scholarship on religion by being relentlessly social. They begin with a social-functionalist approach in which beliefs, rituals, and other aspects of religious practice are best understood as means of creating a moral community. They discuss the ways that religion is intertwined with five moral foundations, in particular the group-focused \"binding\" foundations of Ingroup/loyalty, Authority/respect, Purity/sanctity. The authors use this theoretical perspective to address three mysteries about religiosity, including why religious people are happier, why they are more charitable, and why most people in the world are religious.", "title": "" }, { "docid": "615891cdd2860247d7837634bc3478f8", "text": "An exact probabilistic formulation of the “square root law” conjectured byPrice is given and a probability distribution satisfying this law is defined, for which the namePrice distribution is suggested. Properties of thePrice distribution are discussed, including its relationship with the laws ofLotka andZipf. No empirical support of applicability ofPrice distribution as a model for publication productivity could be found.", "title": "" }, { "docid": "263cad972aed13952c3b68bd7ea12e8d", "text": "Multilevel thresholding is the most important method for image processing. Conventional multilevel thresholding methods have proven to be efficient in bi-level thresholding; however, when extended to multilevel thresholding, they prove to be computationally more costly, as they comprehensively search the optimal thresholds for the objective function. This paper presents a chaotic multi-verse optimizer (CMVO) algorithm using Kapur's objective function in order to determine the optimal multilevel thresholds for image segmentation. The proposed CMVO algorithm was applied to various standard test images, and evaluated by peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). The CMVO algorithm efficiently and accurately searched multilevel thresholds and reduced the required computational times.", "title": "" }, { "docid": "e462c0cfc1af657cb012850de1b7b717", "text": "ASSOCIATIONS BETWEEN PHYSICAL ACTIVITY, PHYSICAL FITNESS, AND FALLS RISK IN HEALTHY OLDER INDIVIDUALS Christopher Deane Vaughan Old Dominion University, 2016 Chair: Dr. John David Branch Objective: The purpose of this study was to assess relationships between objectively measured physical activity, physical fitness, and the risk of falling. Methods: A total of n=29 subjects completed the study, n=15 male and n=14 female age (mean±SD)= 70± 4 and 71±3 years, respectively. 
In a single testing session, subjects performed pre-post evaluations of falls risk (Short-form PPA) with a 6-minute walking intervention between the assessments. The falls risk assessment included tests of balance, knee extensor strength, proprioception, reaction time, and visual contrast. The sub-maximal effort 6-minute walking task served as an indirect assessment of cardiorespiratory fitness. Subjects traversed a walking mat to assess for variation in gait parameters during the walking task. Additional center of pressure (COP) balance measures were collected via forceplate during the falls risk assessments. Subjects completed a Modified Falls Efficacy Scale (MFES) falls confidence survey. Subjects’ falls histories were also collected. Subjects wore hip mounted accelerometers for a 7-day period to assess time spent in moderate to vigorous physical activity (MVPA). Results: Males had greater body mass and height than females (p=0.001, p=0.001). Males had a lower falls risk than females at baseline (p=0.043) and post-walk (p=0.031). MFES scores were similar among all subjects (Median = 10). Falls history reporting revealed fallers (n=8) and non-fallers (n=21). No significant relationships were found between the main outcome measures of MVPA, cardiorespiratory fitness, or falls risk. Fallers had higher knee extensor strength than non-fallers at baseline (p=0.028) and post-walk (p=0.011). Though not significant (p=0.306), fallers spent 90 minutes more time in MVPA than non-fallers (427.8±244.6 min versus 335.7±199.5 min). Variations in gait and COP variables were not significant. Conclusions: This study found no apparent relationship between objectively measured physical activity, indirectly measured cardiorespiratory fitness, and falls risk.", "title": "" }, { "docid": "f5422fcf0046b189e3d6e78f98b98202", "text": "Muscle contraction during exercise, whether resistive or endurance in nature, has profound effects on muscle protein turnover that can persist for up to 72 h. It is well established that feeding during the postexercise period is required to bring about a positive net protein balance (muscle protein synthesis - muscle protein breakdown). There is mounting evidence that the timing of ingestion and the protein source during recovery independently regulate the protein synthetic response and influence the extent of muscle hypertrophy. Minor differences in muscle protein turnover appear to exist in young men and women; however, with aging there may be more substantial sex-based differences in response to both feeding and resistance exercise. The recognition of anabolic signaling pathways and molecules is also enhancing our understanding of the regulation of protein turnover following exercise perturbations. In this review we summarize the current understanding of muscle protein turnover in response to exercise and feeding and highlight potential sex-based dimorphisms. Furthermore, we examine the underlying anabolic signaling pathways and molecules that regulate these processes.", "title": "" }, { "docid": "3798374ed33c3d3255dcc7d7c78507c2", "text": "Cloud computing is characterized by shared infrastructure and a decoupling between its operators and tenants. These two characteristics impose new challenges to database applications hosted in the cloud, namely: (i) how to price database services, (ii) how to isolate database tenants, and (iii) how to optimize database performance on this shared infrastructure.
We argue that today’s solutions, based on virtual-machines, do not properly address these challenges. We hint at new research directions to tackle these problems and argue that these three challenges share a common need for accurate predictive models of performance and resource utilization. We present initial predictive models for the important class of OLTP/Web workloads and show how they can be used to address these challenges.", "title": "" }, { "docid": "22160219ffa40e4e42f1519fe25ecb6a", "text": "We propose a new prior distribution for classical (non-hierarchical) logistic regression models, constructed by first scaling all nonbinary variables to have mean 0 and standard deviation 0.5, and then placing independent Student-t prior distributions on the coefficients. As a default choice, we recommend the Cauchy distribution with center 0 and scale 2.5, which in the simplest setting is a longer-tailed version of the distribution attained by assuming one-half additional success and one-half additional failure in a logistic regression. Cross-validation on a corpus of datasets shows the Cauchy class of prior distributions to outperform existing implementations of Gaussian and Laplace priors. We recommend this prior distribution as a default choice for routine applied use. It has the advantage of always giving answers, even when there is complete separation in logistic regression (a common problem, even when the sample size is large and the number of predictors is small) and also automatically applying more shrinkage to higherorder interactions. This can be useful in routine data analysis as well as in automated procedures such as chained equations for missing-data imputation. We implement a procedure to fit generalized linear models in R with the Student-t prior distribution by incorporating an approximate EM algorithm into the usual iteratively weighted least squares. We illustrate with several applications, including a series of logistic regressions predicting voting preferences, a small bioassay experiment, and an imputation model for a public health data set.", "title": "" } ]
scidocsrr
6570b5932fa1ab575920ce7fbf745d77
Genital Beautification: A Concept That Offers More Than Reduction of the Labia Minora
[ { "docid": "da93678f1b1070d68cfcbc9b7f6f88fe", "text": "Dermal fat grafts have been utilized in plastic surgery for both reconstructive and aesthetic purposes of the face, breast, and body. There are multiple reports in the literature on the male phallus augmentation with the use of dermal fat grafts. Few reports describe female genitalia aesthetic surgery, in particular rejuvenation of the labia majora. In this report we describe an indication and use of autologous dermal fat graft for labia majora augmentation in a patient with loss of tone and volume in the labia majora. We found that this procedure is an option for labia majora augmentation and provides a stable result in volume-restoration.", "title": "" } ]
[ { "docid": "7c593a9fc4de5beb89022f7d438ffcb8", "text": "The design of a low power low drop out voltage regulator with no off-chip capacitor and fast transient responses is presented in this paper. The LDO regulator uses a combination of a low power operational trans-conductance amplifier and comparators to drive the gate of the PMOS pass element. The amplifier ensures stability and accurate setting of the output voltage in addition to power supply rejection. The comparators ensure fast response of the regulator to any load or line transients. A settling time of less than 200ns is achieved in response to a load transient step of 50mA with a rise time of 100ns with an output voltage spike of less than 200mV at an output voltage of 3.25 V. A line transient step of 1V with a rise time of 100ns results also in a settling time of less than 400ns with a voltage spike of less than 100mV when the output voltage is 2.6V. The regulator is fabricated using a standard 0.35μm CMOS process and consumes a quiescent current of only 26 μA.", "title": "" }, { "docid": "7aeb10faf8590ed9f4054bafcd4dee0c", "text": "Concept, design, and measurement results of a frequency-modulated continuous-wave radar sensor in low-temperature co-fired ceramics (LTCC) technology is presented in this paper. The sensor operates in the frequency band between 77–81 GHz. As a key component of the system, wideband microstrip grid array antennas with a broadside beam are presented and discussed. The combination with a highly integrated feeding network and a four-channel transceiver chip based on SiGe technology results in a very compact LTCC RF frontend (23 mm <formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\times$</tex></formula> 23 mm). To verify the feasibility of the concept, first radar measurement results are presented.", "title": "" }, { "docid": "93f7a6057bf0f446152daf3233d000aa", "text": "Given a stream of depth images with a known cuboid reference object present in the scene, we propose a novel approach for accurate camera tracking and volumetric surface reconstruction in real-time. Our contribution in this paper is threefold: (a) utilizing a priori knowledge of the precisely manufactured cuboid reference object, we keep drift-free camera tracking without explicit global optimization; (b) we improve the fineness of the volumetric surface representation by proposing a prediction-corrected data fusion strategy rather than a simple moving average, which enables accurate reconstruction of high-frequency details such as the sharp edges of objects and geometries of high curvature; (c) we introduce a benchmark dataset CU3D that contains both synthetic and real-world scanning sequences with ground-truth camera trajectories and surface models for the quantitative evaluation of 3D reconstruction algorithms. We test our algorithm on our dataset and demonstrate its accuracy compared with other state-of-the-art algorithms. We release both our dataset and code as open-source (https://github.com/zhangxaochen/CuFusion) for other researchers to reproduce and verify our results.", "title": "" }, { "docid": "e2f57214cd2ec7b109563d60d354a70f", "text": "Despite the recent successes in machine learning, there remain many open challenges. Arguably one of the most important and interesting open research problems is that of data efficiency. Supervised machine learning models, and especially deep neural networks, are notoriously data hungry, often requiring millions of labeled examples to achieve desired performance. 
However, labeled data is often expensive or difficult to obtain, hindering advances in interesting and important domains. What avenues might we pursue to increase the data efficiency of machine learning models? One approach is semi-supervised learning. In contrast to labeled data, unlabeled data is often easy and inexpensive to obtain. Semi-supervised learning is concerned with leveraging unlabeled data to improve performance in supervised tasks. Another approach is active learning: in the presence of a labeling mechanism (oracle), how can we choose examples to be labeled in a way that maximizes the gain in performance? In this thesis we are concerned with developing models that enable us to improve data efficiency of powerful models by jointly pursuing both of these approaches. Deep generative models parameterized by neural networks have emerged recently as powerful and flexible tools for unsupervised learning. They are especially useful for modeling high-dimensional and complex data. We propose a deep generative model with a discriminative component. By including the discriminative component in the model, after training is complete the model is used for classification rather than variational approximations. The model further includes stochastic inputs of arbitrary dimension for increased flexibility and expressiveness. We leverage the stochastic layer to learn representations of the data which naturally accommodate semi-supervised learning. We develop an efficient Gibbs sampling procedure to marginalize the stochastic inputs while inferring labels. We extend the model to include uncertainty in the weights, allowing us to explicitly capture model uncertainty, and demonstrate how this allows us to use the model for active learning as well as semi-supervised learning. I would like to dedicate this thesis to my loving wife, parents, and sister . . .", "title": "" }, { "docid": "d5b4ba8e3491f4759924be4ceee8f418", "text": "Researchers and practitioners have long regarded procrastination as a self-handicapping and dysfunctional behavior. In the present study, the authors proposed that not all procrastination behaviors either are harmful or lead to negative consequences. Specifically, the authors differentiated two types of procrastinators: passive procrastinators versus active procrastinators. Passive procrastinators are procrastinators in the traditional sense. They are paralyzed by their indecision to act and fail to complete tasks on time. In contrast, active procrastinators are a \"positive\" type of procrastinator. They prefer to work under pressure, and they make deliberate decisions to procrastinate. The present results showed that although active procrastinators procrastinate to the same degree as passive procrastinators, they are more similar to nonprocrastinators than to passive procrastinators in terms of purposive use of time, control of time, self-efficacy belief, coping styles, and outcomes including academic performance. The present findings offer a more sophisticated understanding of procrastination behavior and indicate a need to reevaluate its implications for outcomes of individuals.", "title": "" }, { "docid": "4cef84bb3a1ff5f5ed64a4149d501f57", "text": "In the future, intelligent machines will replace or enhance human capabilities in many areas. Artificial intelligence is the intelligence exhibited by machines or software. It is the subfield of computer science. 
Artificial Intelligence is becoming a popular field in computer science as it has enhanced the human life in many areas. Artificial intelligence in the last two decades has greatly improved performance of the manufacturing and service systems. Study in the area of artificial intelligence has given rise to the rapidly growing technology known as expert system. Application areas of Artificial Intelligence is having a huge impact on various fields of life as expert system is widely used these days to solve the complex problems in various areas as science, engineering, business, medicine, weather forecasting. The areas employing the technology of Artificial Intelligence have seen an increase in the quality and efficiency. This paper gives an overview of this technology and the application areas of this technology. This paper will also explore the current use of Artificial Intelligence technologies in the PSS design to damp the power system oscillations caused by interruptions, in Network Intrusion for protecting computer and communication networks from intruders, in the medical areamedicine, to improve hospital inpatient care, for medical image classification, in the accounting databases to mitigate the problems of it and in the computer games.", "title": "" }, { "docid": "3c135cae8654812b2a4f805cec78132e", "text": "Binarized Neural Network (BNN) removes bitwidth redundancy in classical CNN by using a single bit (-1/+1) for network parameters and intermediate representations, which has greatly reduced the off-chip data transfer and storage overhead. However, a large amount of computation redundancy still exists in BNN inference. By analyzing local properties of images and the learned BNN kernel weights, we observe an average of  ~78% input similarity and  ~59% weight similarity among weight kernels, measured by our proposed metric in common network architectures. Thus there does exist redundancy that can be exploited to further reduce the amount of on-chip computations.\n Motivated by the observation, in this paper, we proposed two types of fast and energy-efficient architectures for BNN inference. We also provide analysis and insights to pick the better strategy of these two for different datasets and network models. By reusing the results from previous computation, much cycles for data buffer access and computations can be skipped. By experiments, we demonstrate that 80% of the computation and 40% of the buffer access can be skipped by exploiting BNN similarity. Thus, our design can achieve 17% reduction in total power consumption, 54% reduction in on-chip power consumption and 2.4× maximum speedup, compared to the baseline without applying our reuse technique. Our design also shows 1.9× more area-efficiency compared to state-of-the-art BNN inference design. We believe our deployment of BNN on FPGA leads to a promising future of running deep learning models on mobile devices.", "title": "" }, { "docid": "2a17f4c307fac8491410295640b5133c", "text": "This work adopts a standard Denavit–Hartenberg method to model a PUMA 560 spot welding robot as the object of study. The forward and inverse kinematics solutions are then analyzed. To address the shortcomings of the ant colony algorithm, factors from the particle swarm optimization and the genetic algorithm are introduced into this algorithm. Subsequently, the resulting hybrid algorithm and the ant colony algorithm are used to conduct trajectory planning in the shortest path. 
Experimental data and simulation results show that the hybrid algorithm is significantly better in terms of initial solution speed and optimal solution quality than the ant colony algorithm. The feasibility and effectiveness of the hybrid algorithm in the trajectory planning of a robot are thus verified.", "title": "" }, { "docid": "fcf8649ff7c2972e6ef73f837a3d3f4d", "text": "The kitchen environment is one of the scenarios in the home where users can benefit from Ambient Assisted Living (AAL) applications. Moreover, it is the place where old people suffer from most domestic injuries. This paper presents a novel design, implementation and assessment of a Smart Kitchen which provides Ambient Assisted Living services; a smart environment that increases elderly and disabled people's autonomy in their kitchen-related activities through context and user awareness, appropriate user interaction and artificial intelligence. It is based on a modular architecture which integrates a wide variety of home technology (household appliances, sensors, user interfaces, etc.) and associated communication standards and media (power line, radio frequency, infrared and cabled). Its software architecture is based on the Open Services Gateway initiative (OSGi), which allows building a complex system composed of small modules, each one providing the specific functionalities required, and can be easily scaled to meet our needs. The system has been evaluated by a large number of real users (63) and carers (31) in two living labs in Spain and the UK. Results show a large potential of system functionalities combined with good usability and physical, sensory and cognitive accessibility.", "title": "" }, { "docid": "0c3ba78197c6d0f605b3b54149908705", "text": "A novel design of solid phase microextraction fiber containing carbon nanotube reinforced sol-gel which was protected by polypropylene hollow fiber (HF-SPME) was developed for pre-concentration and determination of BTEX in environmental waste water and human hair samples. The method validation was included and satisfying results with high pre-concentration factors were obtained. In the present study an orthogonal array experimental design (OAD) procedure with an OA(16) (4(4)) matrix was applied to study the effect of four factors influencing the HF-SPME method efficiency: stirring speed, volume of adsorption organic solvent, extraction and desorption time of the sample solution, by which the effect of each factor was estimated using individual contributions as response functions in the screening process. Analysis of variance (ANOVA) was employed for estimating the main significant factors and their percentage contributions in extraction. Calibration curves were plotted using ten spiking levels of BTEX in the concentration ranges of 0.02-30,000 ng/mL with correlation coefficients (r) of 0.989-0.9991 for the analytes. Under the optimized extraction conditions, the method showed good linearity (0.3-20,000 ng/L), repeatability, low limits of detection (0.49-0.7 ng/L) and excellent pre-concentration factors (185-1872). The best estimated conditions were then applied for the analysis of BTEX compounds in the real samples.", "title": "" }, { "docid": "9ba3c67136d573c4a10b133a2391d8bc", "text": "Modern text collections often contain large documents that span several subject areas. Such documents are problematic for relevance feedback since inappropriate terms can easily be chosen. This study explores the highly effective approach of feeding back passages of large documents.
A less-expensive method that discards long documents is also reviewed and found to be effective if there are enough relevant documents. A hybrid approach that feeds back short documents and passages of long documents may be the best compromise.", "title": "" }, { "docid": "3bb63838b4795c62b2c8e123daec2d7f", "text": "To compare the quality of helical computed tomography (CT) images of the pelvis in patients with metal hip prostheses reconstructed using adaptive iterative dose reduction (AIDR) and AIDR with single-energy metal artifact reduction (SEMAR-A). This retrospective study included 28 patients (mean age, 64.6 ± 11.4 years; 6 men and 22 women). CT images were reconstructed using AIDR and SEMAR-A. Two radiologists evaluated the extent of metal artifacts and the depiction of structures in the pelvic region and looked for mass lesions. A radiologist placed a region of interest within the bladder and recorded CT attenuation. The metal artifacts were significantly reduced in SEMAR-A as compared to AIDR (p < 0.0001). The depictions of the bladder, ureter, prostate/uterus, rectum, and pelvic sidewall were significantly better with SEMAR-A than with AIDR (p < 0.02). All lesions were diagnosed with SEMAR-A, while some were not diagnosed with AIDR. The median and interquartile range (in parentheses) of CT attenuation within the bladder for AIDR were −34.0 (−46.6 to −15.0) Hounsfield units (HU) and were more variable than those seen for SEMAR-A [5.4 (−1.3 to 11.1)] HU (p = 0.033). In comparison with AIDR, SEMAR-A provided pelvic CT images of significantly better quality for patients with metal hip prostheses.", "title": "" }, { "docid": "485f2cb9c34afe5fc19e2c4cc0a1ce54", "text": "INTRODUCTION\nTo report our technique and experience in using a minimally invasive approach for aesthetic lateral canthoplasty.\n\n\nMETHODS\nRetrospective analysis of patients undergoing lateral canthoplasty through a minimally invasive, upper eyelid crease incision approach at Jules Stein Eye Institute by one surgeon (R.A.G.) between 2005 and 2008. Concomitant surgical procedures were recorded. Preoperative and postoperative photographs at the longest follow-up visit were analyzed and graded for functional and cosmetic outcomes.\n\n\nRESULTS\nA total of 600 patients (1,050 eyelids) underwent successful lateral canthoplasty through a small incision in the upper eyelid crease to correct lower eyelid malposition (laxity, ectropion, entropion, retraction) and/or lateral canthal dystopia, encompassing 806 reconstructive and 244 cosmetic lateral canthoplasties. There were 260 males and 340 females, with mean age of 55 years old (range, 4-92 years old). Minimum follow-up time was 3 months (mean, 6 months; maximum, 6 years). Complications were rare and minor, including transient postoperative chemosis. Eighteen patients underwent reoperation in the following 2 years for recurrent lower eyelid malposition and/or lateral canthal deformity.\n\n\nCONCLUSIONS\nLateral canthoplasty through a minimally invasive upper eyelid crease incision and resuspension technique can effectively address lower eyelid laxity and/or dystopia, resulting in an aesthetic lateral canthus.", "title": "" }, { "docid": "db2937f923ef0a58e993729a05e6fb91", "text": "The visual attention (VA) span is defined as the amount of distinct visual elements which can be processed in parallel in a multi-element array. 
Both recent empirical data and theoretical accounts suggest that a VA span deficit might contribute to developmental dyslexia, independently of a phonological disorder. In this study, this hypothesis was assessed in two large samples of French and British dyslexic children whose performance was compared to that of chronological-age matched control children. Results of the French study show that the VA span capacities account for a substantial amount of unique variance in reading, as do phonological skills. The British study replicates this finding and further reveals that the contribution of the VA span to reading performance remains even after controlling IQ, verbal fluency, vocabulary and single letter identification skills, in addition to phoneme awareness. In both studies, most dyslexic children exhibit a selective phonological or VA span disorder. Overall, these findings support a multi-factorial view of developmental dyslexia. In many cases, developmental reading disorders do not seem to be due to phonological disorders. We propose that a VA span deficit is a likely alternative underlying cognitive deficit in dyslexia.", "title": "" }, { "docid": "63e3be30835fd8f544adbff7f23e13ab", "text": "Deaths due to plastic bag suffocation or plastic bag asphyxia are not reported in Malaysia. In the West many suicides by plastic bag asphyxia, particularly in the elderly and those who are chronically and terminally ill, have been reported. Accidental deaths too are not uncommon in the West, both among small children who play with shopping bags and adolescents who are solvent abusers. Another well-known but not so common form of accidental death from plastic bag asphyxia is sexual asphyxia, which is mostly seen among adult males. Homicide by plastic bag asphyxia too is reported in the West and the victims are invariably infants or adults who are frail or terminally ill and who cannot struggle. Two deaths due to plastic bag asphyxia are presented. Both the autopsies were performed at the University Hospital Mortuary, Kuala Lumpur. Both victims were 50-year old married Chinese males. One death was diagnosed as suicide and the other as sexual asphyxia. Sexual asphyxia is generally believed to be a problem associated exclusively with the West. Specific autopsy findings are often absent in deaths due to plastic bag asphyxia and therefore such deaths could be missed when some interested parties have altered the scene and most importantly have removed the plastic bag. A visit to the scene of death is invariably useful.", "title": "" }, { "docid": "748d71e6832288cd0120400d6069bf50", "text": "This paper introduces the matrix formalism of optics as a useful approach to the area of “light fields”. It is capable of reproducing old results in Integral Photography, as well as generating new ones. Furthermore, we point out the equivalence between radiance density in optical phase space and the light field. We also show that linear transforms in matrix optics are applicable to light field rendering, and we extend them to affine transforms, which are of special importance to designing integral view cameras. Our main goal is to provide solutions to the problem of capturing the 4D light field with a 2D image sensor. From this perspective we present a unified affine optics view on all existing integral / light field cameras. Using this framework, different camera designs can be produced. Three new cameras are proposed. 
Figure 1: Integral view of a seagull", "title": "" }, { "docid": "c55de58c07352373570ec7d46c5df03d", "text": "Understanding human-object interactions is critical for extracting meaning from everyday visual scenes and requires integrating complex relationships between human pose and object identity into a new percept. To understand how the brain builds these representations, we conducted 2 fMRI experiments in which subjects viewed humans interacting with objects, noninteracting human-object pairs, and isolated humans and objects. A number of visual regions process features of human-object interactions, including object identity information in the lateral occipital complex (LOC) and parahippocampal place area (PPA), and human pose information in the extrastriate body area (EBA) and posterior superior temporal sulcus (pSTS). Representations of human-object interactions in some regions, such as the posterior PPA (retinotopic maps PHC1 and PHC2) are well predicted by a simple linear combination of the response to object and pose information. Other regions, however, especially pSTS, exhibit representations for human-object interaction categories that are not predicted by their individual components, indicating that they encode human-object interactions as more than the sum of their parts. These results reveal the distributed networks underlying the emergent representation of human-object interactions necessary for social perception.", "title": "" }, { "docid": "53a49412d75190357df5d159b11843f0", "text": "Perception and reasoning are basic human abilities that are seamlessly connected as part of human intelligence. However, in current machine learning systems, the perception and reasoning modules are incompatible. Tasks requiring joint perception and reasoning ability are difficult to accomplish autonomously and still demand human intervention. Inspired by the way language experts decoded Mayan scripts by joining two abilities in an abductive manner, this paper proposes the abductive learning framework. The framework learns perception and reasoning simultaneously with the help of a trial-and-error abductive process. We present the Neural-Logical Machine as an implementation of this novel learning framework. We demonstrate thatusing human-like abductive learningthe machine learns from a small set of simple hand-written equations and then generalizes well to complex equations, a feat that is beyond the capability of state-of-the-art neural network models. The abductive learning framework explores a new direction for approaching human-level learning ability.", "title": "" }, { "docid": "e5f2101e7937c61a4d6b11d4525a7ed8", "text": "This article reviews an emerging field that aims for autonomous reinforcement learning (RL) directly on sensor-observations. Straightforward end-to-end RL has recently shown remarkable success, but relies on large amounts of samples. As this is not feasible in robotics, we review two approaches to learn intermediate state representations from previous experiences: deep auto-encoders and slow-feature analysis. We analyze theoretical properties of the representations and point to potential improvements.", "title": "" }, { "docid": "c408992e89867e583b8232b18f37edf0", "text": "Fusion of information gathered from multiple sources is essential to build a comprehensive situation picture for autonomous ground vehicles. In this paper, an approach which performs scene parsing and data fusion for a 3D-LIDAR scanner (Velodyne HDL-64E) and a video camera is described. 
First of all, a geometry segmentation algorithm is proposed for detection of obstacles and ground areas from data collected by the Velodyne scanner. Then, corresponding image collected by the video camera is classified patch by patch into more detailed categories. After that, parsing result of each frame is obtained by fusing result of Velodyne data and that of image using the fuzzy logic inference framework. Finally, parsing results of consecutive frames are smoothed by the Markov random field based temporal fusion method. The proposed approach has been evaluated with datasets collected by our autonomous ground vehicle testbed in both rural and urban areas. The fused results are more reliable than that acquired via analysis of only images or Velodyne data. 2013 Elsevier Inc. All rights reserved.", "title": "" } ]
scidocsrr
c80ddb45bcff0f43fdb0b6a7e4659462
Extinction-Based Shading and Illumination in GPU Volume Ray-Casting
[ { "docid": "bb98b9a825a4c7d0f3d4b06fafb8ff37", "text": "The tremendous evolution of programmable graphics hardware has made high-quality real-time volume graphics a reality. In addition to the traditional application of rendering volume data in scientific visualization, the interest in applying these techniques for real-time rendering of atmospheric phenomena and participating media such as fire, smoke, and clouds is growing rapidly. This course covers both applications in scientific visualization, e.g., medical volume data, and real-time rendering, such as advanced effects and illumination in computer games, in detail. Course participants will learn techniques for harnessing the power of consumer graphics hardware and high-level shading languages for real-time rendering of volumetric data and effects. Beginning with basic texture-based approaches including hardware ray casting, the algorithms are improved and expanded incrementally, covering local and global illumination, scattering, pre-integration, implicit surfaces and non-polygonal isosurfaces, transfer function design, volume animation and deformation, dealing with large volumes, high-quality volume clipping, rendering segmented volumes, higher-order filtering, and non-photorealistic volume rendering. Course participants are provided with documented source code covering details usually omitted in publications.", "title": "" } ]
[ { "docid": "b825426604420620e1bba43c0f45115e", "text": "Taxonomies are the backbone of many structured, semantic knowledge resources. Recent works for extracting taxonomic relations from text focused on collecting lexical-syntactic patterns to extract the taxonomic relations by matching the patterns to text. These approaches, however, often show low coverage due to the lack of contextual analysis across sentences. To address this issue, we propose a novel approach that collectively utilizes contextual information of terms in syntactic structures such that if the set of contexts of a term includes most of contexts of another term, a subsumption relation between the two terms is inferred. We apply this method to the task of taxonomy construction from scratch, where we introduce another novel graph-based algorithm for taxonomic structure induction. Our experiment results show that the proposed method is well complementary with previous methods of linguistic pattern matching and significantly improves recall and thus F-measure.", "title": "" }, { "docid": "6e2efc26a47be54ff2bffd1c01e54ca5", "text": "In recent years, cyber attacks have caused substantial financial losses and been able to stop fundamental public services. Among the serious attacks, Advanced Persistent Threat (APT) has emerged as a big challenge to the cyber security hitting selected companies and organisations. The main objectives of APT are data exfiltration and intelligence appropriation. As part of the APT life cycle, an attacker creates a Point of Entry (PoE) to the target network. This is usually achieved by installing malware on the targeted machine to leave a back-door open for future access. A common technique employed to breach into the network, which involves the use of social engineering, is the spear phishing email. These phishing emails may contain disguised executable files. This paper presents the disguised executable file detection (DeFD) module, which aims at detecting disguised exe files transferred over the network connections. The detection is based on a comparison between the MIME type of the transferred file and the file name extension. This module was experimentally evaluated and the results show a successful detection of disguised executable files.", "title": "" }, { "docid": "158de7fe10f35a78e4b62d2bc46d9b0d", "text": "The Internet of Things promises ubiquitous connectivity of everything everywhere, which represents the biggest technology trend in the years to come. It is expected that by 2020 over 25 billion devices will be connected to cellular networks; far beyond the number of devices in current wireless networks. Machine-to-machine communications aims to provide the communication infrastructure for enabling IoT by facilitating the billions of multi-role devices to communicate with each other and with the underlying data transport infrastructure without, or with little, human intervention. Providing this infrastructure will require a dramatic shift from the current protocols mostly designed for human-to-human applications. This article reviews recent 3GPP solutions for enabling massive cellular IoT and investigates the random access strategies for M2M communications, which shows that cellular networks must evolve to handle the new ways in which devices will connect and communicate with the system. 
A massive non-orthogonal multiple access technique is then presented as a promising solution to support a massive number of IoT devices in cellular networks, where we also identify its practical challenges and future research directions.", "title": "" }, { "docid": "e10886264acb1698b36c4d04cf2d9df6", "text": "Pattern discovery from time series is of fundamental importance. Particularly when the domain expert derived patterns do not exist or are not complete, an algorithm to discover specific patterns or shapes automatically from the time series data is necessary. Such an algorithm is noteworthy in that it does not assume prior knowledge of the number of interesting structures, nor does it require an exhaustive explanation of the patterns being described. In this paper, a clustering approach is proposed for pattern discovery from time series. In view of its popularity and superior clustering performance, the self-organizing map (SOM) was adopted for pattern discovery in temporal data sequences. It is a special type of clustering algorithm that imposes a topological structure on the data. To prepare for the SOM algorithm, data sequences are segmented from the numerical time series using a continuous sliding window. Similar temporal patterns are then grouped together using SOM into clusters, which may subsequently be used to represent different structures of the data or temporal patterns. Attempts have been made to tackle the problem of representing patterns in a multi-resolution manner. With the increase in the number of data points in the patterns (the length of patterns), the time needed for the discovery process increases exponentially. To address this problem, we propose to compress the input patterns by a perceptually important point (PIP) identification algorithm. The idea is to replace the original data segment by its PIPs so that the dimensionality of the input pattern can be reduced. Encouraging results are observed and reported for the application of the proposed methods to the time series collected from the Hong Kong stock market.", "title": "" }, { "docid": "cc6161fd350ac32537dc704cbfef2155", "text": "The contribution of cloud computing and mobile computing technologies leads to the newly emerging mobile cloud computing paradigm. Three major approaches have been proposed for mobile cloud applications: 1) extending the access to cloud services to mobile devices; 2) enabling mobile devices to work collaboratively as cloud resource providers; 3) augmenting the execution of mobile applications on portable devices using cloud resources. In this paper, we focus on the third approach in supporting mobile data stream applications. More specifically, we study how to optimize the computation partitioning of a data stream application between mobile and cloud to achieve maximum speed/throughput in processing the streaming data.\n To the best of our knowledge, it is the first work to study the partitioning problem for mobile data stream applications, where the optimization is placed on achieving high throughput of processing the streaming data rather than minimizing the makespan of executions as in other applications. We first propose a framework to provide runtime support for the dynamic computation partitioning and execution of the application.
Different from existing works, the framework not only allows the dynamic partitioning for a single user but also supports the sharing of computation instances among multiple users in the cloud to achieve efficient utilization of the underlying cloud resources. Meanwhile, the framework has better scalability because it is designed on the elastic cloud fabrics. Based on the framework, we design a genetic algorithm for optimal computation partition. Both numerical evaluation and real-world experiments have been performed, and the results show that the partitioned application can achieve at least two times better performance in terms of throughput than the application without partitioning.", "title": "" }, { "docid": "857e9430ebc5cf6aad2737a0ce10941e", "text": "Despite a long tradition of effectiveness in laboratory tests, normative messages have had mixed success in changing behavior in field contexts, with some studies showing boomerang effects. To test a theoretical account of this inconsistency, we conducted a field experiment in which normative messages were used to promote household energy conservation. As predicted, a descriptive normative message detailing average neighborhood usage produced either desirable energy savings or the undesirable boomerang effect, depending on whether households were already consuming at a low or high rate. Also as predicted, adding an injunctive message (conveying social approval or disapproval) eliminated the boomerang effect. The results offer an explanation for the mixed success of persuasive appeals based on social norms and suggest how such appeals should be properly crafted.", "title": "" }, { "docid": "adf530152b474c2b6147da07acf3d70d", "text": "One of the basic services in a distributed network is clock synchronization as it enables a palette of services, such as synchronized measurements, coordinated actions, or time-based access to a shared communication medium. The IEEE 1588 standard defines the Precision Time Protocol (PTP) and provides a framework to synchronize multiple slave clocks to a master by means of synchronization event messages. While PTP is capable of synchronization accuracies below 1 ns, practical synchronization approaches are hitting a new barrier due to asymmetric line delays. Although compensation fields for the asymmetry are present in PTP version 2008, no specific measures to estimate the asymmetry are defined in the standard. In this paper we present a solution to estimate the line asymmetry in 100Base-TX networks based on line swapping. This approach seems appealing for existing installations as most Ethernet PHYs have the line swapping feature built in, and it only delays the network startup, but does not alter the operation of the network. We show by an FPGA-based prototype system that our approach is able to improve the synchronization offset from more than 10 ns down to below 200 ps.", "title": "" }, { "docid": "b39904ccd087e59794cf2cc02e5d2644", "text": "In this paper, we propose a novel walking method for torque-controlled robots. The method is able to produce a wide range of speeds without requiring off-line optimizations and re-tuning of parameters. We use a quadratic whole-body optimization method running online which generates joint torques, given desired Cartesian accelerations of center of mass and feet. Using a dynamics model of the robot inside this optimizer, we ensure both compliance and tracking, required for fast locomotion.
We have designed a foot-step planner that uses a linear inverted pendulum as a simplified internal robot model. This planner is formulated as a quadratic convex problem which optimizes future steps of the robot. Fast libraries help us perform these calculations online. With very few parameters to tune and no perception, our method shows notable robustness against strong external pushes, relatively large terrain variations, internal noise, model errors and also delayed communication.", "title": "" }, { "docid": "6d84b1ef838301a4c0f9136dffb1082f", "text": "Power analysis is critical in research designs. This study discusses a simulation-based approach utilizing the likelihood ratio test to estimate the power of growth curve analysis. The power estimation is implemented through a set of SAS macros. The application of the SAS macros is demonstrated through several examples, including missing data and nonlinear growth trajectory situations. The results of the examples indicate that the power of growth curve analysis increases with the increase of sample sizes, effect sizes, and numbers of measurement occasions. In addition, missing data can reduce power. The SAS macros can be modified to accommodate more complex power analysis for both linear and nonlinear growth curve models.", "title": "" }, { "docid": "e5d474fc8c0d2c97cc798eda4f9c52dd", "text": "Gesture typing is an efficient input method for phones and tablets using continuous traces created by a pointed object (e.g., finger or stylus). Translating such continuous gestures into textual input is a challenging task as gesture inputs exhibit many features found in speech and handwriting such as high variability, co-articulation and elision. In this work, we address these challenges with a hybrid approach, combining a variant of recurrent networks, namely Long Short Term Memories [1] with conventional Finite State Transducer decoding [2]. Results using our approach show considerable improvement relative to a baseline shape-matching-based system, amounting to 4% and 22% absolute improvement respectively for small and large lexicon decoding on real datasets and 2% on a synthetic large scale dataset.", "title": "" }, { "docid": "ae527d90981c371c4807799802dbc5a8", "text": "We present our efforts to deploy mobile robots in office environments, focusing in particular on the challenge of planning a schedule for a robot to accomplish user-requested actions. We concretely aim to make our CoBot mobile robots available to execute navigational tasks requested by users, such as telepresence, and picking up and delivering messages or objects at different locations. We contribute an efficient web-based approach in which users can request and schedule the execution of specific tasks. The scheduling problem is converted to a mixed integer programming problem. The robot executes the scheduled tasks using a synthetic speech and touch-screen interface to interact with users, while allowing users to follow the task execution online. Our robot uses a robust Kinect-based safe navigation algorithm, moves fully autonomously without the need to be chaperoned by anyone, and is robust to the presence of moving humans, as well as non-trivial obstacles, such as legged chairs and tables. Our robots have already performed 15km of autonomous service tasks. Introduction and Related Work We envision a system in which autonomous mobile robots robustly perform service tasks in indoor environments.
The robots perform tasks which are requested by building residents over the web, such as delivering mail, fetching coffee, or guiding visitors. To fulfill the users’ requests, we must plan a schedule of when the robot will execute each task in accordance with the constraints specified by the users. Many efforts have used the web to access robots, including the early examples of the teleoperation of a robotic arm (Goldberg et al. 1995; Taylor and Trevelyan 1995) and interfacing with a mobile robot (e.g., (Simmons et al. 1997; Siegwart and Saucy 1999; Saucy and Mondada 2000; Schulz et al. 2000)), among others. The robot Xavier (Simmons et al. 1997; 2000) allowed users to make requests over the web for the robot to go to specific places, and other mobile robots soon followed (Siegwart and Saucy 1999; Grange, Fong, and Baur 2000; Saucy and Mondada 2000; Schulz et al. 2000). The RoboCup@Home initiative (Visser and Burkhard 2007) provides competition setups for indoor service autonomous robots, with an increasingly wide scope of challenges focusing on robot autonomy and verbal interaction with users. In this work, we present our architecture to effectively make a fully autonomous indoor service robot available to general users. We focus on the problem of planning a schedule for the robot, and present a mixed integer linear programming solution for planning a schedule. We ground our work on the CoBot-2 platform (designed and built by Michael Licitra, mlicitra@cmu.edu, as a scaled-up version of the CMDragons small-size soccer robots). CoBot-2 autonomously localizes and navigates in a multi-floor office environment while effectively avoiding obstacles (Biswas and Veloso 2010). The robot carries a variety of sensing and computing devices, including a camera, a Kinect depth camera, a Hokuyo LIDAR, a touch-screen tablet, microphones, speakers, and wireless communication. CoBot-2 executes tasks sent by users over the web, and we have devised a user-friendly web interface that allows users to specify tasks. Currently, the robot executes three types of tasks: a GoToRoom task where the robot visits a location, a Telepresence task where the robot goes to a location", "title": "" }, { "docid": "dfc5f6899ceeb886b4197f3b70b7f6e7", "text": "In cognitive radio networks, the secondary users can use the frequency bands when the primary users are not present. Hence secondary users need to constantly sense the presence of the primary users. When the primary users are detected, the secondary users have to vacate that channel. This makes the probability of detection important to the primary users as it indicates their protection level from secondary users. When the secondary users detect the presence of a primary user which is in fact not there, it is referred to as false alarm. The probability of false alarm is important to the secondary users as it determines their usage of an unoccupied channel. Depending on whose interest is of priority, either a targeted probability of detection or false alarm shall be set. After setting one of the probabilities, the other can be optimized through cooperative sensing.
In this paper, we show that cooperating all secondary users in the network does not necessarily achieve the optimum performance, but instead, it is achieved by cooperating a certain number of users with the highest primary user's signal to noise ratio. Computer simulations have shown that the Pd can increase from 92.03% to 99.88% and Pf can decrease from 6.02% to 0.06% in a network with 200 users.", "title": "" }, { "docid": "29aa73eec85fd015a3a5f4679209c2d4", "text": "We present a broadband waveguide ortho-mode transducer for the WR10 band that was designed for CLOVER, an astrophysics experiment aiming to characterize the polarization of the cosmic microwave background radiation. The design, based on a turnstile junction, was manufactured and then tested using a millimeter-wave vector network analyzer. The average measured return loss and isolation were -22 dB and -45 dB, respectively, across the entire WR10 band.", "title": "" }, { "docid": "d3562d7a7dafeb4971563d90e4c31fd6", "text": "A challenging problem in open information extraction and text mining is the learning of the selectional restrictions of semantic relations. We propose a minimally supervised bootstrapping algorithm that uses a single seed and a recursive lexico-syntactic pattern to learn the arguments and the supertypes of a diverse set of semantic relations from the Web. We evaluate the performance of our algorithm on multiple semantic relations expressed using “verb”, “noun”, and “verb prep” lexico-syntactic patterns. Human-based evaluation shows that the accuracy of the harvested information is about 90%. We also compare our results with an existing knowledge base to outline the similarities and differences of the granularity and diversity of the harvested knowledge.", "title": "" }, { "docid": "77af12d87cd5827f35d92968d1888162", "text": "Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.", "title": "" }, { "docid": "7a1aa7db367a45ff48fb31f1c04b7fef", "text": "As the size of software systems increases, the algorithms and data structures of the computation no longer constitute the major design problems. When systems are constructed from many components, the organization of the overall system—the software architecture—presents a new set of design problems.
This level of design has been addressed in a number of ways including informal diagrams and descriptive terms, module interconnection languages, templates and frameworks for systems that serve the needs of specific domains, and formal models of component integration mechanisms. In this paper we provide an introduction to the emerging field of software architecture. We begin by considering a number of common architectural styles upon which many systems are currently based and show how different styles can be combined in a single design. Then we present six case studies to illustrate how architectural representations can improve our understanding of complex software systems. Finally, we survey some of the outstanding problems in the field, and consider a few of the promising research directions.", "title": "" }, { "docid": "587f58f291732bfb8954e34564ba76fd", "text": "Blood pressure oscillometric waveforms behave as amplitude modulated nonlinear signals with frequency fluctuations. Their oscillating nature can be better analyzed by the digital Taylor-Fourier transform (DTFT), recently proposed for phasor estimation in oscillating power systems. Based on a relaxed signal model that includes Taylor components greater than zero, the DTFT is able to estimate not only the oscillation itself, as does the digital Fourier transform (DFT), but also its derivatives included in the signal model. In this paper, an oscillometric waveform is analyzed with the DTFT, and its zeroth and first oscillating harmonics are illustrated. The results show that the breathing activity can be separated from the cardiac one through the critical points of the first component, determined by the zero crossings of the amplitude derivatives estimated from the third Taylor order model. On the other hand, phase derivative estimates provide the fluctuations of the cardiac frequency and its derivative, new parameters that could improve the precision of the systolic and diastolic blood pressure assignment. The DTFT envelope estimates uniformly converge from K=3, substantially improving the harmonic separation of the DFT.", "title": "" }, { "docid": "8f79bd3f51ec54a3e86553514881088c", "text": "A time series is a sequence of observations collected over fixed sampling intervals. Several real-world dynamic processes can be modeled as a time series, such as stock price movements, exchange rates, temperatures, among others. As a special kind of data stream, a time series may present concept drift, which affects negatively time series analysis and forecasting. Explicit drift detection methods based on monitoring the time series features may provide a better understanding of how concepts evolve over time than methods based on monitoring the forecasting error of a base predictor. In this paper, we propose an online explicit drift detection method that identifies concept drifts in time series by monitoring time series features, called Feature Extraction for Explicit Concept Drift Detection (FEDD). Computational experiments showed that FEDD performed better than error-based approaches in several linear and nonlinear artificial time series with abrupt and gradual concept drifts.", "title": "" }, { "docid": "42584c93c05f512bc2f0bc8d73e90cc8", "text": "This sketch describes a new, flexible, natural, intuitive, volumetric modeling and animation technique that combines implicit functions with turbulence-based procedural techniques. 
A cloud is modeled to demonstrate its advantages.", "title": "" }, { "docid": "c7a985966fb6a04a712c67bf2580af61", "text": "There is much knowledge about Business models (BM) (Zott 2009, Zott 2010, Zott 2011, Fielt 2011, Teece 2010, Lindgren 2013) but very little knowledge and research about Business Model Ecosystems (BMES) – those “ecosystems” where the BMs really operate and work as value-adding mechanisms – objects or “species”. How are these BMES actually constructed? How do they function? What are their characteristics, and how can we really define these BMES? There is, until now, no accepted language developed for BMESs, nor is the term BMES generally accepted in the BM literature. This paper intends to commence the journey of building up such a language on the basis of case studies within the Wind Mill, Health, Agriculture, and Fair lines of BMES. A preliminary study of “AS IS” and “TO BE” BMs related to these BMES presents our first findings and preliminary understanding of BMES. The paper attempts to define what a BMES is and what the dimensions and components of a BMES are. In this context we build upon a comprehensive review of academic business and BM literature together with an analogy study of ecological ecosystems and ecosystem frameworks. We commence by exploring the origin of the terms business, BM, and ecosystem and then relate this to a proposed BMES framework and the concept of the Multi BM framework (Lindgren 2013).", "title": "" } ]
scidocsrr
b355c99c3db8c7848945e7f65029433c
Stream Compilation for Real-Time Embedded Multicore Systems
[ { "docid": "22d5dd06ca164aa0b012b0764d7c4440", "text": "As multicore architectures enter the mainstream, there is a pressing demand for high-level programming models that can effectively map to them. Stream programming offers an attractive way to expose coarse-grained parallelism, as streaming applications (image, video, DSP, etc.) are naturally represented by independent filters that communicate over explicit data channels.In this paper, we demonstrate an end-to-end stream compiler that attains robust multicore performance in the face of varying application characteristics. As benchmarks exhibit different amounts of task, data, and pipeline parallelism, we exploit all types of parallelism in a unified manner in order to achieve this generality. Our compiler, which maps from the StreamIt language to the 16-core Raw architecture, attains a 11.2x mean speedup over a single-core baseline, and a 1.84x speedup over our previous work.", "title": "" } ]
[ { "docid": "bde516c748dcd4a9b16ec8228220fa90", "text": "BACKGROUND\nFew studies on foreskin development and the practice of circumcision have been done in Chinese boys. This study aimed to determine the natural development process of foreskin in children.\n\n\nMETHODS\nA total of 10 421 boys aged 0 to 18 years were studied. The condition of foreskin was classified into type I (phimosis), type II (partial phimosis), type III (adhesion of prepuce), type IV (normal), and type V (circumcised). Other abnormalities of the genitalia were also determined.\n\n\nRESULTS\nThe incidence of a completely retractile foreskin increased from 0% at birth to 42.26% in adolescence; however, the phimosis rate decreased with age from 99.7% to 6.81%. Other abnormalities included web penis, concealed penis, cryptorchidism, hydrocele, micropenis, inguinal hernia, and hypospadias.\n\n\nCONCLUSIONS\nIncomplete separation of foreskin is common in children. Since it is a natural phenomenon to approach the adult condition until puberty, circumcision should be performed with cautions in children.", "title": "" }, { "docid": "edacac86802497e0e43c4a03bfd3b925", "text": "This paper presents a novel tightly-coupled monocular visual-inertial Simultaneous Localization and Mapping algorithm, which provides accurate and robust localization within the globally consistent map in real time on a standard CPU. This is achieved by firstly performing the visual-inertial extended kalman filter(EKF) to provide motion estimate at a high rate. However the filter becomes inconsistent due to the well known linearization issues. So we perform a keyframe-based visual-inertial bundle adjustment to improve the consistency and accuracy of the system. In addition, a loop closure detection and correction module is also added to eliminate the accumulated drift when revisiting an area. Finally, the optimized motion estimates and map are fed back to the EKF-based visual-inertial odometry module, thus the inconsistency and estimation error of the EKF estimator are reduced. In this way, the system can continuously provide reliable motion estimates for the long-term operation. The performance of the algorithm is validated on public datasets and real-world experiments, which proves the superiority of the proposed algorithm.", "title": "" }, { "docid": "a1f838270925e4769e15edfb37b281fd", "text": "Assess extensor carpi ulnaris (ECU) tendon position in the ulnar groove, determine the frequency of tendon “dislocation” with the forearm prone, neutral, and supine, and determine if an association exists between ulnar groove morphology and tendon position in asymptomatic volunteers. Axial proton density-weighted MR was performed through the distal radioulnar joint with the forearm prone, neutral, and supine in 38 asymptomatic wrists. The percentage of the tendon located beyond the ulnar-most border of the ulnar groove was recorded. Ulnar groove depth and length was measured and ECU tendon signal was assessed. 15.8 % of tendons remained within the groove in all forearm positions. In 76.3 %, the tendon translated medially from prone to supine. The tendon “dislocated” in 0, 10.5, and 39.5 % with the forearm prone, neutral and supine, respectively. In 7.9 % prone, 5.3 % neutral, and 10.5 % supine exams, the tendon was 51–99 % beyond the ulnar border of the ulnar groove. Mean ulnar groove depth and length were 1.6 and 7.7 mm, respectively, with an overall trend towards greater degrees of tendon translation in shorter, shallower ulnar grooves. 
The ECU tendon shifts in a medial direction when the forearm is supine; however, tendon “dislocation” has not been previously documented in asymptomatic volunteers. The ECU tendon medially translated or frankly dislocated from the ulnar groove in the majority of our asymptomatic volunteers, particularly when the forearm is supine. Overall greater degrees of tendon translation were observed in shorter and shallower ulnar grooves.", "title": "" }, { "docid": "55f80d7b459342a41bb36a5c0f6f7e0d", "text": "A smart phone is a handheld device that combines the functionality of a cellphone, a personal digital assistant (PDA) and other information appliances such as a music player. These devices can however be used in a crime and would have to be quickly analysed for evidence. This data is collected using either a forensic tool which resides on a PC or specialised hardware. This paper proposes the use of an on-phone forensic tool to collect the contents of the device and store it on removable storage. This approach requires less equipment and can retrieve the volatile information that resides on the phone such as running processes. The paper discusses the Symbian operating system, the evidence that is stored on the device and contrasts the approach with that followed by other tools.", "title": "" }, { "docid": "6d0ae5d9d8cff434cfaabe476d608cb6", "text": "Pulse compression involves the transmission of a long coded pulse and the processing of the received echo to obtain a relatively narrow pulse. The increased detection capability of a long-pulse radar system is achieved while retaining the range resolution capability of a narrow-pulse system. Several advantages are obtained. Transmission of long pulses permits a more efficient use of the average power capability of the radar. Generation of high peak power signals is avoided. The average power of the radar may be increased without increasing the pulse repetition frequency (PRF) and, hence, decreasing the radar's unambiguous range. An increased system resolving capability in doppler is also obtained as a result of the use of the long pulse. In addition, the radar is less vulnerable to interfering signals that differ from the coded transmitted signal. A long pulse may be generated from a narrow pulse. A narrow pulse contains a large number of frequency components with a precise phase relationship between them. If the relative phases are changed by a phase-distorting filter, the frequency components combine to produce a stretched, or expanded, pulse. This expanded pulse is the pulse that is transmitted. The received echo is processed in the receiver by a compression filter. The compression filter readjusts the relative phases of the frequency components so that a narrow or compressed pulse is again produced. The pulse compression ratio is the ratio of the width of the expanded pulse to that of the compressed pulse. The pulse compression ratio is also equal to the product of the time duration and the spectral bandwidth (time-bandwidth product) of the transmitted signal. A pulse compression radar is a practical implementation of a matched-filter system. The coded signal may be represented either as a frequency response H(ω) or as an impulse time response h(t) of a coding filter. In Fig. 10.1a, the coded signal is obtained by exciting the coding filter H(ω) with a unit impulse. The received signal is fed to the matched filter, whose frequency response is the complex conjugate H*(ω) of the coding filter.
The output of the matched-filter section is the compressed pulse, which is given by the inverse Fourier transform of the product of the signal spectrum H(ω) and the matched-filter response H*(ω):", "title": "" }, { "docid": "9feac5bf882c3e812755f87a21a59652", "text": "In 2013, George Church and his colleagues at Harvard University [2] in Cambridge, Massachusetts published \"RNA-Guided Human Genome Engineering via Cas 9,\" in which they detailed their use of RNA-guided Cas 9 to genetically modify genes [3] in human cells. Researchers use RNA-guided Cas 9 technology to modify the genetic information of organisms, DNA, by targeting specific sequences of DNA and subsequently replacing those targeted sequences with different DNA sequences. Church and his team used RNA-guided Cas 9 technology to edit the genetic information in human cells. Church and his colleagues also created a database that identified 190,000 unique guide RNAs for targeting almost half of the human genome [4] that codes for proteins. In \"RNA-Guided Human Genome Engineering via Cas 9,\" the authors demonstrated that RNA-guided Cas 9 was a robust and simple tool for genetic engineering, which has enabled scientists to more easily manipulate genomes for the study of biological processes and genetic diseases.", "title": "" }, { "docid": "6ba2aed7930d4c7fee807a0f4904ddc5", "text": "This work is in the biometric field and has as its goal the development of a fully automatic fingerprint identification system based on support vector machines. Promising results from first experiments pushed us to develop coding and recognition algorithms that are specifically associated with this system. In this context, work was devoted to developing algorithms for original image processing, minutiae and singular point localization, and Gabor filter coding, and to testing these algorithms on well-known databases: the FVC2004 databases and the FingerCell database. Performance evaluation has shown that the SVM achieves a good recognition rate compared with results obtained using a classic RBF neural network. Keywords—Biometry, Core and Delta points Detection, Gabor filters coding, Image processing and Support vector machine.", "title": "" }, { "docid": "b192bce1472ba8392af48982fde5da20", "text": "This paper presents a new setup and investigates neural model predictive and variable structure controllers designed to control the single-degree-of-freedom rotary manipulator actuated by shape memory alloy (SMA). SMAs are a special group of metallic materials and have been widely used in the robotic field because of their particular mechanical and electrical characteristics. SMA-actuated manipulators exhibit severe hysteresis, so the controllers should confront this problem and make the manipulator track the desired angle. In this paper, first, a mathematical model of the SMA-actuated robot manipulator is proposed and simulated. The controllers are then designed. The results demonstrate the high performance of the proposed controllers. Finally, stability analysis for the closed-loop system is derived based on the dissipativity theory.", "title": "" }, { "docid": "7c4444cba23e78f7159e336638947189", "text": "Certification of keys and attributes is in practice typically realized by a hierarchy of issuers. Revealing the full chain of issuers for certificate verification, however, can be a privacy issue since it can leak sensitive information about the issuer's organizational structure or about the certificate owner.
Delegatable anonymous credentials solve this problem and allow one to hide the full delegation (issuance) chain, providing privacy during both delegation and presentation of certificates. However, the existing delegatable credentials schemes are not efficient enough for practical use.\n In this paper, we present the first hierarchical (or delegatable) anonymous credential system that is practical. To this end, we provide a surprisingly simple ideal functionality for delegatable credentials and present a generic construction that we prove secure in the UC model. We then give a concrete instantiation using a recent pairing-based signature scheme by Groth and describe a number of optimizations and efficiency improvements that can be made when implementing our concrete scheme. The latter might be of independent interest for other pairing-based schemes as well. Finally, we report on an implementation of our scheme in the context of transaction authentication for blockchain, and provide concrete performance figures.", "title": "" }, { "docid": "58920ab34e358c13612d793bb3127c9f", "text": "We revisit the problem of interval estimation of a binomial proportion. The erratic behavior of the coverage probability of the standard Wald confidence interval has previously been remarked on in the literature (Blyth and Still, Agresti and Coull, Santner and others). We begin by showing that the chaotic coverage properties of the Wald interval are far more persistent than is appreciated. Furthermore, common textbook prescriptions regarding its safety are misleading and defective in several respects and cannot be trusted. This leads us to consideration of alternative intervals. A number of natural alternatives are presented, each with its motivation and context. Each interval is examined for its coverage probability and its length. Based on this analysis, we recommend the Wilson interval or the equal-tailed Jeffreys prior interval for small n and the interval suggested in Agresti and Coull for larger n. We also provide an additional frequentist justification for use of the Jeffreys interval.", "title": "" }, { "docid": "83b79fc95e90a303f29a44ef8730a93f", "text": "The Internet of Things (IoT) is a concept that envisions all objects around us as part of the internet. IoT coverage is very wide and includes a variety of objects like smart phones, tablets, digital cameras and sensors. Once all these devices are connected to each other, they enable more and more smart processes and services that support our basic needs, environment and health. Such an enormous number of devices connected to the internet provides many kinds of services. They also produce a huge amount of data and information. Cloud computing is one such model for on-demand access to a shared pool of configurable resources (computers, networks, servers, storage, applications, services, and software) that can be provisioned as infrastructure, software, and applications. Cloud-based platforms help to connect to the things around us so that we can access anything at any time and any place in a user-friendly manner using customized portals and built-in applications. Hence, the cloud acts as a front end to access IoT. Applications that interact with devices like sensors have special requirements of massive storage to store big data, huge computation power to enable real-time processing of the data and information, and a high-speed network to stream audio or video.
We also illustrate Sensing as a Service on the cloud using a few applications such as augmented reality, agriculture, and environment monitoring. Finally, we propose a prototype model for providing Sensing as a Service on the cloud.", "title": "" }, { "docid": "4d5e72046bfd44b9dc06dfd02812f2d6", "text": "Recommender systems in the last decade opened new interactive channels between buyers and sellers leading to new concepts involved in the marketing strategies and remarkable positive gains in online sales. Businesses intensively aim to maintain customer loyalty, satisfaction and retention; such strategic long-term values need to be addressed by recommender systems in a more tangible and deeper manner. The reason behind the considerable growth of recommender systems is their ability to track and analyze buyer behavior on a one-to-one basis and present items on the web that meet the buyer's preferences, which is the core concept of personalization. Personalization is always related to the relationship between item and user, leaving out the contextual information about this relationship. A user's buying decision is not only affected by the presented item, but also influenced by its price and the context in which the item is presented, such as time or place. Recently, a new system has been designed based on the concept of utilizing price personalization in the recommendation process. This system is newly coined the personalized pricing recommender system (PPRS). We propose a personalized pricing recommender system with a novel approach of calculating a consumer's online real value to dynamically determine his personalized discount, which can be generically applied to the normal price of any recommended item through predefined discount rules.", "title": "" }, { "docid": "f3820e94a204cd07b04e905a9b1e4834", "text": "Successful analysis of player skills in video games has important impacts on the process of enhancing player experience without undermining their continuous skill development. Moreover, player skill analysis becomes more intriguing in team-based video games because such a form of study can help discover useful factors in effective team formation. In this paper, we consider the problem of skill decomposition in MOBA (MultiPlayer Online Battle Arena) games, with the goal to understand what player skill factors are essential for the outcome of a game match. To understand the construct of MOBA player skills, we utilize various skill-based predictive models to decompose player skills into interpretative parts, the impact of which is assessed in statistical terms. We apply this analysis approach on two widely known MOBAs, namely League of Legends (LoL) and Defense of the Ancients 2 (DOTA2). The finding is that base skills of in-game avatars, base skills of players, and players’ champion-specific skills are three prominent skill components influencing LoL’s match outcomes, while those of DOTA2 are mainly impacted by in-game avatars’ base skills but not much by the other two.", "title": "" }, { "docid": "77df82cf7a9ddca2038433fa96a43cef", "text": "In this study, new algorithms are proposed for exposing forgeries in soccer images. We propose a new and automatic algorithm to extract the soccer field, field side, and the lines of the field in order to generate an image of real lines for forensic analysis.
By comparing the image of real lines and the lines in the input image, the forensic analyzer can easily detect line displacements of the soccer field. To expose forgery in the location of a player, we measure the height of the player using the geometric information in the soccer image and use the inconsistency of the measured height with the true height of the player as a clue for detecting the displacement of the player. In this study, two novel approaches are proposed to measure the height of a player. In the first approach, the intersections of white lines in the soccer field are employed for automatic calibration of the camera. We derive a closed-form solution to calculate different camera parameters. Then the calculated parameters of the camera are used to measure the height of a player using an interactive approach. In the second approach, the geometry of vanishing lines and the dimensions of soccer gate are used to measure a player height. Various experiments using real and synthetic soccer images show the efficiency of the proposed algorithms.", "title": "" }, { "docid": "538ae92edc07057ff0b40c9c657deba4", "text": "Test case prioritization techniques schedule test cases in an order that increases their effectiveness in meeting some performance goal. One performance goal, rate of fault detection, is a measure of how quickly faults are detected within the testing process; an improved rate of fault detection can provide faster feedback on the system under test, and let software engineers begin locating and correcting faults earlier than might otherwise be possible. In previous work, we reported the results of studies that showed that prioritization techniques can significantly improve rate of fault detection. Those studies, however, raised several additional questions: (1) can prioritization techniques be effective when aimed at specific modified versions; (2) what tradeoffs exist between fine granularity and coarse granularity prioritization techniques; (3) can the incorporation of measures of fault proneness into prioritization techniques improve their effectiveness? This paper reports the results of new experiments addressing these questions.", "title": "" }, { "docid": "429f27ab8039a9e720e9122f5b1e3bea", "text": "We give a new method for direct reconstruction of three-dimensional objects from a few electron micrographs taken at angles which need not exceed a range of 60 degrees. The method works for totally asymmetric objects, and requires little computer time or storage. It is also applicable to X-ray photography, and may greatly reduce the exposure compared to current methods of body-section radiography.", "title": "" }, { "docid": "1faaf86a7f43f6921d8c754fbc9ea0e1", "text": "Department of Mechanical Engineering, Politécnica/COPPE, Federal University of Rio de Janeiro, UFRJ, Cid. Universitaria, Cx. Postal: 68503, Rio de Janeiro, RJ, 21941-972, Brazil, helcio@mecanica.ufrj.br, colaco@ufrj.br, wellingtonuff@yahoo.com.br, hmassardf@gmail.com Department of Mechanical and Materials Engineering, Florida International University, 10555 West Flagler Street, EC 3462, Miami, Florida 33174, U.S.A., dulikrav@fiu.edu Department of Subsea Technology, Petrobras Research and Development Center – CENPES, Av. 
Horácio Macedo, 950, Cidade Universitária, Ilha do Fundão, 21941-915, Rio de Janeiro, RJ, Brazil, fvianna@petrobras.com.br Université de Toulouse ; Mines Albi ; CNRS; Centre RAPSODEE, Campus Jarlard, F-81013 Albi cedex 09, France, olivier.fudym@enstimac.fr", "title": "" }, { "docid": "fdfbcacd5a31038ecc025315c7483b5a", "text": "Most work on natural language question answering today focuses on answer selection: given a candidate list of sentences, determine which contains the answer. Although important, answer selection is only one stage in a standard end-to-end question answering pipeline. This paper explores the effectiveness of convolutional neural networks (CNNs) for answer selection in an end-to-end context using the standard TrecQA dataset. We observe that a simple idf-weighted word overlap algorithm forms a very strong baseline, and that despite substantial efforts by the community in applying deep learning to tackle answer selection, the gains are modest at best on this dataset. Furthermore, it is unclear if a CNN is more effective than the baseline in an end-to-end context based on standard retrieval metrics. To further explore this finding, we conducted a manual user evaluation, which confirms that answers from the CNN are detectably better than those from idf-weighted word overlap. This result suggests that users are sensitive to relatively small differences in answer selection quality.", "title": "" }, { "docid": "ba1d1f2cfeac871bf63164cb0b431af9", "text": "The motivation behind model-driven software development is to move the focus of work from programming to solution modeling. The model-driven approach has a potential to increase development productivity and quality by describing important aspects of a solution with more human-friendly abstractions and by generating common application fragments with templates. For this vision to become reality, software development tools need to automate the many tasks of model construction and transformation, including construction and transformation of models that can be round-trip engineered into code. In this article, we briefly examine different approaches to model transformation and offer recommendations on the desirable characteristics of a language for describing model transformations. In doing so, we are hoping to offer a measuring stick for judging the quality of future model transformation technologies.", "title": "" }, { "docid": "6a7bc6a1f1d9486304edac87635dc0e9", "text": "We exploit the falloff of acuity in the visual periphery to accelerate graphics computation by a factor of 5-6 on a desktop HD display (1920x1080). Our method tracks the user's gaze point and renders three image layers around it at progressively higher angular size but lower sampling rate. The three layers are then magnified to display resolution and smoothly composited. We develop a general and efficient antialiasing algorithm easily retrofitted into existing graphics code to minimize \"twinkling\" artifacts in the lower-resolution layers. A standard psychophysical model for acuity falloff assumes that minimum detectable angular size increases linearly as a function of eccentricity. Given the slope characterizing this falloff, we automatically compute layer sizes and sampling rates. The result looks like a full-resolution image but reduces the number of pixels shaded by a factor of 10-15.\n We performed a user study to validate these results.
It identifies two levels of foveation quality: a more conservative one in which users reported foveated rendering quality as equivalent to or better than non-foveated when directly shown both, and a more aggressive one in which users were unable to correctly label as increasing or decreasing a short quality progression relative to a high-quality foveated reference. Based on this user study, we obtain a slope value for the model of 1.32-1.65 arc minutes per degree of eccentricity. This allows us to predict two future advantages of foveated rendering: (1) bigger savings with larger, sharper displays than exist currently (e.g. 100 times speedup at a field of view of 70° and resolution matching foveal acuity), and (2) a roughly linear (rather than quadratic or worse) increase in rendering cost with increasing display field of view, for planar displays at a constant sharpness.", "title": "" } ]
scidocsrr
47fa8f965db462af83e698c285fb175e
A Review of Deep Learning Methods Applied on Load Forecasting
[ { "docid": "299fb603f3a87d88e7fe8eeb7cf73089", "text": "Interest in using artificial neural networks (ANNs) for forecasting has led to a tremendous surge in research activities in the past decade. While ANNs provide a great deal of promise, they also embody much uncertainty. Researchers to date are still not certain about the effect of key factors on forecasting performance of ANNs. This paper presents a state-of-the-art survey of ANN applications in forecasting. Our purpose is to provide (1) a synthesis of published research in this area, (2) insights on ANN modeling issues, and (3) the future research directions.  1998 Elsevier Science B.V.", "title": "" }, { "docid": "094892f048414f99a910373862011de8", "text": "Power forecasting of renewable energy power plants is a very active research field, as reliable information about the future power generation allow for a safe operation of the power grid and helps to minimize the operational costs of these energy sources. Deep Learning algorithms have shown to be very powerful in forecasting tasks, such as economic time series or speech recognition. Up to now, Deep Learning algorithms have only been applied sparsely for forecasting renewable energy power plants. By using different Deep Learning and Artificial Neural Network algorithms, such as Deep Belief Networks, AutoEncoder, and LSTM, we introduce these powerful algorithms in the field of renewable energy power forecasting. In our experiments, we used combinations of these algorithms to show their forecast strength compared to a standard MLP and a physical forecasting model in the forecasting the energy output of 21 solar power plants. Our results using Deep Learning algorithms show a superior forecasting performance compared to Artificial Neural Networks as well as other reference models such as physical models.", "title": "" }, { "docid": "8e04bf942fb88dfb1636b70ccb69e88f", "text": "In this paper, for the first time, an ensemble of deep learning belief networks (DBN) is proposed for regression and time series forecasting. Another novel contribution is to aggregate the outputs from various DBNs by a support vector regression (SVR) model. We show the advantage of the proposed method on three electricity load demand datasets, one artificial time series dataset and three regression datasets over other benchmark methods.", "title": "" }, { "docid": "7aaa9cb86b17fdd5672677eefb17bf76", "text": "Although many methods are available to forecast short-term electricity load based on small scale data sets, they may not be able to accommodate large data sets as electricity load data becomes bigger and more complex in recent years. In this paper, a novel machine learning model combining convolutional neural network with K-means clustering is proposed for short-term load forecasting with improved scalability. The large data set is clustered into subsets using K-means algorithm, then the obtained subsets are used to train the convolutional neural network. A real-world power industry data set containing more than 1.4 million of load records is used in this study and the experimental results demonstrate the effectiveness of the proposed method.", "title": "" }, { "docid": "6661b5f660235cdefd31a7ce4b5f312e", "text": "Residential load forecasting has been playing an increasingly important role in modern smart grids. Due to the variability of residents’ activities, individual residential loads are usually too volatile to forecast accurately. 
A long short-term memory-based deep-learning forecasting framework with appliance consumption sequences is proposed to address this volatility problem. It is shown that the forecasting accuracy can be notably improved by including appliance measurements in the training data. The effectiveness of the proposed method is validated through extensive comparison studies on a real-world dataset.", "title": "" } ]
[ { "docid": "803b3d29c5514865cd8e17971f2dd8d6", "text": "This paper comprehensively analyzes the relationship between space-vector modulation and three-phase carrier-based pulsewidth modualtion (PWM). The relationships involved, such as the relationship between modulation signals (including zero-sequence component and fundamental components) and space vectors, the relationship between the modulation signals and the space-vector sectors, the relationship between the switching pattern of space-vector modulation and the type of carrier, and the relationship between the distribution of zero vectors and different zero-sequence signal are systematically established. All the relationships provide a bidirectional bridge for the transformation between carrier-based PWM modulators and space-vector modulation modulators. It is shown that all the drawn conclusions are independent of the load type. Furthermore, the implementations of both space-vector modulation and carrier-based PWM in a closed-loop feedback converter are discussed.", "title": "" }, { "docid": "f2edf7cc3671b38ae5f597e840eda3a2", "text": "This paper describes the process of creating a design pattern management interface for a collection of mobile design patterns. The need to communicate how patterns are interrelated and work together to create solutions motivated the creation of this interface. Currently, most design pattern collections are presented in alphabetical lists. The Oracle Mobile User Experience team approach is to communicate relationships visually by highlighting and connecting related patterns. Before the team designed the interface, we first analyzed common relationships between patterns and created a pattern language map. Next, we organized the patterns into conceptual design categories. Last, we designed a pattern management interface that enables users to browse patterns and visualize their relationships.", "title": "" }, { "docid": "f93c47dae193e00ca9fc052028b6167f", "text": "© International Association for Applied Psychology, 2005. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. Blackwell Publishing, Ltd. Oxford, UK APPS pplied Psychology: an International Review 0269-994X © Int rnational Association for Applied Psychology, 2005 ri 2005 54 2 riginal Arti le SELF-REGULATION IN THE CLASS OOM OEKA RTS and CORNO Self-Regulation in the Classroom: A Perspective on Assessment and Intervention", "title": "" }, { "docid": "a15e57a3153d5ef4286e1932bf65e0c0", "text": "We introduce Snap, a framework for packet processing that outperforms traditional software routers by exploiting the parallelism available on modern GPUs. While obtaining high performance, it remains extremely flexible, with packet processing tasks implemented as simple modular elements that are composed to build fully functional routers and switches. Snap is based on the Click modular router, which it extends by adding new architectural features that support batched packet processing, memory structures optimized for offloading to coprocessors, and asynchronous scheduling with in-order completion. We show that Snap can run complex pipelines at high speeds on commodity PC hardware by building an IP router incorporating both an IDS-like full-packet string matcher and an SDN-like packet classifier. In this configuration, Snap is able to forward 40 million packets per second, saturating four 10 Gbps NICs at packet sizes as small as 128 byes. 
This represents an increase in throughput of nearly 4x over the baseline Click running comparable elements on the CPU.", "title": "" }, { "docid": "9363421f524b4990c5314298a7e56e80", "text": "Three years ago, researchers at the secretive Google X lab in Mountain View, California, extracted some 10 million still images from YouTube videos and fed them into Google Brain — a network of 1,000 computers programmed to soak up the world much as a human toddler does. After three days looking for recurring patterns, Google Brain decided, all on its own, that there were certain repeating categories it could identify: human faces, human bodies and … cats 1. Google Brain's discovery that the Internet is full of cat videos provoked a flurry of jokes from journalists. But it was also a landmark in the resurgence of deep learning: a three-decade-old technique in which massive amounts of data and processing power help computers to crack messy problems that humans solve almost intuitively, from recognizing faces to understanding language. Deep learning itself is a revival of an even older idea for computing: neural networks. These systems, loosely inspired by the densely interconnected neurons of the brain, mimic human learning by changing the strength of simulated neural connections on the basis of experience. Google Brain, with about 1 million simulated neurons and 1 billion simulated connections, was ten times larger than any deep neural network before it. Project founder Andrew Ng, now director of the Artificial Intelligence Laboratory at Stanford University in California, has gone on to make deep-learning systems ten times larger again. Such advances make for exciting times in THE LEARNING MACHINES Using massive amounts of data to recognize photos and speech, deep-learning computers are taking a big step towards true artificial intelligence.", "title": "" }, { "docid": "70ea3e32d4928e7fd174b417ec8b6d0e", "text": "We show that invariance in a deep neural network is equivalent to information minimality of the representation it computes, and that stacking layers and injecting noise during training naturally bias the network towards learning invariant representations. Then, we show that overfitting is related to the quantity of information stored in the weights, and derive a sharp bound between this information and the minimality and Total Correlation of the layers. This allows us to conclude that implicit and explicit regularization of the loss function not only help limit overfitting, but also foster invariance and disentangling of the learned representation. We also shed light on the properties of deep networks in relation to the geometry of the loss function.", "title": "" }, { "docid": "9c05452b964c67b8f79ce7dfda4a72e5", "text": "The Internet is evolving rapidly toward the future Internet of Things (IoT), which will potentially connect billions or even trillions of edge devices which could generate huge amounts of data at a very high speed, and some of the applications may require very low latency. The traditional cloud infrastructure will run into a series of difficulties due to centralized computation, storage, and networking in a small number of datacenters, and due to the relatively long distance between the edge devices and the remote datacenters. To tackle this challenge, edge cloud and edge computing seem to be a promising possibility which provides resources closer to the resource-poor edge IoT devices and potentially can nurture a new IoT innovation ecosystem.
Such prospect is enabled by a series of emerging technologies, including network function virtualization and software defined networking. In this survey paper, we investigate the key rationale, the state-of-the-art efforts, the key enabling technologies and research topics, and typical IoT applications benefiting from edge cloud. We aim to draw an overall picture of both ongoing research efforts and future possible research directions through comprehensive discussions.", "title": "" }, { "docid": "2477e41b180e29112e9d10cecd021034", "text": "OBJECTIVE\nResearch in both animals and humans indicates that cannabidiol (CBD) has antipsychotic properties. The authors assessed the safety and effectiveness of CBD in patients with schizophrenia.\n\n\nMETHOD\nIn an exploratory double-blind parallel-group trial, patients with schizophrenia were randomized in a 1:1 ratio to receive CBD (1000 mg/day; N=43) or placebo (N=45) alongside their existing antipsychotic medication. Participants were assessed before and after treatment using the Positive and Negative Syndrome Scale (PANSS), the Brief Assessment of Cognition in Schizophrenia (BACS), the Global Assessment of Functioning scale (GAF), and the improvement and severity scales of the Clinical Global Impressions Scale (CGI-I and CGI-S).\n\n\nRESULTS\nAfter 6 weeks of treatment, compared with the placebo group, the CBD group had lower levels of positive psychotic symptoms (PANSS: treatment difference=-1.4, 95% CI=-2.5, -0.2) and were more likely to have been rated as improved (CGI-I: treatment difference=-0.5, 95% CI=-0.8, -0.1) and as not severely unwell (CGI-S: treatment difference=-0.3, 95% CI=-0.5, 0.0) by the treating clinician. Patients who received CBD also showed greater improvements that fell short of statistical significance in cognitive performance (BACS: treatment difference=1.31, 95% CI=-0.10, 2.72) and in overall functioning (GAF: treatment difference=3.0, 95% CI=-0.4, 6.4). CBD was well tolerated, and rates of adverse events were similar between the CBD and placebo groups.\n\n\nCONCLUSIONS\nThese findings suggest that CBD has beneficial effects in patients with schizophrenia. As CBD's effects do not appear to depend on dopamine receptor antagonism, this agent may represent a new class of treatment for the disorder.", "title": "" }, { "docid": "045a4622691d1ae85593abccb823b193", "text": "The capability of Corynebacterium glutamicum for glucose-based synthesis of itaconate was explored, which can serve as building block for production of polymers, chemicals, and fuels. C. glutamicum was highly tolerant to itaconate and did not metabolize it. Expression of the Aspergillus terreus CAD1 gene encoding cis-aconitate decarboxylase (CAD) in strain ATCC13032 led to the production of 1.4mM itaconate in the stationary growth phase. Fusion of CAD with the Escherichia coli maltose-binding protein increased its activity and the itaconate titer more than two-fold. Nitrogen-limited growth conditions boosted CAD activity and itaconate titer about 10-fold to values of 1440 mU mg(-1) and 30 mM. Reduction of isocitrate dehydrogenase activity via exchange of the ATG start codon to GTG or TTG resulted in maximal itaconate titers of 60 mM (7.8 g l(-1)), a molar yield of 0.4 mol mol(-1), and a volumetric productivity of 2.1 mmol l(-1) h(-1).", "title": "" }, { "docid": "f2b6afabd67354280d091d11e8265b96", "text": "This paper aims to present three new methods for color detection and segmentation of road signs. 
The images are taken by a digital camera mounted in a car. The RGB images are converted into IHLS color space, and new methods are applied to extract the colors of the road signs under consideration. The methods are tested on hundreds of outdoor images in different light conditions, and they show high robustness. This project is part of the research taking place in Dalarna University/Sweden in the field of the ITS", "title": "" }, { "docid": "106e6eb9bfd9cf4f64487270901093f0", "text": "Neural Machine Translation (NMT) has recently attracted a lot of attention due to the very high performance achieved by deep neural networks in other domains. An inherent weakness in existing NMT systems is their inability to correctly translate rare words: end-to-end NMTs tend to have relatively small vocabularies with a single “unknown-word” symbol representing every possible out-of-vocabulary (OOV) word. In this paper, we propose and implement a simple technique to address this problem. We train an NMT system on data that is augmented by the output of a word alignment algorithm, allowing the NMT system to output, for each OOV word in the target sentence, its corresponding word in the source sentence. This information is later utilized in a post-processing step that translates every OOV word using a dictionary. Our experiments on the WMT ’14 English to French translation task show that this simple method provides a substantial improvement over an equivalent NMT system that does not use this technique. The performance of our system achieves a BLEU score of 36.9, which improves the previous best end-to-end NMT by 2.1 points. Our model matches the performance of the state-of-the-art system while using three times less data.", "title": "" }, { "docid": "9167fbdd1fe4d5c17ffeaf50c6fd32b7", "text": "For many networked games, such as the Defense of the Ancients and StarCraft series, the unofficial leagues created by players themselves greatly enhance user-experience, and extend the success of each game. Understanding the social structure that players of these games implicitly form helps to create innovative gaming services to the benefit of both players and game operators. But how to extract and analyse the implicit social structure? We address this question by first proposing a formalism consisting of various ways to map interaction to social structure, and apply this to real-world data collected from three different game genres. We analyse the implications of these mappings for in-game and gaming-related services, ranging from network and socially-aware matchmaking of players, to an investigation of social network robustness against player departure.", "title": "" }, { "docid": "175f82940aa18fe390d1ef03835de8cc", "text": "We address personalization issues of image captioning, which have not been discussed yet in previous research. For a query image, we aim to generate a descriptive sentence, accounting for prior knowledge such as the user's active vocabularies in previous documents. As applications of personalized image captioning, we tackle two post automation tasks: hashtag prediction and post generation, on our newly collected Instagram dataset, consisting of 1.1M posts from 6.3K users. We propose a novel captioning model named Context Sequence Memory Network (CSMN). 
Its unique updates over previous memory network models include (i) exploiting memory as a repository for multiple types of context information, (ii) appending previously generated words into memory to capture long-term information without suffering from the vanishing gradient problem, and (iii) adopting CNN memory structure to jointly represent nearby ordered memory slots for better context understanding. With quantitative evaluation and user studies via Amazon Mechanical Turk, we show the effectiveness of the three novel features of CSMN and its performance enhancement for personalized image captioning over state-of-the-art captioning models.", "title": "" }, { "docid": "fc3d4b4ac0d13b34aeadf5806013689d", "text": "Internet of Things (IoT) is one of the emerging technologies of this century and its various aspects, such as the Infrastructure, Security, Architecture and Privacy, play an important role in shaping the future of the digitalised world. Internet of Things devices are connected through sensors which have significant impacts on the data and its security. In this research, we used IoT five layered architecture of the Internet of Things to address the security and private issues of IoT enabled services and applications. Furthermore, a detailed survey on Internet of Things infrastructure, architecture, security, and privacy of the heterogeneous objects were presented. The paper identifies the major challenge in the field of IoT; one of them is to secure the data while accessing the objects through sensing machines. This research advocates the importance of securing the IoT ecosystem at each layer resulting in an enhanced overall security of the connected devices as well as the data generated. Thus, this paper put forwards a security model to be utilised by the researchers, manufacturers and developers of IoT devices, applications and services.", "title": "" }, { "docid": "ca5ce2d6239bb66ced6d56fd6a4e4c70", "text": "Bare-metal clouds are an emerging and attractive platform for cloud users who demand extreme computer performance. Bare-metal clouds lease physical machines rather than virtual machines, eliminating a virtualization overhead and providing maximum computer hardware performance. Therefore, bare-metal clouds are suitable for applications that require intensive, consistent, and predictable performance, such as big-data and high-performance computing applications. Unfortunately, existing bare-metal clouds do not support live migration because they lack virtualization layers. Live migration is an essential feature for bare-metal cloud vendors to perform proactive maintenance and fault tolerance that can avoid long user application downtime when underlying physical hardware is about to fail. Existing live migration approaches require either a virtualization overhead or OS-dependence and are therefore unsuitable for bare-metal clouds. This paper introduces an OS-independent live migration scheme for bare-metal clouds. We utilize a very thin hypervisor layer that does not virtualize hardware and directly exposes physical hardware to a guest OS. During live migration, the hypervisor carefully monitors and controls access to physical devices to capture, transfer, and restore the device states while the guest OS is still controlling the devices. After live migration, the hypervisor does almost nothing to eliminate the virtualization overhead and provide bare-metal performance for the guest OS. 
Experimental results confirmed that network performance of our system was comparable with that of bare-metal machines.", "title": "" }, { "docid": "dc6fe019c28ed63f435f295534f944a1", "text": "Research on integrated neural-symbolic systems has made significant progress in the recent past. In particular the understanding of ways to deal with symbolic knowledge within connectionist systems (also called artificial neural networks) has reached a critical mass which enables the community to strive for applicable implementations and use cases. Recent work has covered a great variety of logics used in artificial intelligence and provides a multitude of techniques for dealing with them within the context of artificial neural networks. Already in the pioneering days of computational models of neural cognition, the question was raised how symbolic knowledge can be represented and dealt with within neural networks. The landmark paper [McCulloch and Pitts, 1943] provides fundamental insights how propositional logic can be processed using simple artificial neural networks. Within the following decades, however, the topic did not receive much attention as research in artificial intelligence initially focused on purely symbolic approaches. The power of machine learning using artificial neural networking was not recognized until the 80s, when in particular the backpropagation algorithm [Rumelhart et al., 1986] made connectionist learning feasible and applicable in practice. These advances indicated a breakthrough in machine learning which quickly led to industrial-strength applications in areas such as image analysis, speech and pattern recognition, investment analysis, engine monitoring, fault diagnosis, etc. During a training process from raw data, artificial neural networks acquire expert knowledge about the problem domain, and the ability to generalize this knowledge to similar but previously unencountered situations in a way which often surpasses the abilities of human experts. The knowledge obtained during the training process, however, is hidden within", "title": "" }, { "docid": "b32e1d3474c5db96f188981b29cbb9c0", "text": "An adversarial example is an example that has been adjusted to produce a wrong label when presented to a system at test time. To date, adversarial example constructions have been demonstrated for classifiers, but not for detectors. If adversarial examples that could fool a detector exist, they could be used to (for example) maliciously create security hazards on roads populated with smart vehicles. In this paper, we demonstrate a construction that successfully fools two standard detectors, Faster RCNN and YOLO. The existence of such examples is surprising, as attacking a classifier is very different from attacking a detector, and that the structure of detectors – which must search for their own bounding box, and which cannot estimate that box very accurately – makes it quite likely that adversarial patterns are strongly disrupted. We show that our construction produces adversarial examples that generalize well across sequences digitally, even though large perturbations are needed. We also show that our construction yields physical objects that are adversarial.", "title": "" }, { "docid": "59d17595e0535a086cb019afe696089c", "text": "In this paper we present a novel disk-based distributed column-store, describe its architecture and discuss a number of technical solutions. Our system is essentially a query engine which was written completely from scratch. 
It is aimed for shared-nothing environments and supports different forms of parallel query processing. Query processing in PosDB is organized according to the classic Volcano pull-based model which is adapted for the column-store case. Currently, we support late materialization only, and therefore employ a join index data structure to represent positional information. In our system query plan can consist of both positional and value operators. PosDB has about a dozen of core operators among which several variants of selections and joins, aggregation. We also have several operators that ensure intra-query parallelism and operators for network interoperability. In its current state the system is fully capable of processing the Star Schema Benchmark in a local and distributed environment.", "title": "" }, { "docid": "217dfc849cea5e0d80555790362af2e7", "text": "Research examining online political forums has until now been overwhelmingly guided by two broad perspectives: (1) a deliberative conception of democratic communication and (2) a diverse collection of incommensurable multi-sphere approaches. While these literatures have contributed many insightful observations, their disadvantages have left many interesting communicative dynamics largely unexplored. This article seeks to introduce a new framework for evaluating online political forums (based on the work of Jürgen Habermas and Lincoln Dahlberg) that addresses the shortcomings of prior approaches by identifying three distinct, overlapping models of democracy that forums may manifest: the liberal, the communitarian and the deliberative democratic. For each model, a set of definitional variables drawn from the broader online forum literature is documented and discussed.", "title": "" } ]
scidocsrr
309e6a9f5bdabc69181b6194cff9f9e0
Effective Strip Noise Removal for Low-Textured Infrared Images Based on 1-D Guided Filtering
[ { "docid": "e4c5df9b038e69c5ddd919e4284c07b0", "text": "Many computer vision tasks can be formulated as labeling problems. The desired solution is often a spatially smooth labeling where label transitions are aligned with color edges of the input image. We show that such solutions can be efficiently achieved by smoothing the label costs with a very fast edge-preserving filter. In this paper, we propose a generic and simple framework comprising three steps: 1) constructing a cost volume, 2) fast cost volume filtering, and 3) Winner-Takes-All label selection. Our main contribution is to show that with such a simple framework state-of-the-art results can be achieved for several computer vision applications. In particular, we achieve 1) disparity maps in real time whose quality exceeds those of all other fast (local) approaches on the Middlebury stereo benchmark, and 2) optical flow fields which contain very fine structures as well as large displacements. To demonstrate robustness, the few parameters of our framework are set to nearly identical values for both applications. Also, competitive results for interactive image segmentation are presented. With this work, we hope to inspire other researchers to leverage this framework to other application areas.", "title": "" }, { "docid": "0771cd99e6ad19deb30b5c70b5c98183", "text": "We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2D image fragments (e.g., blocks) into 3D data arrays which we call \"groups.\" Collaborative Altering is a special procedure developed to deal with these 3D groups. We realize it using the three successive steps: 3D transformation of a group, shrinkage of the transform spectrum, and inverse 3D transformation. The result is a 3D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.", "title": "" } ]
[ { "docid": "514884603994f56573696002f5bd6599", "text": "Mobile multimedia networks are enlarging the Internet of Things (IoT) portfolio with a huge number of multimedia services for different applications. Those services run on dynamic topologies due to device mobility or failures and wireless channel impairments, such as mobile robots or Unmanned Aerial Vehicle (UAV) environments for rescue or surveillance missions. In those scenarios, beaconless Opportunistic Routing (OR) allows increasing the robustness of systems for supporting routing decisions in a completely distributed manner. Moreover, the addition of a cross-layer scheme enhances the benefits of a beaconless OR, and also enables multimedia dissemination with Quality of Experience (QoE) support. However, existing beaconless OR approaches do not support a reliable and efficient cross-layer scheme to enable effective multimedia transmission under topology changes, increasing the packet loss rate, and thus reducing the video quality level based on the user’s experience. This article proposes a Link quality and Geographical beaconless OR protocol for efficient video dissemination for mobile multimedia IoT, called LinGO. This protocol relies on a beaconless OR approach and uses multiple metrics for routing decisions, including link quality, geographic location, and energy. A QoE/video-aware optimisation scheme allows increasing the packet delivery rate in presence of links errors, by adding redundant video packets based on the frame importance from the human’s point-of-view. Simulation results show that LinGO delivers live video flows with QoE support and robustness in mobile and dynamic topologies, as needed in future IoT environments. 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "42e4f07ccb9673b32d7c2368cc013eac", "text": "This paper proposes a framework to aid video analysts in detecting suspicious activity within the tremendous amounts of video data that exists in today's world of omnipresent surveillance video. Ideas and techniques for closing the semantic gap between low-level machine readable features of video data and high-level events seen by a human observer are discussed. An evaluation of the event classification and diction technique is presented and future an experiment to refine this technique is proposed. These experiments are used as a lead to a discussion on the most optimal machine learning algorithm to learn the event representation scheme proposed in this paper.", "title": "" }, { "docid": "189cc09c72686ae7282eef04c1b365f1", "text": "With the rapid growth of the internet as well as increasingly more accessible mobile devices, the amount of information being generated each day is enormous. We have many popular websites such as Yelp, TripAdvisor, Grubhub etc. that offer user ratings and reviews for different restaurants in the world. In most cases, though, the user is just interested in a small subset of the available information, enough to get a general overview of the restaurant and its popular dishes. In this paper, we present a way to mine user reviews to suggest popular dishes for each restaurant. Specifically, we propose a method that extracts and categorize dishes from Yelp restaurant reviews, and then ranks them to recommend the most popular dishes.", "title": "" }, { "docid": "d4cf614c352b3bbef18d7f219a3da2d1", "text": "In recent years there has been growing interest on the occurrence and the fate of pharmaceuticals in the aquatic environment. 
Nevertheless, few data are available covering the fate of the pharmaceuticals in the water/sediment compartment. In this study, the environmental fate of 10 selected pharmaceuticals and pharmaceutical metabolites was investigated in water/sediment systems including both the analysis of water and sediment. The experiments covered the application of four 14C-labeled pharmaceuticals (diazepam, ibuprofen, iopromide, and paracetamol) for which radio-TLC analysis was used as well as six nonlabeled compounds (carbamazepine, clofibric acid, 10,11-dihydro-10,11-dihydroxycarbamazepine, 2-hydroxyibuprofen, ivermectin, and oxazepam), which were analyzed via LC-tandem MS. Ibuprofen, 2-hydroxyibuprofen, and paracetamol displayed a low persistence with DT50 values in the water/sediment system < or =20 d. The sediment played a key role in the elimination of paracetamol due to the rapid and extensive formation of bound residues. A moderate persistence was found for ivermectin and oxazepam with DT50 values of 15 and 54 d, respectively. Iopromide, for which no corresponding DT50 values could be calculated, also exhibited a moderate persistence and was transformed into at least four transformation products. For diazepam, carbamazepine, 10,11-dihydro-10,11-dihydroxycarbamazepine, and clofibric acid, system DT90 values of >365 d were found, which exhibit their high persistence in the water/sediment system. An elevated level of sorption onto the sediment was observed for ivermectin, diazepam, oxazepam, and carbamazepine. Respective Koc values calculated from the experimental data ranged from 1172 L x kg(-1) for ivermectin down to 83 L x kg(-1) for carbamazepine.", "title": "" }, { "docid": "0762f2778f3d9f7da10b8c51b2ff7ff5", "text": "We propose a real-time, robust to outliers and accurate solution to the Perspective-n-Point (PnP) problem. The main advantages of our solution are twofold: first, it integrates the outlier rejection within the pose estimation pipeline with a negligible computational overhead, and second, its scalability to arbitrarily large number of correspondences. Given a set of 3D-to-2D matches, we formulate pose estimation problem as a low-rank homogeneous system where the solution lies on its 1D null space. Outlier correspondences are those rows of the linear system which perturb the null space and are progressively detected by projecting them on an iteratively estimated solution of the null space. Since our outlier removal process is based on an algebraic criterion which does not require computing the full-pose and reprojecting back all 3D points on the image plane at each step, we achieve speed gains of more than 100× compared to RANSAC strategies. An extensive experimental evaluation will show that our solution yields accurate results in situations with up to 50% of outliers, and can process more than 1000 correspondences in less than 5ms.", "title": "" }, { "docid": "0822720d8bb0222bd7f0f758fa93ff9d", "text": "Hydrogen can be recovered by fermentation of organic material rich in carbohydrates, but much of the organic matter remains in the form of acetate and butyrate. An alternative to methane production from this organic matter is the direct generation of electricity in a microbial fuel cell (MFC). Electricity generation using a single-chambered MFC was examined using acetate or butyrate. 
Power generated with acetate (800 mg/L) (506 mW/m2 or 12.7 mW/ L) was up to 66% higher than that fed with butyrate (1000 mg/L) (305 mW/m2 or 7.6 mW/L), demonstrating that acetate is a preferred aqueous substrate for electricity generation in MFCs. Power output as a function of substrate concentration was well described by saturation kinetics, although maximum power densities varied with the circuit load. Maximum power densities and half-saturation constants were Pmax ) 661 mW/m2 and Ks ) 141 mg/L for acetate (218 Ω) and Pmax ) 349 mW/m2 and Ks ) 93 mg/L for butyrate (1000 Ω). Similar open circuit potentials were obtained in using acetate (798 mV) or butyrate (795 mV). Current densities measured for stable power output were higher for acetate (2.2 A/m2) than those measured in MFCs using butyrate (0.77 A/m2). Cyclic voltammograms suggested that the main mechanism of power production in these batch tests was by direct transfer of electrons to the electrode by bacteria growing on the electrode and not by bacteria-produced mediators. Coulombic efficiencies and overall energy recovery were 10-31 and 3-7% for acetate and 8-15 and 2-5% for butyrate, indicating substantial electron and energy losses to processes other than electricity generation. These results demonstrate that electricity generation is possible from soluble fermentation end products such as acetate and butyrate, but energy recoveries should be increased to improve the overall process performance.", "title": "" }, { "docid": "e72872277a33dcf6d5c1f7e31f68a632", "text": "Tilt rotor unmanned aerial vehicle (TRUAV) with ability of hovering and high-speed cruise has attached much attention, but its transition control is still a difficult point because of varying dynamics. This paper proposes a multi-model adaptive control (MMAC) method for a quad-TRUAV, and the stability in the transition procedure could be ensured by considering corresponding dynamics. For safe transition, tilt corridor is considered firstly, and actual flight status should locate within it. Then, the MMAC controller is constructed according to mode probabilities, which are calculated by solving a quadratic programming problem based on a set of input- output plant models. Compared with typical gain scheduling control, this method could ensure transition stability more effectively.", "title": "" }, { "docid": "64d72ffe736831266acde9726d6d039f", "text": "Recently, image caption which aims to generate a textual description for an image automatically has attracted researchers from various fields. Encouraging performance has been achieved by applying deep neural networks. Most of these works aim at generating a single caption which may be incomprehensive, especially for complex images. This paper proposes a topic-specific multi-caption generator, which infer topics from image first and then generate a variety of topic-specific captions, each of which depicts the image from a particular topic. We perform experiments on flickr8k, flickr30k and MSCOCO. The results show that the proposed model performs better than single-caption generator when generating topic-specific captions. The proposed model effectively generates diversity of captions under reasonable topics and they differ from each other in topic level.", "title": "" }, { "docid": "e83ad9ba6d0d134b9691714fcdfe165e", "text": "With the adoption of a globalized and distributed IC design flow, IP piracy, reverse engineering, and counterfeiting threats are becoming more prevalent. 
Logic obfuscation techniques including logic locking and IC camouflaging have been developed to address these emergent challenges. A major challenge for logic locking and camouflaging techniques is to resist Boolean satisfiability (SAT) based attacks that can circumvent state-of-the-art solutions within minutes. Over the past year, multiple SAT attack resilient solutions such as Anti-SAT and AND-tree insertion (ATI) have been presented. In this paper, we perform a security analysis of these countermeasures and show that they leave structural traces behind in their attempts to thwart the SAT attack. We present three attacks, namely “signal probability skew” (SPS) attack, “AppSAT guided removal (AGR) attack, and “sensitization guided SAT” (SGS) attack”, that can break Anti-SAT and ATI, within minutes.", "title": "" }, { "docid": "95b2219dc34de9f0fe40e84e6df8a1e3", "text": "Most computer vision and especially segmentation tasks require to extract features that represent local appearance of patches. Relevant features can be further processed by learning algorithms to infer posterior probabilities that pixels belong to an object of interest. Deep Convolutional Neural Networks (CNN) define a particularly successful class of learning algorithms for semantic segmentation, although they proved to be very slow to train even when employing special purpose hardware. We propose, for the first time, a general purpose segmentation algorithm to extract the most informative and interpretable features as convolution kernels while simultaneously building a multivariate decision tree. The algorithm trains several orders of magnitude faster than regular CNNs and achieves state of the art results in processing quality on benchmark datasets.", "title": "" }, { "docid": "7897f052c891e330988296e3d6306c39", "text": "Sleep quality is an important factor for human physical and mental health, day-time performance, and safety. Sufficient sleep quality can reduce risk of chronic disease and mental depression. Sleep helps brain to work properly that can improve productivity and prevent accident because of falling asleep. In order to analyze the sleep quality, reliable continuous monitoring system is required. The emergence of internet-of-things technology has provided a promising opportunity to build a reliable sleep quality monitoring system by leveraging the rapid improvement of sensor and mobile technology. This paper presents the literature study about internet of things for sleep quality monitoring systems. The study is started from the review of sleep quality problem, the importance of sleep quality monitoring, the enabling internet of things technology, and the open issues in this field. Finally, our future research plan for sleep apnea monitoring is presented.", "title": "" }, { "docid": "0bf3c08b71fedd629bdc584c3deeaa34", "text": "Unsupervised learning of linguistic structure is a difficult problem. A common approach is to define a generative model and maximize the probability of the hidden structure given the observed data. Typically, this is done using maximum-likelihood estimation (MLE) of the model parameters. We show using part-of-speech tagging that a fully Bayesian approach can greatly improve performance. Rather than estimating a single set of parameters, the Bayesian approach integrates over all possible parameter values. 
This difference ensures that the learned structure will have high probability over a range of possible parameters, and permits the use of priors favoring the sparse distributions that are typical of natural language. Our model has the structure of a standard trigram HMM, yet its accuracy is closer to that of a state-of-the-art discriminative model (Smith and Eisner, 2005), up to 14 percentage points better than MLE. We find improvements both when training from data alone, and using a tagging dictionary.", "title": "" }, { "docid": "df0254b9a2d1aad1113444ac00c9b7a4", "text": "Bad code smells have been defined as indicators of potential problems in source code. Techniques to identify and mitigate bad code smells have been proposed and studied. Recently bad test code smells (test smells for short) have been put forward as a kind of bad code smell specific to tests such a unit tests. What has been missing is empirical investigation into the prevalence and impact of bad test code smells. Two studies aimed at providing this missing empirical data are presented. The first study finds that there is a high diffusion of test smells in both open source and industrial software systems with 86 % of JUnit tests exhibiting at least one test smell and six tests having six distinct test smells. The second study provides evidence that test smells have a strong negative impact on program comprehension and maintenance. Highlights from this second study include the finding that comprehension is 30 % better in the absence of test smells.", "title": "" }, { "docid": "c07c69bf5e2fce6f9944838ce80b5b8c", "text": "Many image editing applications rely on the analysis of image patches. In this paper, we present a method to analyze patches by embedding them to a vector space, in which the Euclidean distance reflects patch similarity. Inspired by Word2Vec, we term our approach Patch2Vec. However, there is a significant difference between words and patches. Words have a fairly small and well defined dictionary. Image patches, on the other hand, have no such dictionary and the number of different patch types is not well defined. The problem is aggravated by the fact that each patch might contain several objects and textures. Moreover, Patch2Vec should be universal because it must be able to map never-seen-before texture to the vector space. The mapping is learned by analyzing the distribution of all natural patches. We use Convolutional Neural Networks (CNN) to learn Patch2Vec. In particular, we train a CNN on labeled images with a triplet-loss objective function. The trained network encodes a given patch to a 128D vector. Patch2Vec is evaluated visually, qualitatively, and quantitatively. We then use several variants of an interactive single-click image segmentation algorithm to demonstrate the power of our method.", "title": "" }, { "docid": "49158096fea4e317ac6e01e8ab9d0faf", "text": "The discriminative approach to classification using deep neural networks has become the de-facto standard in various fields. Complementing recent reservations about safety against adversarial examples, we show that conventional discriminative methods can easily be fooled to provide incorrect labels with very high confidence to out of distribution examples. We posit that a generative approach is the natural remedy for this problem, and propose a method for classification using generative models. 
At training time, we learn a generative model for each class, while at test time, given an example to classify, we query each generator for its most similar generation, and select the class corresponding to the most similar one. Our approach is general and can be used with expressive models such as GANs and VAEs. At test time, our method accurately “knows when it does not know,” and provides resilience to out of distribution examples while maintaining competitive performance for standard examples.", "title": "" }, { "docid": "53562dbb7087c83c6c84875e5e784b1b", "text": "ALIZE is an open-source platform for speaker recognition. The ALIZE library implements a low-level statistical engine based on the well-known Gaussian mixture modelling. The toolkit includes a set of high level tools dedicated to speaker recognition based on the latest developments in speaker recognition such as Joint Factor Analysis, Support Vector Machine, i-vector modelling and Probabilistic Linear Discriminant Analysis. Since 2005, the performance of ALIZE has been demonstrated in series of Speaker Recognition Evaluations (SREs) conducted by NIST and has been used by many participants in the last NISTSRE 2012. This paper presents the latest version of the corpus and performance on the NIST-SRE 2010 extended task.", "title": "" }, { "docid": "ec1da767db4247990c26f97483f1b9e1", "text": "We survey foundational features underlying modern graph query languages. We first discuss two popular graph data models: edge-labelled graphs, where nodes are connected by directed, labelled edges, and property graphs, where nodes and edges can further have attributes. Next we discuss the two most fundamental graph querying functionalities: graph patterns and navigational expressions. We start with graph patterns, in which a graph-structured query is matched against the data. Thereafter, we discuss navigational expressions, in which patterns can be matched recursively against the graph to navigate paths of arbitrary length; we give an overview of what kinds of expressions have been proposed and how they can be combined with graph patterns. We also discuss several semantics under which queries using the previous features can be evaluated, what effects the selection of features and semantics has on complexity, and offer examples of such features in three modern languages that are used to query graphs: SPARQL, Cypher, and Gremlin. We conclude by discussing the importance of formalisation for graph query languages; a summary of what is known about SPARQL, Cypher, and Gremlin in terms of expressivity and complexity; and an outline of possible future directions for the area.", "title": "" }, { "docid": "1f1939221681e3597a6731903cc4b235", "text": "The Internet of Things (IoT) embodies a wide spectrum of machines ranging from sensors powered by 8-bits microcontrollers, to devices powered by processors roughly equivalent to those found in entry-level smartphones. Neither traditional operating systems (OS) currently running on internet hosts, nor typical OS for sensor networks are capable to fulfill all at once the diverse requirements of such a wide range of devices. Hence, in order to avoid redundant developments and maintenance costs of IoT products, a novel, unifying type of OS is needed. The following analyzes requirements such an OS should fulfill, and introduces RIOT, a new OS satisfying these demands. 
Key-words: Network, internet, things, objects, IoT, routing, OS, energy, efficient, operating system, protocol, IPv6, wireless, radio, constrained, embedded RIOT: One OS to Rule Them All in the IoT 3", "title": "" }, { "docid": "6934b06f35dc7855a8410329b099ca2f", "text": "Privacy protection in publishing transaction data is an important problem. A key feature of transaction data is the extreme sparsity, which renders any single technique ineffective in anonymizing such data. Among recent works, some incur high information loss, some result in data hard to interpret, and some suffer from performance drawbacks. This paper proposes to integrate generalization and suppression to reduce information loss. However, the integration is non-trivial. We propose novel techniques to address the efficiency and scalability challenges. Extensive experiments on real world databases show that this approach outperforms the state-of-the-art methods, including global generalization, local generalization, and total suppression. In addition, transaction data anonymized by this approach can be analyzed by standard data mining tools, a property that local generalization fails to provide.", "title": "" }, { "docid": "c7453c6707e3e5b987531ca0114cfc92", "text": "The aim of this paper is to present a fully integrated solution for synchronous motor control. The implemented controller is based on Actel Fusion field-programmable gate array (FPGA). The objective of this paper is to evaluate the ability of the proposed fully integrated solution to ensure all the required performances in such applications, particularly in terms of control quality and time/area performances. To this purpose, a current control algorithm of a permanent-magnet synchronous machine has been implemented. This machine is associated with a resolver position sensor. In addition to the current control closed loop, all the necessary motor control tasks are implemented in the same device. The analog-to-digital conversion is ensured by the integrated analog-to-digital converter (ADC), avoiding the use of external converters. The resolver processing unit, which computes the rotor position and speed from the resolver signals, is implemented in the FPGA matrix, avoiding the use of external resolver-to-digital converter (RDC). The sine patterns used for the Park transformation are stored in the integrated flash memory blocks.", "title": "" } ]
scidocsrr
7111023327137f62b82d1b0c9acd840e
SecureDroid: Enhancing Security of Machine Learning-based Detection against Adversarial Android Malware Attacks
[ { "docid": "67e85e8b59ec7dc8b0019afa8270e861", "text": "Machine learning’s ability to rapidly evolve to changing and complex situations has helped it become a fundamental tool for computer security. That adaptability is also a vulnerability: attackers can exploit machine learning systems. We present a taxonomy identifying and analyzing attacks against machine learning systems. We show how these classes influence the costs for the attacker and defender, and we give a formal structure defining their interaction. We use our framework to survey and analyze the literature of attacks against machine learning systems. We also illustrate our taxonomy by showing how it can guide attacks against SpamBayes, a popular statistical spam filter. Finally, we discuss how our taxonomy suggests new lines of defenses.", "title": "" }, { "docid": "9a65a5c09df7e34383056509d96e772d", "text": "With explosive growth of Android malware and due to its damage to smart phone users (e.g., stealing user credentials, resource abuse), Android malware detection is one of the cyber security topics that are of great interests. Currently, the most significant line of defense against Android malware is anti-malware software products, such as Norton, Lookout, and Comodo Mobile Security, which mainly use the signature-based method to recognize threats. However, malware attackers increasingly employ techniques such as repackaging and obfuscation to bypass signatures and defeat attempts to analyze their inner mechanisms. The increasing sophistication of Android malware calls for new defensive techniques that are harder to evade, and are capable of protecting users against novel threats. In this paper, we propose a novel dynamic analysis method named Component Traversal that can automatically execute the code routines of each given Android application (app) as completely as possible. Based on the extracted Linux kernel system calls, we further construct the weighted directed graphs and then apply a deep learning framework resting on the graph based features for newly unknown Android malware detection. A comprehensive experimental study on a real sample collection from Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that our proposed method outperforms other alternative Android malware detection techniques. Our developed system Deep4MalDroid has also been integrated into a commercial Android anti-malware software.", "title": "" }, { "docid": "39710768ed8ec899e412cccae7e7d262", "text": "Traditional classification algorithms assume that training and test data come from similar distributions. This assumption is violated in adversarial settings, where malicious actors modify instances to evade detection. A number of custom methods have been developed for both adversarial evasion attacks and robust learning. We propose the first systematic and general-purpose retraining framework which can: a) boost robustness of an arbitrary learning algorithm, in the face of b) a broader class of adversarial models than any prior methods. We show that, under natural conditions, the retraining framework minimizes an upper bound on optimal adversarial risk, and show how to extend this result to account for approximations of evasion attacks. Extensive experimental evaluation demonstrates that our retraining methods are nearly indistinguishable from state-of-the-art algorithms for optimizing adversarial risk, but are more general and far more scalable. 
The experiments also confirm that without retraining, our adversarial framework dramatically reduces the effectiveness of learning. In contrast, retraining significantly boosts robustness to evasion attacks without significantly compromising overall accuracy.", "title": "" } ]
[ { "docid": "0c3eae28505f1bc8835e118d70bc3367", "text": "Recent research [3,37,38] has proposed compute accelerators to address the energy efficiency challenge. While these compute accelerators specialize and improve the compute efficiency, they have tended to rely on address-based load/store memory interfaces that closely resemble a traditional processor core. The address-based load/store interface is particularly challenging in data-centric applications that tend to access different software data structures. While accelerators optimize the compute section, the address-based interface leads to wasteful instructions and low memory level parallelism (MLP). We study the benefits of raising the abstraction of the memory interface to data structures.\n We propose DASX (Data Structure Accelerator), a specialized state machine for data fetch that enables compute accelerators to efficiently access data structure elements in iterative program regions. DASX enables the compute accelerators to employ data structure based memory operations and relieves the compute unit from having to generate addresses for each individual object. DASX exploits knowledge of the program's iteration to i) run ahead of the compute units and gather data objects for the compute unit (i.e., compute unit memory operations do not encounter cache misses) and ii) throttle the fetch rate, adaptively tile the dataset based on the locality characteristics and guarantee cache residency. We demonstrate accelerators for three types of data structures, Vector, Key-Value (Hash) maps, and BTrees. We demonstrate the benefits of DASX on data-centric applications which have varied compute kernels but access few regular data structures. DASX achieves higher energy efficiency by eliminating data structure instructions and enabling energy efficient compute accelerators to efficiently access the data elements. We demonstrate that DASX can achieve 4.4x the performance of a multicore system by discovering more parallelism from the data structure.", "title": "" }, { "docid": "d565270afe051fd6b385fea75023b91b", "text": "AIM\nTo document the clinicopathological characteristics and analyze the possible reasons for misdiagnosis or missed diagnosis of hepatoid adenocarcinoma of the stomach (HAS), using data from a single center.\n\n\nMETHODS\nWe retrospectively analyzed 19 patients initially diagnosed as HAS and 7 patients initially diagnosed as common gastric cancer with high levels of serum α-fetoprotein (AFP). All had undergone surgical treatment, except 3 patients only had biopsies at our hospital. Immunohistochemistry for AFP and Hepatocyte antigen was performed. Final diagnosis for these 26 patients were made after HE and immunohistochemistry slides reviewed by 2 experienced pathologists. Prognostic factors were determined by univariate analysis.\n\n\nRESULTS\nNineteen cases were confirmed to be HAS. A total of 4 out of 19 cases initially diagnosed as HAS and 4 out of 7 cases initially diagnosed as common gastric adenocarcinoma were misdiagnosed/missed diagnosed, thus, the misdiagnosis/missed diagnosis rate was 30.8% (8/26). The incidence of HAS among gastric cancer in our center was 0.19% (19/9915). Sixteen (84.2%) patients showed T stages greater than T2, 12 (70.6%) patients had positive lymph nodes in 17 available patients and 3 (15.8%) of the patients with tumors presented liver metastasis at the time of diagnosis. 
Histologically, cytoplasmic staining types included 10 cases of eosinophilic, 1 case of clear, 5 cases of clear mixed with eosinophilic and 3 cases of basophilic. Fourteen (73.7%) patients expressed AFP, whereas only 6 (31.6%) were hepatocyte-positive. Univariate analysis showed that N stage (HR 2.429, P=0.007) and tumor AFP expression (HR 0.428, P=0.036) were significantly associated with disease-free survival. The median overall survival time was 12.0 months, and the median disease-free survival time was 7.0 months. Four (80%) of 5 N0 patients and 2 (50%) of 4 N1 patients survived without progression, but no N2-3 patients survived.\n\n\nCONCLUSION\nHAS remains easily being misdiagnosed/missed diagnosed based on a pathological examination, probably because the condition is rare and has various cytoplasmic types. Although the survival rate for HAS is poor, a curative effect may be achieved for N0 or N1 cases.", "title": "" }, { "docid": "7042ad365745cdb2139cf4a93cd18197", "text": "Training a deep architecture using a ranking loss has become standard for the person re-identification task. Increasingly, these deep architectures include additional components that leverage part detections, attribute predictions, pose estimators and other auxiliary information, in order to more effectively localize and align discriminative image regions. In this paper we adopt a different approach and carefully design each component of a simple deep architecture and, critically, the strategy for training it effectively for person re-identification. We extensively evaluate each design choice, leading to a list of good practices for person re-identification. By following these practices, our approach outperforms the state of the art, including more complex methods with auxiliary components, by large margins on four benchmark datasets. We also provide a qualitative analysis of our trained representation which indicates that, while compact, it is able to capture information from localized and discriminative regions, in a manner akin to an implicit attention mechanism.", "title": "" }, { "docid": "b56467b5761a1294bb2b1739d6504ef2", "text": "This paper presents the creation of a robot capable of drawing artistic portraits. The application is purely entertaining and based on existing tools for face detection and image reconstruction, as well as classical tools for trajectory planning of a 4 DOFs robot arm. The innovation of the application lies in the care we took to make the whole process as human-like as possible. The robot's motions and its drawings follow a style characteristic to humans. The portraits conserve the esthetics features of the original images. The whole process is interactive, using speech recognition and speech synthesis to conduct the scenario", "title": "" }, { "docid": "d4c7efe10b1444d0f9cb6032856ba4e1", "text": "This article provides a brief overview of several classes of fiber reinforced cement based composites and suggests future directions in FRC development. Special focus is placed on micromechanics based design methodology of strain-hardening cement based composites. As example, a particular engineered cementitious composite newly developed at the ACE-MRL at the University of Michigan is described in detail with regard to its design, material composition, processing, and mechanical properties. Three potential applications which utilize the unique properties of such composites are cited in this paper, and future research needs are identified. 
* To appear in Fiber Reinforced Concrete: Present and the Future, Eds: N. Banthia, A. Bentur, and A. Mufti, Canadian Society of Civil Engineers, 1997.", "title": "" }, { "docid": "0430eb44d0701c86cb9a2405b6b49c4e", "text": "The advent of online social networks has been one of the most exciting events in this decade. Many popular online social networks such as Twitter, LinkedIn, and Facebook have become increasingly popular. In addition, a number of multimedia networks such as Flickr have also seen an increasing level of popularity in recent years. Many such social networks are extremely rich in content, and they typically contain a tremendous amount of content and linkage data which can be leveraged for analysis. The linkage data is essentially the graph structure of the social network and the communications between entities; whereas the content data contains the text, images and other multimedia data in the network. The richness of this network provides unprecedented opportunities for data analytics in the context of social networks. This book provides a data-centric view of online social networks; a topic which has been missing from much of the literature. This chapter provides an overview of the key topics in this field, and their coverage in this book.", "title": "" }, { "docid": "8ed1f9194914b5529b4e89444b5feb45", "text": "support for the camera interfaces. My colleagues Felix Woelk and Kevin Köser I would like to thank for many fruitful discussions. I thank our system administrator Torge Storm for always fixing my machine and providing enough data space for all my sequences which was really a hard job. Of course I also would like to thank the other members of the group Jan Woetzel, Daniel Grest, Birger Streckel and Renate Staecker for their help, the discussions and providing the exciting working environment. Last but not least, I would like to express my gratitude to my wife Miriam for always supporting me and my work. I also want to thank my sons Joshua and Noah for suffering under my paper writing. Finally I thank my parents for always supporting my education and my work.", "title": "" }, { "docid": "10df94151399b5fce9a55d4c86bbc374", "text": "Do-it-yourself (DIY)-style smart home products enable users to create their own smart homes by installing sensors and actuators. DIY smart home products are a potential solution to current problems related to home automation products, such as inflexible user controls and high costs of installation. Although the expected user experience of DIY smart home products is different from that of previous home automation products, research on DIY smart home products is still in its early stages. In this paper, we report a 3-week in situ observational study involving eight households. The results suggest six stages of the DIY smart home usage cycle and design implications for improving the user experience of DIY smart home products.", "title": "" }, { "docid": "7e5c3e774572e59180637da0d3b2d71a", "text": "Relationship marketing—establishing, developing, and maintaining successful relational exchanges—constitutes a major shift in marketing theory and practice. 
After conceptualizing relationship marketing and discussing its ten forms, the authors (1) theorize that successful relationship marketing requires relationship commitment and trust, (2) model relationship commitment and trust as key mediating variables, (3) test this key mediating variable model using data from automobile tire retailers, and (4) compare their model with a rival that does not allow relationship commitment and trust to function as mediating variables. Given the favorable test results for the key mediating variable model, suggestions for further explicating and testing it are offered.", "title": "" }, { "docid": "265b352775956004436b438574ee2d91", "text": "In the fashion industry, demand forecasting is particularly complex: companies operate with a large variety of short lifecycle products, deeply influenced by seasonal sales, promotional events, weather conditions, advertising and marketing campaigns, on top of festivities and socio-economic factors. At the same time, shelf-out-of-stock phenomena must be avoided at all costs. Given the strong seasonal nature of the products that characterize the fashion sector, this paper aims to highlight how the Fourier method can represent an easy and more effective forecasting method compared to other widespread heuristics normally used. For this purpose, a comparison between the fast Fourier transform algorithm and another two techniques based on moving average and exponential smoothing was carried out on a set of 4-year historical sales data of a €60+ million turnover medium- to large-sized Italian fashion company, which operates in the women’s textiles apparel and clothing sectors. The entire analysis was performed on a common spreadsheet, in order to demonstrate that accurate results exploiting advanced numerical computation techniques can be carried out without necessarily using expensive software.", "title": "" }, { "docid": "7768c834a837d8f02ce91c4949f87d59", "text": "Gamified systems benefit from various gamification-elements to motivate users and encourage them to persist in their quests towards a goal. This paper proposes a categorization of gamification-elements and learners' motivation type to enrich a learning management system with the advantages of personalization and gamification. This categorization uses the learners' motivation type to assign gamification-elements in learning environments. To find out the probable relations between gamification-elements and learners' motivation type, a field-research is done to measure learners' motivation along with their interests in gamification-elements. Based on the results of this survey, all the gamification-elements are categorized according to related motivation types, which form our proposed categorization. To investigate the effects of this personalization approach, a gamified learning management system is prepared. Our implemented system is evaluated in Technical English course at University of Tehran. Our experimental results on the average participation rate show the effectiveness of the personalization approach on the learners' motivation. Based on the paper findings, we suggest an integrated categorization of gamification-elements and learners' motivation type, which can further enhance the learners' motivation through personalization.", "title": "" }, { "docid": "47c09f7228617da85170ea5c34d9feb2", "text": "Machine learning techniques have deeply rooted in our everyday life. 
However, since it is knowledge- and labor-intensive to pursue good learning performance, humans are heavily involved in every aspect of machine learning. To make machine learning techniques easier to apply and reduce the demand for experienced human experts, automated machine learning (AutoML) has emerged as a hot topic with both industrial and academic interest. In this paper, we provide an up-to-date survey on AutoML. First, we introduce and define the AutoML problem, with inspiration from both realms of automation and machine learning. Then, we propose a general AutoML framework that not only covers most existing approaches to date, but also can guide the design for new methods. Subsequently, we categorize and review the existing works from two aspects, i.e., the problem setup and the employed techniques. The proposed framework and taxonomies provide a detailed analysis of AutoML approaches and explain the reasons underlying their successful applications. We hope this survey can serve as not only an insightful guideline for AutoML beginners but also an inspiration for future research.", "title": "" }, { "docid": "c71d27d4e4e9c85e3f5016fa36d20a16", "text": "We present GEM, the first heterogeneous graph neural network approach for detecting malicious accounts at Alipay, one of the world's leading mobile cashless payment platforms. Our approach, inspired by a connected subgraph approach, adaptively learns discriminative embeddings from heterogeneous account-device graphs based on two fundamental weaknesses of attackers, i.e. device aggregation and activity aggregation. Since the heterogeneous graph consists of various types of nodes, we propose an attention mechanism to learn the importance of different types of nodes, while using the sum operator for modeling the aggregation patterns of nodes in each type. Experiments show that our approach consistently achieves promising results compared with competitive methods over time.", "title": "" }, { "docid": "48568865b27e8edb88d4683e702dd4f8", "text": "This study investigates how individuals process an online product review when an avatar is included to represent the peer reviewer. The researchers predicted that both perceived avatar and textual credibility would have a positive influence on perceptions of source trustworthiness and the data supported this prediction. Expectancy violations theory also predicted that discrepancies between the perceived avatar and textual credibility would produce violations. Violations were statistically captured using a residual analysis. The results of this research ultimately demonstrated that discrepancies in perceived avatar and textual credibility can have a significant impact on perceptions of source trustworthiness. These findings suggest that predicting perceived source trustworthiness in an online consumer review setting goes beyond the linear effects of avatar and textual credibility.", "title": "" }, { "docid": "1dd4a95adcd4f9e7518518148c3605ac", "text": "Kernel modules are an integral part of most operating systems (OS) as they provide flexible ways of adding new functionalities (such as file system or hardware support) to the kernel without the need to recompile or reload the entire kernel. Aside from providing an interface between the user and the hardware, these modules maintain system security and reliability. Malicious kernel level exploits (e.g. code injections) provide a gateway to a system's privileged level where the attacker has access to an entire system.
Such attacks may be detected by performing code integrity checks. Several commodity operating systems (such as Linux variants and MS Windows) maintain signatures of different pieces of kernel code in a database for code integrity checking purposes. However, it quickly becomes cumbersome and time consuming to maintain a database of legitimate dynamic changes in the code, such as regular module updates. In this paper we present Mod Checker, which checks in-memory kernel modules' code integrity in real time without maintaining a database of hashes. Our solution applies to virtual environments that have multiple virtual machines (VMs) running the same version of the operating system, an environment commonly found in large cloud servers. Mod Checker compares kernel module among a pool of VMs within a cloud. We thoroughly evaluate the effectiveness and runtime performance of Mod Checker and conclude that Mod Checker is able to detect any change in a kernel module's headers and executable content with minimal or no impact on the guest operating systems' performance.", "title": "" }, { "docid": "23bbd88d88de6b158cd89b1655216b86", "text": "This paper presents a novel algorithmic method for automatically generating personal handwriting styles of Chinese characters through an example-based approach. The method first splits a whole Chinese character into multiple constituent parts, such as strokes, radicals, and frequent character components. The algorithm then analyzes and learns the characteristics of character handwriting styles both defined in the Chinese national font standard and those exhibited in a person's own handwriting records. In such an analysis process, we adopt a parametric representation of character shapes and also examine the spatial relationships between multiple constituent components of a character. By imitating shapes of individual character components as well as the spatial relationships between them, the proposed method can automatically generate personalized handwritings following an example-based approach. To explore the quality of our automatic generation algorithm, we compare the computer generated results with the authentic human handwriting samples, which appear satisfying for entertainment or mobile applications as agreed by Chinese subjects in our user study.", "title": "" }, { "docid": "bbfb507a6791ca4f703f70e52d5d760f", "text": "Cooperative adaptive cruise control (CACC) allows for short-distance automatic vehicle following using intervehicle wireless communication in addition to onboard sensors, thereby potentially improving road throughput. In order to fulfill performance, safety, and comfort requirements, a CACC-equipped vehicle platoon should be string stable, attenuating the effect of disturbances along the vehicle string. Therefore, a controller design method is developed that allows for explicit inclusion of the string stability requirement in the controller synthesis specifications. To this end, the notion of string stability is introduced first, and conditions for L2 string stability of linear systems are presented that motivate the development of an H∞ controller synthesis approach for string stability. The potential of this approach is illustrated by its application to the design of controllers for CACC for one- and two-vehicle look-ahead communication topologies. As a result, L2 string-stable platooning strategies are obtained in both cases, also revealing that the two-vehicle look-ahead topology is particularly effective at a larger communication delay. 
Finally, the results are experimentally validated using a platoon of three passenger vehicles, illustrating the practical feasibility of this approach.", "title": "" }, { "docid": "51b327f1845e10be2d5ef0b23979b333", "text": "Planning in large partially observable Markov decision processes (POMDPs) is challenging especially when a long planning horizon is required. A few recent algorithms successfully tackle this case but at the expense of a weaker information-gathering capacity. In this paper, we propose Information Gathering and Reward Exploitation of Subgoals (IGRES), a randomized POMDP planning algorithm that leverages information in the state space to automatically generate “macro-actions” to tackle tasks with long planning horizons, while locally exploring the belief space to allow effective information gathering. Experimental results show that IGRES is an effective multi-purpose POMDP solver, providing state-of-the-art performance for both long horizon planning tasks and information-gathering tasks on benchmark domains. Additional experiments with an ecological adaptive management problem indicate that IGRES is a promising tool for POMDP planning in real-world settings.", "title": "" }, { "docid": "53142f7afb27dd14ed28228014661658", "text": "BACKGROUND\nNodular hidradenoma is an uncommon, benign, adnexal neoplasm of apocrine origin which is a clinical simulator of other tumours.\n\n\nOBJECTIVE\nThe aim of this study was to evaluate the morphological findings of a large series of nodular hidradenomas under dermoscopic observation.\n\n\nMETHODS\nDermoscopic examination of 28 cases of nodular hidradenomas was performed to evaluate specific dermoscopic criteria and patterns.\n\n\nRESULTS\nThe most frequently occurring dermoscopic features were: (1) in 96.4% of cases, a homogeneous area that covered the lesion partially or totally, the colour of which was pinkish in 46.4% of cases, bluish in 28.6%, red-blue in 14.3%, and brownish in 10.7%; (2) white structures were found in 89.3% of cases; (3) in 82.1% of cases, vascular structures were also observed, especially arborising telangiectasias (39.3%) and polymorphous atypical vessels (28.6%).\n\n\nCONCLUSION\nNodular hidradenomas represent a dermoscopic pitfall, being difficult to differentiate clinically and dermoscopically from basal cell carcinomas and melanomas.", "title": "" } ]
scidocsrr
25df906ad683846900e27d80f8e83b81
Your click decides your fate: Leveraging clickstream patterns in MOOC videos to infer students' information processing and attrition behavior
[ { "docid": "74eb19a956a8910fbfd50090fb04946c", "text": "In this paper, we explore student dropout behavior in Massive Open Online Courses(MOOC). We use as a case study a recent Coursera class from which we develop a survival model that allows us to measure the influence of factors extracted from that data on student dropout rate. Specifically we explore factors related to student behavior and social positioning within discussion forums using standard social network analytic techniques. The analysis reveals several significant predictors of dropout.", "title": "" } ]
[ { "docid": "b85a6286ca2fb14a9255c9d70c677de3", "text": "0140-3664/$ see front matter 2013 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.comcom.2013.01.009 q The research leading to these results has been conducted in the SAIL project and received funding from the European Community’s Seventh Framework Program (FP7/2007-2013) under Grant Agreement No. 257448. ⇑ Corresponding author. Tel.: +49 5251 60 5385; fax: +49 5251 60 5377. E-mail addresses: cdannewitz@upb.de (C. Dannewitz), Dirk.Kutscher@neclab.eu (D. Kutscher), Borje.Ohlman@ericsson.com (B. Ohlman), stephen.farrell@cs.tcd.ie (S. Farrell), bengta@sics.se (B. Ahlgren), hkarl@upb.de (H. Karl). 1 <http://www.cisco.com/web/solutions/sp/vni/vni_mobile_forecast_highlights/ index.html>. Christian Dannewitz , Dirk Kutscher b,⇑, Börje Ohlman , Stephen Farrell , Bengt Ahlgren , Holger Karl a", "title": "" }, { "docid": "fd85e1c686c1542920dff1f0e323ed33", "text": "This index covers all technical items - papers, correspondence, reviews, etc. - that appeared in this periodical during the year, and items from previous years that were commented upon or corrected in this year. Departments and other items may also be covered if they have been judged to have archival value. The Author Index contains the primary entry for each item, listed under the first author's name. The primary entry includes the co-authors' names, the title of the paper or other item, and its location, specified by the publication abbreviation, year, month, and inclusive pagination. The Subject Index contains entries describing the item under all appropriate subject headings, plus the first author's name, the publication abbreviation, month, and year, and inclusive pages. Note that the item title is found only under the primary entry in the Author Index.", "title": "" }, { "docid": "4d2666a8aa228041895a631a83236780", "text": "Dermoscopy is a method of increasing importance in the diagnoses of cutaneous diseases. On the scalp, dermoscopic aspects have been described in psoriasis, lichen planus, seborrheic dermatitis and discoid lupus. We describe the \"comma\" and \"corkscrew hair\" dermoscopic aspects found in a child of skin type 4, with tinea capitis.", "title": "" }, { "docid": "3e3953e09f35c418316370f2318550aa", "text": "Poker is ideal for testing automated reason­ ing under uncertainty. It introduces un­ certainty both by physical randomization and by incomplete information about op­ ponents' hands. Another source of uncer­ tainty is the limited information available to construct psychological models of opponents, their tendencies to bluff, play conservatively, reveal weakness, etc. and the relation be­ tween their hand strengths and betting be­ haviour. All of these uncertainties must be assessed accurately and combined effectively for any reasonable level of skill in the game to be achieved, since good decision making is highly sensitive to those tasks. We de­ scribe our Bayesian Poker Program (BPP) , which uses a Bayesian network to model the program's poker hand, the opponent's hand and the opponent's playing behaviour con­ ditioned upon the hand, and betting curves which govern play given a probability of win­ ning. The history of play with opponents is used to improve BPP's understanding of their behaviour. We compare BPP experimentally with: a simple rule-based system; a program which depends exclusively on hand probabil­ ities (i.e., without opponent modeling); and with human players. 
BPP has shown itself to be an effective player against all these opponents, barring the better humans. We also sketch out some likely ways of improving play.", "title": "" }, { "docid": "0890227418a3fca80f280f9fa810f6a3", "text": "OBJECTIVE\nTo update the likelihood ratio for trisomy 21 in fetuses with absent nasal bone at the 11-14-week scan.\n\n\nMETHODS\nUltrasound examination of the fetal profile was carried out and the presence or absence of the nasal bone was noted immediately before karyotyping in 5918 fetuses at 11 to 13+6 weeks. Logistic regression analysis was used to examine the effect of maternal ethnic origin and fetal crown-rump length (CRL) and nuchal translucency (NT) on the incidence of absent nasal bone in the chromosomally normal and trisomy 21 fetuses.\n\n\nRESULTS\nThe fetal profile was successfully examined in 5851 (98.9%) cases. In 5223/5851 cases the fetal karyotype was normal and in 628 cases it was abnormal. In the chromosomally normal group the incidence of absent nasal bone was related first to the ethnic origin of the mother, being 2.2% for Caucasians, 9.0% for Afro-Caribbeans and 5.0% for Asians; second to fetal CRL, being 4.7% for CRL of 45-54 mm, 3.4% for CRL of 55-64 mm, 1.4% for CRL of 65-74 mm and 1% for CRL of 75-84 mm; and third to NT, being 1.6% for NT < or = 95th centile, 2.7% for NT > 95th centile-3.4 mm, 5.4% for NT 3.5-4.4 mm, 6% for NT 4.5-5.4 mm and 15% for NT > or = 5.5 mm. In the chromosomally abnormal group there was absent nasal bone in 229/333 (68.8%) cases with trisomy 21 and in 95/295 (32.2%) cases with other chromosomal defects. Logistic regression analysis demonstrated that in the chromosomally normal fetuses significant independent prediction of the likelihood of absent nasal bone was provided by CRL, NT and Afro-Caribbean ethnic group, and in the trisomy 21 fetuses by CRL and NT. The likelihood ratio for trisomy 21 for absent nasal bone was derived by dividing the likelihood in trisomy 21 by that in normal fetuses.\n\n\nCONCLUSION\nAt the 11-14-week scan the incidence of absent nasal bone is related to the presence or absence of chromosomal defects, CRL, NT and ethnic origin.", "title": "" }, { "docid": "d6f9a361208c90560344742e3fb77fa6", "text": "Lifecycle management enables enterprises to manage their products, services and product-service bundles. IoT and CPS have made products and services smarter by closing the loop of data across different phases of lifecycle. Similarly, CPS and IoT empower cities with real-time data streams from heterogeneous objects. Yet, cities are smarter and more powerful when relevant data can be exchanged between different systems across different domains. From an engineering perspective, a smart city can be seen as a System of Systems composed of interrelated/interdependent smart systems and objects. To better integrate people, processes, and systems in the smart city ecosystem, this paper discusses the use of Lifecycle Management in the smart city context. Considering the differences between ordinary and smart service systems, this paper seeks a better understanding of lifecycle aspects in the smart city context. For better understanding, some of the discussed lifecycle aspects are demonstrated in a smart parking use-case.", "title": "" }, { "docid": "68a84156f64d4d1926a52d60fc3eadf3", "text": "Parkinson's disease is a common and disabling disorder of movement owing to dopaminergic denervation of the striatum.
However, it is still unclear how this denervation perverts normal functioning to cause slowing of voluntary movements. Recent work using tissue slice preparations, animal models and in humans with Parkinson's disease has demonstrated abnormally synchronized oscillatory activity at multiple levels of the basal ganglia-cortical loop. This excessive synchronization correlates with motor deficit, and its suppression by dopaminergic therapies, ablative surgery or deep-brain stimulation might provide the basic mechanism whereby diverse therapeutic strategies ameliorate motor impairment in patients with Parkinson's disease. This review is part of the INMED/TINS special issue, Physiogenic and pathogenic oscillations: the beauty and the beast, based on presentations at the annual INMED/TINS symposium (http://inmednet.com/).", "title": "" }, { "docid": "3f268b6048d534720cac533f04c2aa7e", "text": "This paper seeks a simple, cost effective and compact gate drive circuit for bi-directional switch of matrix converter. Principals of IGBT commutation and bi-directional switch commutation in matrix converters are reviewed. Three simple IGBT gate drive circuits are presented and simulated in PSpice and simulation results are approved by experiments in the end of this paper. Paper concludes with comparative numbers of gate drive costs.", "title": "" }, { "docid": "6ac6e57937fa3d2a8e319ce17d960c34", "text": "In various application domains there is a desire to compare process models, e.g., to relate an organization-specific process model to a reference model, to find a web service matching some desired service description, or to compare some normative process model with a process model discovered using process mining techniques. Although many researchers have worked on different notions of equivalence (e.g., trace equivalence, bisimulation, branching bisimulation, etc.), most of the existing notions are not very useful in this context. First of all, most equivalence notions result in a binary answer (i.e., two processes are equivalent or not). This is not very helpful, because, in real-life applications, one needs to differentiate between slightly different models and completely different models. Second, not all parts of a process model are equally important. There may be parts of the process model that are rarely activated while other parts are executed for most process instances. Clearly, these should be considered differently. To address these problems, this paper proposes a completely new way of comparing process models. Rather than directly comparing two models, the process models are compared with respect to some typical behavior. This way we are able to avoid the two problems. Although the results are presented in the context of Petri nets, the approach can be applied to any process modeling language with executable semantics.", "title": "" }, { "docid": "e640d487052b9399bea6c0d06ce189b0", "text": "We propose a novel deep supervised neural network for the task of action recognition in videos, which implicitly takes advantage of visual tracking and shares the robustness of both deep Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN). In our method, a multi-branch model is proposed to suppress noise from background jitters. Specifically, we firstly extract multi-level deep features from deep CNNs and feed them into 3dconvolutional network. After that we feed those feature cubes into our novel joint LSTM module to predict labels and to generate attention regularization. 
We evaluate our model on two challenging datasets: UCF101 and HMDB51. The results show that our model achieves the state-of-art by only using convolutional features.", "title": "" }, { "docid": "d22c69d0c546dfb4ee5d38349bf7154f", "text": "Investigation of functional brain connectivity patterns using functional MRI has received significant interest in the neuroimaging domain. Brain functional connectivity alterations have widely been exploited for diagnosis and prediction of various brain disorders. Over the last several years, the research community has made tremendous advancements in constructing brain functional connectivity from timeseries functional MRI signals using computational methods. However, even modern machine learning techniques rely on conventional correlation and distance measures as a basic step towards the calculation of the functional connectivity. Such measures might not be able to capture the latent characteristics of raw time-series signals. To overcome this shortcoming, we propose a novel convolutional neural network based model, FCNet, that extracts functional connectivity directly from raw fMRI time-series signals. The FCNet consists of a convolutional neural network that extracts features from time-series signals and a fully connected network that computes the similarity between the extracted features in a Siamese architecture. The functional connectivity computed using FCNet is combined with phenotypic information and used to classify individuals as healthy controls or neurological disorder subjects. Experimental results on the publicly available ADHD-200 dataset demonstrate that this innovative framework can improve classification accuracy, which indicates that the features learnt from FCNet have superior discriminative power.", "title": "" }, { "docid": "0879399fcb38c103a0e574d6d9010215", "text": "We present a content-based method for recommending citations in an academic paper draft. We embed a given query document into a vector space, then use its nearest neighbors as candidates, and rerank the candidates using a discriminative model trained to distinguish between observed and unobserved citations. Unlike previous work, our method does not require metadata such as author names which can be missing, e.g., during the peer review process. Without using metadata, our method outperforms the best reported results on PubMed and DBLP datasets with relative improvements of over 18% in F1@20 and over 22% in MRR. We show empirically that, although adding metadata improves the performance on standard metrics, it favors selfcitations which are less useful in a citation recommendation setup. We release an online portal for citation recommendation based on our method,1 and a new dataset OpenCorpus of 7 million research articles to facilitate future research on this task.", "title": "" }, { "docid": "6115bdfac611e5e2423784118471fbd8", "text": "We address the problem of minimizing the communication involved in the exchange of similar documents. We consider two users, A and B, who hold documents x and y respectively. Neither of the users has any information about the other’s document. They exchange messages so that B computes x; it may be required that A compute y as well. Our goal is to design communication protocols with the main objective of minimizing the total number of bits they exchange; other objectives are minimizing the number of rounds and the complexity of internal computations. 
An important notion which determines the efficiency of the protocols is how one measures the distance between x and y. We consider several metrics for measuring this distance, namely the Hamming metric, the Levenshtein metric (edit distance), and a new LZ metric, which is introduced in this paper. We show how to estimate the distance between x and y using a single message of logarithmic size. For each metric, we present the first communication-efficient protocols, which often match the corresponding lower bounds. In consequence, we obtain error-correcting codes for these error models which correct up to d errors in n characters using O(d·polylog(n)) bits. Our most interesting methods use a new transformation from LZ distance to Hamming distance.", "title": "" }, { "docid": "76d1509549ba64157911e6b723f6ebc5", "text": "A single-stage soft-switching converter is proposed for universal line voltage applications. A boost type of active-clamp circuit is used to achieve zero-voltage switching operation of the power switches. A simple DC-link voltage feedback scheme is applied to the proposed converter. A resonant voltage-doubler rectifier helps the output diodes to achieve zero-current switching operation. The reverse-recovery losses of the output diodes can be eliminated without any additional components. The DC-link capacitor voltage can be reduced, providing reduced voltage stresses of switching devices. Furthermore, power conversion efficiency can be improved by the soft-switching operation of switching devices. The performance of the proposed converter is evaluated on a 160-W (50 V/3.2 A) experimental prototype. The proposed converter complies with International Electrotechnical Commission (IEC) 1000-3-2 Class-D requirements for the light-emitting diode power supply of large-sized liquid crystal displays, maintaining the DC-link capacitor voltage within 400 V under the universal line voltage (90-265 Vrms).", "title": "" }, { "docid": "34976e12739060a443ad0cfbb373fd3b", "text": "The detection of failures is a fundamental issue for fault-tolerance in distributed systems. Recently, many people have come to realize that failure detection ought to be provided as some form of generic service, similar to IP address lookup or time synchronization. However, this has not been successful so far; one of the reasons being the fact that classical failure detectors were not designed to satisfy several application requirements simultaneously. We present a novel abstraction, called accrual failure detectors, that emphasizes flexibility and expressiveness and can serve as a basic building block to implementing failure detectors in distributed systems. Instead of providing information of a binary nature (trust vs. suspect), accrual failure detectors output a suspicion level on a continuous scale. The principal merit of this approach is that it favors a nearly complete decoupling between application requirements and the monitoring of the environment. In this paper, we describe an implementation of such an accrual failure detector, that we call the /spl phi/ failure detector. The particularity of the /spl phi/ failure detector is that it dynamically adjusts to current network conditions the scale on which the suspicion level is expressed. We analyzed the behavior of our /spl phi/ failure detector over an intercontinental communication link over a week. 
Our experimental results show that it performs equally well as other known adaptive failure detection mechanisms, with an improved flexibility.", "title": "" }, { "docid": "aad7697ce9d9af2b49cd3a46e441ef8e", "text": "Soft pneumatic actuators (SPAs) are versatile robotic components enabling diverse and complex soft robot hardware design. However, due to inherent material characteristics exhibited by their primary constitutive material, silicone rubber, they often lack robustness and repeatability in performance. In this article, we present a novel SPA-based bending module design with shell reinforcement. The bidirectional soft actuator presented here is enveloped in a Yoshimura patterned origami shell, which acts as an additional protection layer covering the SPA while providing specific bending resilience throughout the actuator's range of motion. Mechanical tests are performed to characterize several shell folding patterns and their effect on the actuator performance. Details on design decisions and experimental results using the SPA with origami shell modules and performance analysis are presented; the performance of the bending module is significantly enhanced when reinforcement is provided by the shell. With the aid of the shell, the bending module is capable of sustaining higher inflation pressures, delivering larger blocked torques, and generating the targeted motion trajectory.", "title": "" }, { "docid": "0d8b2997f10319da3d59ec35731c8e85", "text": "In this paper, we study the performance of the IEEE 802.11 MAC protocol under a range of jammers that covers both channel-oblivious and channel-aware jamming. We study two channel-oblivious jammers: a periodic jammer that jams deterministically at a specified rate, and a memoryless jammer whose signals arrive according to a Poisson process. We also develop new models for channel-aware jamming, including a reactive jammer that only jams non-colliding transmissions and an omniscient jammer that optimally adjusts its strategy according to current states of the participating nodes. Our study comprises a theoretical analysis of the saturation throughput of 802.11 under jamming, an extensive simulation study, and a testbed to conduct real-world experimentation of jamming IEEE 802.11 using the GNU Radio and USRP platform. In our theoretical analysis, we use a discrete-time Markov chain analysis to derive formulae for the saturation throughput of IEEE 802.11 under memoryless, reactive and omniscient jamming. One of our key results is a characterization of optimal omniscient jamming that establishes a lower bound on the saturation throughput of 802.11 under arbitrary jammer attacks. We validate the theoretical analysis by means of Qualnet simulations.
Finally, we measure the real-world performance of periodic and memoryless jammers using our GNU radio jammer prototype.", "title": "" }, { "docid": "e96f455aa2c82d358eb94c72d93c8b03", "text": "OBJECTIVE\nTo evaluate the effects of mirror therapy on upper-extremity motor recovery, spasticity, and hand-related functioning of inpatients with subacute stroke.\n\n\nDESIGN\nRandomized, controlled, assessor-blinded, 4-week trial, with follow-up at 6 months.\n\n\nSETTING\nRehabilitation education and research hospital.\n\n\nPARTICIPANTS\nA total of 40 inpatients with stroke (mean age, 63.2y), all within 12 months poststroke.\n\n\nINTERVENTIONS\nThirty minutes of mirror therapy program a day consisting of wrist and finger flexion and extension movements or sham therapy in addition to conventional stroke rehabilitation program, 5 days a week, 2 to 5 hours a day, for 4 weeks.\n\n\nMAIN OUTCOME MEASURES\nThe Brunnstrom stages of motor recovery, spasticity assessed by the Modified Ashworth Scale (MAS), and hand-related functioning (self-care items of the FIM instrument).\n\n\nRESULTS\nThe scores of the Brunnstrom stages for the hand and upper extremity and the FIM self-care score improved more in the mirror group than in the control group after 4 weeks of treatment (by 0.83, 0.89, and 4.10, respectively; all P<.01) and at the 6-month follow-up (by 0.16, 0.43, and 2.34, respectively; all P<.05). No significant differences were found between the groups for the MAS.\n\n\nCONCLUSIONS\nIn our group of subacute stroke patients, hand functioning improved more after mirror therapy in addition to a conventional rehabilitation program compared with a control treatment immediately after 4 weeks of treatment and at the 6-month follow-up, whereas mirror therapy did not affect spasticity.", "title": "" }, { "docid": "d1c0b58fa78ecda169d3972eae870590", "text": "Power system stability is defined as an ability of the power system to reestablish the initial steady state or come into the new steady state after any variation of the system's operation value or after system´s breakdown. The stability and reliability of the electric power system is highly actual topic nowadays, especially in the light of recent accidents like splitting of UCTE system and north-American blackouts. This paper deals with the potential of the evaluation in term of transient stability of the electric power system within the defense plan and the definition of the basic criterion for the transient stability – Critical Clearing Time (CCT).", "title": "" }, { "docid": "05a93bfe8e245edbe2438a0dc7025301", "text": "Statistical machine translation (SMT) treats the translation of natural language as a machine learning problem. By examining many samples of human-produced translation, SMT algorithms automatically learn how to translate. SMT has made tremendous strides in less than two decades, and many popular techniques have only emerged within the last few years. This survey presents a tutorial overview of state-of-the-art SMT at the beginning of 2007. We begin with the context of the current research, and then move to a formal problem description and an overview of the four main subproblems: translational equivalence modeling, mathematical modeling, parameter estimation, and decoding. Along the way, we present a taxonomy of some different approaches within these areas. We conclude with an overview of evaluation and notes on future directions. This is a revised draft of a paper currently under review. The contents may change in later drafts. 
Please send any comments, questions, or corrections to alopez@cs.umd.edu. Feel free to cite as University of Maryland technical report UMIACS-TR-2006-47. The support of this research by the GALE program of the Defense Advanced Research Projects Agency, Contract No. HR0011-06-2-0001, ONR MURI Contract FCPO.810548265, and Department of Defense contract RD-02-5700 is acknowledged.", "title": "" } ]
scidocsrr
c42c5771e04f1b74da606b8c9a40d0d3
Diagonal scaling in Douglas-Rachford splitting and ADMM
[ { "docid": "e2a9bb49fd88071631986874ea197bc1", "text": "We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.", "title": "" } ]
[ { "docid": "fe3570c283fbf8b1f504e7bf4c2703a8", "text": "We propose ThalNet, a deep learning model inspired by neocortical communication via the thalamus. Our model consists of recurrent neural modules that send features through a routing center, endowing the modules with the flexibility to share features over multiple time steps. We show that our model learns to route information hierarchically, processing input data by a chain of modules. We observe common architectures, such as feed forward neural networks and skip connections, emerging as special cases of our architecture, while novel connectivity patterns are learned for the text8 compression task. Our model outperforms standard recurrent neural networks on several sequential benchmarks.", "title": "" }, { "docid": "7250d1ea22aac1690799089d2ba1acd5", "text": "Music plays an important part in people’s lives to regulate their emotions throughout the day. We conducted an online user study to investigate how the emotional state relates to the use of emotionally laden music. We found among 359 participants that they in general prefer emotionally laden music that correspond with their emotional state. However, when looking at personality traits, different patterns emerged. We found that when in a negative emotional state, those who scored high on openness, extraversion, and agreeableness tend to cheer themselves up with happy music, while those who scored high on neuroticism tend to increase their worry with sad music. With our results we show general patterns of music usage, but also individual differences. Our results contribute to the improvement of applications such as recommender systems in order to provide tailored recommendations based on users’ personality and emotional state.", "title": "" }, { "docid": "74c39dfd176da58b264acfd7c2260821", "text": "This non-experimental, causal study related to examine and explore the relationships among electronic service quality, customer satisfaction, electronics recovery service quality, and customer loyalty for consumer electronics e-tailers. This study adopted quota and snowball sampling. A total of 121 participants completed the online survey. Out of seven hypotheses in this study, five were supported, whereas two were not supported. Findings indicated that electronic recovery service quality had positive effect on customer loyalty. However, findings indicated that electronic recovery service quality had no effect on perceived value and customer satisfaction. Findings also indicated that perceived value and customer satisfaction were two significant variables that mediated the relationships between electronic service quality and customer loyalty. Moreover, this study found that electronic service quality had no direct effect on customer satisfaction, but had indirect positive effects on customer satisfaction for consumer electronics e-tailers. In terms of practical implications, consumer electronics e-tailers' managers could formulate a competitive strategy based on the modified Electronic Customer Relationship Re-Establishment model to retain current customers and to enhance customer relationship management (CRM). The limitations and recommendations for future research were also included in this study.", "title": "" }, { "docid": "ef6d25f1fc67962876100301d8bdb6a5", "text": "The Strategic Information Systems Planning (SISP) process is critical for ensuring the effectiveness of the contribution of Information Technology (IT)/Information Systems (IS) to the organisation. 
A sophisticated SISP process can greatly increase the chances of positive planning outcomes. While effective IS capabilities are seen as crucial to an organisation’s ability to generate IT-enabled competitive advantages, there exists a gap in the understanding of the IS competencies which contribute to the forming of an effective SISP capability. In light of these gaps, this study investigates how do IS competencies impact the SISP process, and its outcomes? To address this question, a model for investigating the impact of IS collaboration and IS personnel competencies on the SISP process is proposed. Further research is planned to undertake a survey of top Australian organisations in industries characterised by high IT innovation and competition, to test the proposed model and hypotheses.", "title": "" }, { "docid": "4d089acf0f7e1bae074fc4d9ad8ee7e3", "text": "The consequences of exodontia include alveolar bone resorption and ultimately atrophy to basal bone of the edentulous site/ridges. Ridge resorption proceeds quickly after tooth extraction and significantly reduces the possibility of placing implants without grafting procedures. The aims of this article are to describe the rationale behind alveolar ridge augmentation procedures aimed at preserving or minimizing the edentulous ridge volume loss. Because the goal of these approaches is to preserve bone, exodontia should be performed to preserve as much of the alveolar process as possible. After severance of the supra- and subcrestal fibrous attachment using scalpels and periotomes, elevation of the tooth frequently allows extraction with minimal socket wall damage. Extraction sockets should not be acutely infected and be completely free of any soft tissue fragments before any grafting or augmentation is attempted. Socket bleeding that mixes with the grafting material seems essential for success of this procedure. Various types of bone grafting materials have been suggested for this purpose, and some have shown promising results. Coverage of the grafted extraction site with wound dressing materials, coronal flap advancement, or even barrier membranes may enhance wound stability and an undisturbed healing process. Future controlled clinical trials are necessary to determine the ideal regimen for socket augmentation.", "title": "" }, { "docid": "553980e1d2432d1d27f84f8edcfc81bc", "text": "The home of the future should be a smart one, to support us in our daily life. Up to now only a few security incidents in that area are known. Depending on different security analyses, this fact is rather a result of the low spread of Smart Home products than the success of such systems security. Given that Smart Homes become more and more popular, we will consider current incidents and analyses to estimate potential security threats in the future. The definitions of a Smart Home drift widely apart. Thus we first need to define Smart Home for ourselves and additionally provide a way to categorize the big mass of products into smaller groups.", "title": "" }, { "docid": "f2c8af1f4bcf7115fc671ae9922adbb3", "text": "Extracting insights from temporal event sequences is an important challenge. In particular, mining frequent patterns from event sequences is a desired capability for many domains. However, most techniques for mining frequent patterns are ineffective for real-world data that may be low-resolution, concurrent, or feature many types of events, or the algorithms may produce results too complex to interpret. 
To address these challenges, we propose Frequence, an intelligent user interface that integrates data mining and visualization in an interactive hierarchical information exploration system for finding frequent patterns from longitudinal event sequences. Frequence features a novel frequent sequence mining algorithm to handle multiple levels-of-detail, temporal context, concurrency, and outcome analysis. Frequence also features a visual interface designed to support insights, and support exploration of patterns of the level-of-detail relevant to users. Frequence's effectiveness is demonstrated with two use cases: medical research mining event sequences from clinical records to understand the progression of a disease, and social network research using frequent sequences from Foursquare to understand the mobility of people in an urban environment.", "title": "" }, { "docid": "9668d1cc357a70780282dfdfe9ed4bda", "text": "A challenge in estimating students’ changing knowledge from sequential observations of their performance arises when each observed step involves multiple subskills. To overcome this mismatch in grain size between modelled skills and observed actions, we use logistic regression over each step’s subskills in a dynamic Bayes net (LR-DBN) to model transition probabilities for the overall knowledge required by the step. Unlike previous methods, LR-DBN can trace knowledge of the individual subskills without assuming they are independent. We evaluate how well it fits children’s oral reading fluency data logged by Project LISTEN’s Reading Tutor, compared to other methods.", "title": "" }, { "docid": "540063344df0b56fcc99bf8572e5e4d2", "text": "Groups play an essential role in many social websites which promote users' interactions and accelerate the diffusion of information. Recommending groups that users are really interested to join is significant for both users and social media. While traditional group recommendation problem has been extensively studied, we focus on a new type of the problem, i.e., event-based group recommendation. Unlike the other forms of groups, users join this type of groups mainly for participating offline events organized by group members or inviting other users to attend events sponsored by them. These characteristics determine that previously proposed approaches for group recommendation cannot be adapted to the new problem easily as they ignore the geographical influence and other explicit features of groups and users.\n In this paper, we propose a method called Pairwise Tag enhAnced and featuRe-based Matrix factorIzation for Group recommendAtioN (PTARMIGAN), which considers location features, social features, and implicit patterns simultaneously in a unified model. More specifically, we exploit matrix factorization to model interactions between users and groups. Meanwhile, we incorporate their profile information into pairwise enhanced latent factors respectively. We also utilize the linear model to capture explicit features. Due to the reinforcement between explicit features and implicit patterns, our approach can provide better group recommendations. 
We conducted a comprehensive performance evaluation on real word data sets and the experimental results demonstrate the effectiveness of our method.", "title": "" }, { "docid": "fb836666c993b27b99f6c789dd0aae05", "text": "Software transactions have received significant attention as a way to simplify shared-memory concurrent programming, but insufficient focus has been given to the precise meaning of software transactions or their interaction with other language features. This work begins to rectify that situation by presenting a family of formal languages that model a wide variety of behaviors for software transactions. These languages abstract away implementation details of transactional memory, providing high-level definitions suitable for programming languages. We use small-step semantics in order to represent explicitly the interleaved execution of threads that is necessary to investigate pertinent issues.\n We demonstrate the value of our core approach to modeling transactions by investigating two issues in depth. First, we consider parallel nesting, in which parallelism and transactions can nest arbitrarily. Second, we present multiple models for weak isolation, in which nontransactional code can violate the isolation of a transaction. For both, type-and-effect systems let us soundly and statically restrict what computation can occur inside or outside a transaction. We prove some key language-equivalence theorems to confirm that under sufficient static restrictions, in particular that each mutable memory location is used outside transactions or inside transactions (but not both), no program can determine whether the language implementation uses weak isolation or strong isolation.", "title": "" }, { "docid": "0e55e64ddc463d0ea151de8efe40183f", "text": "Vehicular networking has become a significant research area due to its specific features and applications such as standardization, efficient traffic management, road safety and infotainment. Vehicles are expected to carry relatively more communication systems, on board computing facilities, storage and increased sensing power. Hence, several technologies have been deployed to maintain and promote Intelligent Transportation Systems (ITS). Recently, a number of solutions were proposed to address the challenges and issues of vehicular networks. Vehicular Cloud Computing (VCC) is one of the solutions. VCC is a new hybrid technology that has a remarkable impact on traffic management and road safety by instantly using vehicular resources, such as computing, storage and internet for decision making. This paper presents the state-of-the-art survey of vehicular cloud computing. Moreover, we present a taxonomy for vehicular cloud in which special attention has been devoted to the extensive applications, cloud formations, key management, inter cloud communication systems, and broad aspects of privacy and security issues. Through an extensive review of the literature, we design an architecture for VCC, itemize the properties required in vehicular cloud that support this model. We compare this mechanism with normal Cloud Computing (CC) and discuss open research issues and future directions. By reviewing and analyzing literature, we found that VCC is a technologically feasible and economically viable technological shifting paradigm for converging intelligent vehicular networks towards autonomous traffic, vehicle control and perception systems. 
& 2013 Published by Elsevier Ltd.", "title": "" }, { "docid": "1ddbe5990a1fc4fe22a9788c77307a9f", "text": "The DENDRAL and Meta-DENDRAL programs are products of a large, interdisciplinary group of Stanford University scientists concerned with many and highly varied aspects of the mechanization ofscientific reasoningand theformalization of scientific knowledge for this purpose. An early motivation for our work was to explore the power of existing AI methods, such as heuristic search, for reasoning in difficult scientific problems [7]. Another concern has been to exploit the AI methodology to understand better some fundamental questions in the philosophy of science, for example the processes by which explanatory hypotheses are discovered or judged adequate [18]. From the start, the project has had an applications dimension [9, 10, 27]. It has sought to develop \"expert level\" agents to assist in the solution ofproblems in their discipline that require complex symbolic reasoning. The applications dimension is the focus of this paper. In order to achieve high performance, the DENDRAL programs incorporate large amounts ofknowledge about the area of science to which they are applied, structure elucidation in organic chemistry. A \"smart assistant\" for a chemist needs tobe able toperform many tasks as well as an expert, but need not necessarily understand the domain at the same theoretical level as the expert. The over-all structure elucidation task is described below (Section 2) followed by a description of the role of the DENDRAL programs within that framework (Section 3). The Meta-DENDRAL programs (Section 4) use a weaker body of knowledge about the domain ofmass spectrometry because their task is to formulate rules of mass spectrometry by induction from empirical data. A strong model of the domain would bias therules unnecessarily.", "title": "" }, { "docid": "d7fa1b087984c9fa077a97eb064ec56b", "text": "Accelerators and heterogeneous architectures in general, and GPUs in particular, have recently emerged as major players in high performance computing. For many classes of applications, MapReduce has emerged as the framework for easing parallel programming and improving programmer productivity. There have already been several efforts on implementing MapReduce on GPUs.\n In this paper, we propose a new implementation of MapReduce for GPUs, which is very effective in utilizing shared memory, a small programmable cache on modern GPUs. The main idea is to use a reduction-based method to execute a MapReduce application. The reduction-based method allows us to carry out reductions in shared memory. To support a general and efficient implementation, we support the following features: a memory hierarchy for maintaining the reduction object, a multi-group scheme in shared memory to trade-off space requirements and locking overheads, a general and efficient data structure for the reduction object, and an efficient swapping mechanism.\n We have evaluated our framework with seven commonly used MapReduce applications and compared it with the sequential implementations, MapCG, a recent MapReduce implementation on GPUs, and Ji et al.'s work, a recent MapReduce implementation that utilizes shared memory in a different way. The main observations from our experimental results are as follows. For four of the seven applications that can be considered as reduction-intensive applications, our framework has a speedup of between 5 and 200 over MapCG (for large datasets). 
Similarly, we achieved a speedup of between 2 and 60 over Ji et al.'s work.", "title": "" }, { "docid": "3745ead7df976976f3add631ad175930", "text": "Natural products and traditional medicines are of great importance. Such forms of medicine as traditional Chinese medicine, Ayurveda, Kampo, traditional Korean medicine, and Unani have been practiced in some areas of the world and have blossomed into orderly-regulated systems of medicine. This study aims to review the literature on the relationship among natural products, traditional medicines, and modern medicine, and to explore the possible concepts and methodologies from natural products and traditional medicines to further develop drug discovery. The unique characteristics of theory, application, current role or status, and modern research of eight kinds of traditional medicine systems are summarized in this study. Although only a tiny fraction of the existing plant species have been scientifically researched for bioactivities since 1805, when the first pharmacologically-active compound morphine was isolated from opium, natural products and traditional medicines have already made fruitful contributions for modern medicine. When used to develop new drugs, natural products and traditional medicines have their incomparable advantages, such as abundant clinical experiences, and their unique diversity of chemical structures and biological activities.", "title": "" }, { "docid": "68de3b6f111b61cdf1babd4acbe5467d", "text": "Music recommender systems are lately seeing a sharp increase in popularity due to many novel commercial music streaming services. Most systems, however, do not decently take their listeners into account when recommending music items. In this note, we summarize our recent work and report our latest findings on the topics of tailoring music recommendations to individual listeners and to groups of listeners sharing certain characteristics. We focus on two tasks: context-aware automatic playlist generation (also known as serial recommendation) using sensor data and music artist recommendation using social media data.", "title": "" }, { "docid": "4c5ed8940b888a4eb2abc5791afd5a36", "text": "A low-gain antenna (LGA) is designed for high cross-polarization discrimination (XPD) and low backward radiation within the 8.025-8.4-GHz frequency band to mitigate cross-polarization and multipath interference given the spacecraft layout constraints. The X-band choke ring horn was optimized, fabricated, and measured. The antenna gain remains higher than 2.5 dBi for angles between 0° and 60° off-boresight. The XPD is higher than 15 dB from 0° to 40° and higher than 20 dB from 40° to 60° off-boresight. The calculated and measured data are in excellent agreement.", "title": "" }, { "docid": "bfac4c835d49bef4ad961b8e324c4559", "text": "We describe a new annotation scheme for formalizing relation structures in research papers. The scheme has been developed through the investigation of computer science papers. Using the scheme, we are building a Japanese corpus to help develop information extraction systems for digital libraries. We report on the outline of the annotation scheme and on annotation experiments conducted on research abstracts from the IPSJ Journal.", "title": "" }, { "docid": "8439f9d3e33fdbc43c70f1d46e2e143e", "text": "Redacting text documents has traditionally been a mostly manual activity, making it expensive and prone to disclosure risks. 
This paper describes a semi-automated system to ensure a specified level of privacy in text data sets. Recent work has attempted to quantify the likelihood of privacy breaches for text data. We build on these notions to provide a means of obstructing such breaches by framing it as a multi-class classification problem. Our system gives users fine-grained control over the level of privacy needed to obstruct sensitive concepts present in that data. Additionally, our system is designed to respect a user-defined utility metric on the data (such as disclosure of a particular concept), which our methods try to maximize while anonymizing. We describe our redaction framework, algorithms, as well as a prototype tool built in to Microsoft Word that allows enterprise users to redact documents before sharing them internally and obscure client specific information. In addition we show experimental evaluation using publicly available data sets that show the effectiveness of our approach against both automated attackers and human subjects.The results show that we are able to preserve the utility of a text corpus while reducing disclosure risk of the sensitive concept.", "title": "" } ]
scidocsrr
94d99487663bf6cc64ddad98ead3ae35
Invariant characterization of DOVID security features using a photometric descriptor
[ { "docid": "bb1b05062588056da569a4c15e669875", "text": "Holograms are used frequently in creating fraud resistant security documents, such as passports, ID cards or banknotes. The key contribution of this paper is a real time method to automatically detect holograms in images acquired with a standard smartphone. Following a robust algorithm for creating a tracking target, our approach evaluates an evolving stack of observations of the document to automatically determine the location and size of holograms. We demonstrate the plausibility of our method using a variety of security documents, which are common in everyday use. Finally, we show how suitable data can be captured in a novel mobile gaming experience and draw the link between serious applications and entertainment.", "title": "" } ]
[ { "docid": "d6dba7a89bc123bc9bb616df6faee2bc", "text": "Continuing interest in digital games indicated that it would be useful to update [Authors’, 2012] systematic literature review of empirical evidence about the positive impacts an d outcomes of games. Since a large number of papers was identified in th e period from 2009 to 2014, the current review focused on 143 papers that provided higher quality evidence about the positive outcomes of games. [Authors’] multidimensional analysis of games and t heir outcomes provided a useful framework for organising the varied research in this area. The mo st frequently occurring outcome reported for games for learning was knowledge acquisition, while entertain me t games addressed a broader range of affective, behaviour change, perceptual and cognitive and phys iological outcomes. Games for learning were found across varied topics with STEM subjects and health the most popular. Future research on digital games would benefit from a systematic programme of experi m ntal work, examining in detail which game features are most effective in promoting engagement and supporting learning.", "title": "" }, { "docid": "8f91beade67a248cc0c063db42caabec", "text": "c:nt~ now, true videwon-dernaad can ody be atievsd hg a dedicated data flow for web service request. This brute force approach is probibitivdy &\\Tensive. Using mtiticast w si@cantly reduce the system rest. This solution, however, mu~t dday services in order to serve many requ~s as a hztch. h this paper, we consider a third alternative ded Pat&ing. h our technique, an e*mg mtiticast m expand dynarnidy to serve new &ents. ~otig new &ents to join an existiig rutiticast improves the ficiency of the rntiti-.. ~hermor~ since W requ~s can be served immediatdy, the &ents experience no service dday md true vide+on-dem~d ~ be achieve~ A si~cant contribution of tkis work, is making mdtiwork for true vide~ on-demand ssrvicw. h fact, we are able to tiate the service latency and improve the efficiency of mtiticast at the same time To assms the ben~t of this sdetne, w perform simdations to compare its performance +th that of standard rntiti-. Our simtiation rats indicate convincingly that Patching offers .wbstanti~y better perforrnace.", "title": "" }, { "docid": "ac3ed36f4253525ff54bf4b0931479fc", "text": "This paper presents a design for a high-efficiency power amplifier with an output power of more than 100W, and an ultra-broad bandwidth from 10 to 500MHz. The amplifier has a 4-way push-pull configuration using Guanella's 1∶1 transmission line transformer. A negative feedback network is adopted to make the power gain flat enough over the operating bandwidth. The implemented power amplifier exhibits a power gain of 29.2±1.8dB from 10 to 500MHz band with its power-added efficiency (PAE) being greater than 43%, and the second-and third-harmonic distortions are below −29dBc and −9.78dBc, respectively, at an output power of 100W over the entire frequency band.", "title": "" }, { "docid": "dec0ff25de96faef92f9221085aba523", "text": "Atopic dermatitis (AD) is characterized by allergic skin inflammation. A hallmark of AD is dry itchy skin due, at least in part, to defects in skin genes that are important for maintaining barrier function. The pathogenesis of AD remains incompletely understood. Since the description of the Nc/Nga mouse as a spontaneously occurring model of AD, a number of other mouse models of AD have been developed. 
They can be categorized into three groups: (1) models induced by epicutaneous application of sensitizers; (2) transgenic mice that either overexpress or lack selective molecules; (3) mice that spontaneously develop AD-like skin lesions. These models have resulted in a better understanding of the pathogenesis of AD. This review discusses these models and emphasizes the role of mechanical skin injury and skin barrier dysfunction in eliciting allergic skin inflammation.", "title": "" }, { "docid": "ac7789e3e36716496ed01800f4099412", "text": "Dietary assessment is essential for understanding the link between diet and health. We develop a context based image analysis system for dietary assessment to automatically segment, identify and quantify food items from images. In this paper, we describe image segmentation and object classification methods used in our system to detect and identify food items. We then use context information to refine the classification results. We define contextual dietary information as the data that is not directly produced by the visual appearance of an object in the image, but yields information about a user’s diet or can be used for diet planning. We integrate contextual dietary information that a user supplies to the system either explicitly or implicitly to correct potential misclassifications. We evaluate our models using food image datasets collected during dietary assessment studies from natural eating events.", "title": "" }, { "docid": "dd5f7e40cda2967f5174b2706500e9f4", "text": "Due to the complexity of Service-Oriented Architecture (SOA), cost and effort estimation for SOA-based software development is more difficult than that for traditional software development. Unfortunately, there is a lack of published work about cost and effort estimation for SOA-based software. Existing cost estimation approaches are inadequate to address the complex service-oriented systems. This paper proposes a novel framework based on Divide-and-Conquer (D&C) for cost estimation for building SOA-based software. By dealing with separately development parts, the D&C framework can help organizations simplify and regulate SOA implementation cost estimation. Furthermore, both cost estimation modeling and software sizing work can be satisfied respectively by switching the corresponding metrics within this framework. Given the requirement of developing these metrics, this framework also defines the future research in four different directions according to the separate cost estimation sub-problems.", "title": "" }, { "docid": "49575576bc5a0b949c81b0275cbc5f41", "text": "From email to online banking, passwords are an essential component of modern internet use. Yet, users do not always have good password security practices, leaving their accounts vulnerable to attack. We conducted a study which combines self-report survey responses with measures of actual online behavior gathered from 134 participants over the course of six weeks. We find that people do tend to re-use each password on 1.7–3.4 different websites, they reuse passwords that are more complex, and mostly they tend to re-use passwords that they have to enter frequently. We also investigated whether self-report measures are accurate indicators of actual behavior, finding that though people understand password security, their self-reported intentions have only a weak correlation with reality. 
These findings suggest that users manage the challenge of having many passwords by choosing a complex password on a website where they have to enter it frequently in order to memorize that password, and then re-using that strong password across other websites.", "title": "" }, { "docid": "4029bbbff0c115c8bf8c787cafc72ae0", "text": "In recent times, data is growing rapidly in every domain such as news, social media, banking, education, etc. Due to the excessiveness of data, there is a need of automatic summarizer which will be capable to summarize the data especially textual data in original document without losing any critical purposes. Text summarization is emerged as an important research area in recent past. In this regard, review of existing work on text summarization process is useful for carrying out further research. In this paper, recent literature on automatic keyword extraction and text summarization are presented since text summarization process is highly depend on keyword extraction. This literature includes the discussion about different methodology used for keyword extraction and text summarization. It also discusses about different databases used for text summarization in several domains along with evaluation matrices. Finally, it discusses briefly about issues and research challenges faced by researchers along with future direction.", "title": "" }, { "docid": "8027856b5e9fd0112a6b9950b2901ba5", "text": "In order to make the Web services, Web applications in Java more powerful, flexible and user friendly, building unified Web applications is very significant. By introducing a new style-Representational State Transfer, this paper studied the goals and design principles of REST, the idea of REST and RESTful Web service design principles, RESTful style Web service, RESTful Web service frameworks in Java and the ways to develop RESTful Web service. The RESTful Web Service frameworks in Java can effectively simplify the Web development in many aspects.", "title": "" }, { "docid": "b0903440893a25a91c575fd96b5524fa", "text": "With the intention of extending the perception and action of surgical staff inside the operating room, the medical community has expressed a growing interest towards context-aware systems. Requiring an accurate identification of the surgical workflow, such systems make use of data from a diverse set of available sensors. In this paper, we propose a fully data-driven and real-time method for segmentation and recognition of surgical phases using a combination of video data and instrument usage signals, exploiting no prior knowledge. We also introduce new validation metrics for assessment of workflow detection. The segmentation and recognition are based on a four-stage process. Firstly, during the learning time, a Surgical Process Model is automatically constructed from data annotations to guide the following process. Secondly, data samples are described using a combination of low-level visual cues and instrument information. Then, in the third stage, these descriptions are employed to train a set of AdaBoost classifiers capable of distinguishing one surgical phase from others. Finally, AdaBoost responses are used as input to a Hidden semi-Markov Model in order to obtain a final decision. On the MICCAI EndoVis challenge laparoscopic dataset we achieved a precision and a recall of 91 % in classification of 7 phases. 
Compared to the analysis based on one data type only, a combination of visual features and instrument signals allows better segmentation, reduction of the detection delay and discovery of the correct phase order.", "title": "" }, { "docid": "40d2b1e5b12a3239aed16cd1691037a2", "text": "Identifiers in programs contain semantic information that might be leveraged to build tools that help programmers write code. This work explores using RNN models to predict Haskell type signatures given the name of the entity being typed. A large corpus of real-world type signatures is gathered from online sources for training and evaluation. In real-world Haskell files, the same type signature is often immediately repeated for a new name. To attempt to take advantage of this repetition, a varying attention mechanism was developed and evaluated. The RNN models explored show some facility at predicting type signature structure from the name, but not the entire signature. The varying attention mechanism provided little gain.", "title": "" }, { "docid": "3cff653dc452df2163d7cc67cf9e0dd6", "text": "In this paper we propose the construction of linguistic descriptions of images. This is achieved through the extraction of scene description graphs (SDGs) from visual scenes using an automatically constructed knowledge base. SDGs are constructed using both vision and reasoning. Specifically, commonsense reasoning1 is applied on (a) detections obtained from existing perception methods on given images, (b) a “commonsense” knowledge base constructed using natural language processing of image annotations and (c) lexical ontological knowledge from resources such as WordNet. Amazon Mechanical Turk(AMT)-based evaluations on Flickr8k, Flickr30k and MS-COCO datasets show that in most cases, sentences auto-constructed from SDGs obtained by our method give a more relevant and thorough description of an image than a recent state-of-the-art image caption based approach. Our Image-Sentence Alignment Evaluation results are also comparable to that of the recent state-of-the art approaches.", "title": "" }, { "docid": "adf2c205a9d14e285ac590dfd216106b", "text": "In the system in which we are currently inserted, happiness and work seem to be completely exclusive words and without any possibility of association. The following study intends to show that it is possible to find a balance between happiness and work (Csikszentmihalyi in Gestão qualificada: a conexão entre felicidade e negócio, Bookman, Porto Alegre, 2004), by diagnosing the level of happiness shown by people in their working environment’s, proposing solutions to enhance happiness and productive in certain enterprise’s staff (Tidd and Bessant in Innovation and entrepreneurship, Wiley, London, 2007; Gestão da Inovação, Bookman, Porto Alegre, 2008). It is necessary, hence, an analysis beyond the happy-unhappy dichotomy when observing the different happiness levels presented in the routine and in the working hours, based on the interactions between creativity and innovation (Amabile in KEYS to creativity and innovation: user’s guide, Center For Creative Leadership, Greensboro, 2010; Sawyer in Explaining creativity: the science of human innovation, 2nd edn, Oxford University Press, Cambridge, 2012; West and Richter in Handbook of organizational creativity, 1st edn, Taylor and Francis, New York, 2008). 
Furthermore, this article shows that it is essential to combine the present’s happiness to a long-term project, an optimistic vision for the future (James and Drown in Handbook of organizational creativity, 1st edn, Elsevier, San Diego, 2012). To evaluate the work and a person’s life, this article develops a Multi Criteria Model of Work Organization and Evaluation and a Map of the Corporate Happiness Levels as a conscious path through workplace, collaboration in the enterprise, the marketplace and society, up to the personal and social life of a subject (Kamel in Artesão da minha própria felicidade, 1st edn. E-papers Serviços Editoriais Ltda, Rio de Janeiro, 2007). It offers ten practical recommendations to raise corporation rates of happiness. Accordingly, this paper proves its relevance by offering reference material to professionals and enterprises who search changes in its current personal management policy, willing to move towards a society with increasingly fulfilled professionals, happy and productive in their own employment.", "title": "" }, { "docid": "89dea4ec4fd32a4a61be184d97ae5ba6", "text": "In this paper, we propose Generative Adversarial Network (GAN) architectures that use Capsule Networks for image-synthesis. Based on the principal of positionalequivariance of features, Capsule Network’s ability to encode spatial relationships between the features of the image helps it become a more powerful critic in comparison to Convolutional Neural Networks (CNNs) used in current architectures for image synthesis. Our proposed GAN architectures learn the data manifold much faster and therefore, synthesize visually accurate images in significantly lesser number of training samples and training epochs in comparison to GANs and its variants that use CNNs. Apart from analyzing the quantitative results corresponding the images generated by different architectures, we also explore the reasons for the lower coverage and diversity explored by the GAN architectures that use CNN critics.", "title": "" }, { "docid": "eb8d1663cf6117d76a6b61de38b55797", "text": "Many security experts would agree that, had it not been for mobile configurations, the synthesis of online algorithms might never have occurred. In fact, few computational biologists would disagree with the evaluation of von Neumann machines. We construct a peer-to-peer tool for harnessing Smalltalk, which we call TalmaAment.", "title": "" }, { "docid": "687578295dc3cbfeac923c67d606b7c0", "text": "In this paper we introduce a flexible, side-fed, ultra-wideband, spiral antenna with an integrated microstrip tapered infinite balun operating from 1-5GHz. It was fabricated using Rogers 5880 subtrate and specially coated to prevent oxidization. The simulation and measurements match quite well with S11 below -10dB throughout the frequency range and gain of close to 5dB and average efficiency of 65%. Most importantly, this antenna does not have an obtrusive center feed, and instead has a side feed, is flexible, compact, rendering many possible wearable antenna applications.", "title": "" }, { "docid": "ab1e4a8b0a4d00af488923ea52053aee", "text": "This paper describes Steve, an animated agent that helps students learn to perform physical, procedural tasks. The student and Steve cohabit a three-dimensional, simulated mock-up of the student's work environment. Steve can demonstrate how to perform tasks and can also monitor students while they practice tasks, providing assistance when needed. 
This paper describes Steve's architecture in detail, including perception, cognition, and motor control. The perception module monitors the state of the virtual world, maintains a coherent representation of it, and provides this information to the cognition and motor control modules. The cognition module interprets its perceptual input, chooses appropriate goals, constructs and executes plans to achieve those goals, and sends out motor commands. The motor control module implements these motor commands, controlling Steve's voice, locomotion, gaze, and gestures, and allowing Steve to manipulate objects in the virtual world.", "title": "" }, { "docid": "682b3d97bdadd988b0a21d5dd6774fbc", "text": "WTF (\"Who to Follow\") is Twitter's user recommendation service, which is responsible for creating millions of connections daily between users based on shared interests, common connections, and other related factors. This paper provides an architectural overview and shares lessons we learned in building and running the service over the past few years. Particularly noteworthy was our design decision to process the entire Twitter graph in memory on a single server, which significantly reduced architectural complexity and allowed us to develop and deploy the service in only a few months. At the core of our architecture is Cassovary, an open-source in-memory graph processing engine we built from scratch for WTF. Besides powering Twitter's user recommendations, Cassovary is also used for search, discovery, promoted products, and other services as well. We describe and evaluate a few graph recommendation algorithms implemented in Cassovary, including a novel approach based on a combination of random walks and SALSA. Looking into the future, we revisit the design of our architecture and comment on its limitations, which are presently being addressed in a second-generation system under development.", "title": "" }, { "docid": "db04a402e0c7d93afdaf34c0d55ded9a", "text": " Drowsiness and increased tendency to fall asleep during daytime is still a generally underestimated problem. An increased tendency to fall asleep limits the efficiency at work and substantially increases the risk of accidents. Reduced alertness is difficult to assess, particularly under real life settings. Most of the available measuring procedures are laboratory-oriented and their applicability under field conditions is limited; their validity and sensitivity are often a matter of controversy. The spontaneous eye blink is considered to be a suitable ocular indicator for fatigue diagnostics. To evaluate eye blink parameters as a drowsiness indicator, a contact-free method for the measurement of spontaneous eye blinks was developed. An infrared sensor clipped to an eyeglass frame records eyelid movements continuously. In a series of sessions with 60 healthy adult participants, the validity of spontaneous blink parameters was investigated. The subjective state was determined by means of questionnaires immediately before the recording of eye blinks. The results show that several parameters of the spontaneous eye blink can be used as indicators in fatigue diagnostics. The parameters blink duration and reopening time in particular change reliably with increasing drowsiness. Furthermore, the proportion of long closure duration blinks proves to be an informative parameter. 
The results demonstrate that the measurement of eye blink parameters provides reliable information about drowsiness/sleepiness, which may also be applied to the continuous monitoring of the tendency to fall asleep.", "title": "" }, { "docid": "f635a83b4e1a19e07bd61406d5fcb3f4", "text": "Cloud computing is an attractive computing model since it allows for resources to be provisioned on a demand basis, i.e., cloud users can rent resources as they become necessary. This model motivated several academic and non-academic institutions to develop open-source cloud solutions. This paper presents and discusses the state-of-the-art of open-source solutions for cloud computing. The authors hope that the observation and classification of such solutions can leverage the cloud computing research area, providing a good starting point to cope with some of the problems present in cloud computing environments.", "title": "" } ]
scidocsrr
47c80f958eef4cb10cc1b6470162f6eb
Bosphorus Database for 3D Face Analysis
[ { "docid": "62e90693f722fe79e3b2f18719325550", "text": "Non-rigid surface registration, particularly registration of human faces, finds a wide variety of applications in computer vision and graphics. We present a new automatic surface registration method which utilizes both attraction forces originating from geometrical and textural similarities, and stresses due to non-linear elasticity of the surfaces. Reference and target surfaces are first mapped onto their feature image planes, then these images are registered by subjecting them to local deformations, and finally 3D correspondences are established. Surfaces are assumed to be elastic sheets and are represented by triangular meshes. The internal elastic forces act as a regularizer in this ill-posed problem. Furthermore, the non-linear elasticity model allows us to handle large deformations, which can be essential, for instance, for facial expressions. The method has been tested successfully on 3D scanned human faces, with and without expressions. The algorithm runs quite efficiently using a multiresolution approach.", "title": "" }, { "docid": "ae3a54128bb29272e5cb3552236b6f12", "text": "Traditionally, human facial expressions have been studied using either 2D static images or 2D video sequences. The 2D-based analysis is incapable of handing large pose variations. Although 3D modeling techniques have been extensively used for 3D face recognition and 3D face animation, barely any research on 3D facial expression recognition using 3D range data has been reported. A primary factor for preventing such research is the lack of a publicly available 3D facial expression database. In this paper, we present a newly developed 3D facial expression database, which includes both prototypical 3D facial expression shapes and 2D facial textures of 2,500 models from 100 subjects. This is the first attempt at making a 3D facial expression database available for the research community, with the ultimate goal of fostering the research on affective computing and increasing the general understanding of facial behavior and the fine 3D structure inherent in human facial expressions. The new database can be a valuable resource for algorithm assessment, comparison and evaluation", "title": "" } ]
[ { "docid": "e043ddde866f3c5b93f1732cd87c9932", "text": "To explore the potential of training complex deep neural networks (DNNs) on other commercial chips rather than GPUs, we report our work on swDNN, which is a highly-efficient library for accelerating deep learning applications on the newly announced world-leading supercomputer, Sunway TaihuLight. Targeting SW26010 processor, we derive a performance model that guides us in the process of identifying the most suitable approach for mapping the convolutional neural networks (CNNs) onto the 260 cores within the chip. By performing a systematic optimization that explores major factors, such as organization of convolution loops, blocking techniques, register data communication schemes, as well as reordering strategies for the two pipelines of instructions, we manage to achieve a double-precision performance over 1.6 Tflops for the convolution kernel, achieving 54% of the theoretical peak. Compared with Tesla K40m with cuDNNv5, swDNN results in 1.91-9.75x performance speedup in an evaluation with over 100 parameter configurations.", "title": "" }, { "docid": "56e9859575931b0d4f56210166cb8a77", "text": "Our objective was to confirm that it is feasible to take images of the male and female genitals during coitus and to compare this present study with previous theories and recent radiological studies of the anatomy during sexual intercourse. Magnetic resonance imaging was used to study the anatomy of the male and female genitals during coitus. Three experiments were performed with one couple in two positions and after male ejaculation. The images obtained confirmed that during intercourse in the missionary position, the penis reaches the anterior fornix with preferential contact of the anterior vaginal wall. The posterior bladder wall was pushed forward and upward and the uterus was pushed upward and backward. The images obtained from the rear-entry position showed for the first time that the penis seems to reach the posterior fornix with preferential contact of the posterior vaginal wall. In this position, the bladder and uterus were pushed forward. A different preferential contact of the penis with the female genitals was observed with each position. These images could contribute to a better understanding of the anatomy of sexual intercourse.", "title": "" }, { "docid": "0db3b59ed92aa200f3cb8f01540fa6fe", "text": "Epidemiological studies have shown protective effects of fruits and vegetables (F&V) in lowering the risk of developing cardiovascular diseases (CVD) and cancers. Plant-derived dietary fibre (non-digestible polysaccharides) and/or flavonoids may mediate the observed protective effects particularly through their interaction with the gut microbiota. The aim of this study was to assess the impact of fruit and vegetable (F&V) intake on gut microbiota, with an emphasis on the role of flavonoids, and further to explore relationships between microbiota and factors associated with CVD risk. In the study, a parallel design with 3 study groups, participants in the two intervention groups representing high-flavonoid (HF) and low flavonoid (LF) intakes were asked to increase their daily F&V intake by 2, 4 and 6 portions for a duration of 6 weeks each, while a third (control) group continued with their habitual diet. Faecal samples were collected at baseline and after each dose from 122 subjects. Faecal bacteria enumeration was performed by fluorescence in situ hybridisation (FISH). 
Correlations of dietary components, flavonoid intake and markers of CVD with bacterial numbers were also performed. A significant dose × treatment interaction was only found for Clostridium leptum-Ruminococcus bromii/flavefaciens with a significant increase after intake of 6 additional portions in the LF group. Correlation analysis of the data from all 122 subjects independent from dietary intervention indicated an inhibitory role of F&V intake, flavonoid content and sugars against the growth of potentially pathogenic clostridia. Additionally, we observed associations between certain bacterial populations and CVD risk factors including plasma TNF-α, plasma lipids and BMI/waist circumference.", "title": "" }, { "docid": "48096a9a7948a3842afc082fa6e223a6", "text": "We present a method for using previously-trained ‘teacher’ agents to kickstart the training of a new ‘student’ agent. To this end, we leverage ideas from policy distillation (Rusu et al., 2015; Parisotto et al., 2015) and population based training (Jaderberg et al., 2017). Our method places no constraints on the architecture of the teacher or student agents, and it regulates itself to allow the students to surpass their teachers in performance. We show that, on a challenging and computationally-intensive multi-task benchmark (Beattie et al., 2016), kickstarted training improves the data efficiency of new agents, making it significantly easier to iterate on their design. We also show that the same kickstarting pipeline can allow a single student agent to leverage multiple ‘expert’ teachers which specialise on individual tasks. In this setting kickstarting yields surprisingly large gains, with the kickstarted agent matching the performance of an agent trained from scratch in almost 10× fewer steps, and surpassing its final performance by 42%. Kickstarting is conceptually simple and can easily be incorporated into reinforcement learning experiments.", "title": "" }, { "docid": "0aa84826291bb9b7a15a1edac43b3b2e", "text": "Reservoir computing (RC), a computational paradigm inspired on neural systems, has become increasingly popular in recent years for solving a variety of complex recognition and classification problems. Thus far, most implementations have been software-based, limiting their speed and power efficiency. Integrated photonics offers the potential for a fast, power efficient and massively parallel hardware implementation. We have previously proposed a network of coupled semiconductor optical amplifiers as an interesting test case for such a hardware implementation. In this paper, we investigate the important design parameters and the consequences of process variations through simulations. We use an isolated word recognition task with babble noise to evaluate the performance of the photonic reservoirs with respect to traditional software reservoir implementations, which are based on leaky hyperbolic tangent functions. Our results show that the use of coherent light in a well-tuned reservoir architecture offers significant performance benefits. The most important design parameters are the delay and the phase shift in the system's physical connections. With optimized values for these parameters, coherent semiconductor optical amplifier (SOA) reservoirs can achieve better results than traditional simulated reservoirs. We also show that process variations hardly degrade the performance, but amplifier noise can be detrimental. 
This effect must therefore be taken into account when designing SOA-based RC implementations.", "title": "" }, { "docid": "eb88c46211dade104770d8dcc89b5386", "text": "An improved method for the preparation of graphene oxide (GO) is described. Currently, Hummers' method (KMnO(4), NaNO(3), H(2)SO(4)) is the most common method used for preparing graphene oxide. We have found that excluding the NaNO(3), increasing the amount of KMnO(4), and performing the reaction in a 9:1 mixture of H(2)SO(4)/H(3)PO(4) improves the efficiency of the oxidation process. This improved method provides a greater amount of hydrophilic oxidized graphene material as compared to Hummers' method or Hummers' method with additional KMnO(4). Moreover, even though the GO produced by our method is more oxidized than that prepared by Hummers' method, when both are reduced in the same chamber with hydrazine, chemically converted graphene (CCG) produced from this new method is equivalent in its electrical conductivity. In contrast to Hummers' method, the new method does not generate toxic gas and the temperature is easily controlled. This improved synthesis of GO may be important for large-scale production of GO as well as the construction of devices composed of the subsequent CCG.", "title": "" }, { "docid": "495be81dda82d3e4d90a34b6716acf39", "text": "Botnets such as Conficker and Torpig utilize high entropy domains for fluxing and evasion. Bots may query a large number of domains, some of which may fail. In this paper, we present techniques where the failed domain queries (NXDOMAIN) may be utilized for: (i) Speeding up the present detection strategies which rely only on successful DNS domains. (ii) Detecting Command and Control (C&C) server addresses through features such as temporal correlation and information entropy of both successful and failed domains. We apply our technique to a Tier-1 ISP dataset obtained from South Asia, and a campus DNS trace, and thus validate our methods by detecting Conficker botnet IPs and other anomalies with a false positive rate as low as 0.02%. Our technique can be applied at the edge of an autonomous system for real-time detection.", "title": "" }, { "docid": "ff71838a3f8f44e30dc69ed2f9371bfc", "text": "The idea that video games or computer-based applications can improve cognitive function has led to a proliferation of programs claiming to \"train the brain.\" However, there is often little scientific basis in the development of commercial training programs, and many research-based programs yield inconsistent or weak results. In this study, we sought to better understand the nature of cognitive abilities tapped by casual video games and thus reflect on their potential as a training tool. A moderately large sample of participants (n=209) played 20 web-based casual games and performed a battery of cognitive tasks. We used cognitive task analysis and multivariate statistical techniques to characterize the relationships between performance metrics. We validated the cognitive abilities measured in the task battery, examined a task analysis-based categorization of the casual games, and then characterized the relationship between game and task performance. We found that games categorized to tap working memory and reasoning were robustly related to performance on working memory and fluid intelligence tasks, with fluid intelligence best predicting scores on working memory and reasoning games. 
We discuss these results in the context of overlap in cognitive processes engaged by the cognitive tasks and casual games, and within the context of assessing near and far transfer. While this is not a training study, these findings provide a methodology to assess the validity of using certain games as training and assessment devices for specific cognitive abilities, and shed light on the mixed transfer results in the computer-based training literature. Moreover, the results can inform design of a more theoretically-driven and methodologically-sound cognitive training program.", "title": "" }, { "docid": "68a84156f64d4d1926a52d60fc3eadf3", "text": "Parkinson's disease is a common and disabling disorder of movement owing to dopaminergic denervation of the striatum. However, it is still unclear how this denervation perverts normal functioning to cause slowing of voluntary movements. Recent work using tissue slice preparations, animal models and in humans with Parkinson's disease has demonstrated abnormally synchronized oscillatory activity at multiple levels of the basal ganglia-cortical loop. This excessive synchronization correlates with motor deficit, and its suppression by dopaminergic therapies, ablative surgery or deep-brain stimulation might provide the basic mechanism whereby diverse therapeutic strategies ameliorate motor impairment in patients with Parkinson's disease. This review is part of the INMED/TINS special issue, Physiogenic and pathogenic oscillations: the beauty and the beast, based on presentations at the annual INMED/TINS symposium (http://inmednet.com/).", "title": "" }, { "docid": "d509cb384ecddafa0c4f866882af2c77", "text": "On 9 January 1857, a large earthquake of magnitude 7.9 occurred on the San Andreas fault, with rupture initiating at Parkfield in central California and propagating in a southeasterly direction over a distance of more than 360 km. Such a unilateral rupture produces significant directivity toward the San Fernando and Los Angeles basins. Indeed, newspaper reports of sloshing observed in the Los Angeles river point to long-duration (1–2 min) and long-period (2–8 sec) shaking. If such an earthquake were to happen today, it could impose significant seismic demand on present-day tall buildings. Using state-of-the-art computational tools in seismology and structural engineering, validated using data from the 17 January 1994, magnitude 6.7 Northridge earthquake, we determine the damage to an existing and a new 18-story steel moment-frame building in southern California due to ground motion from two hypothetical magnitude 7.9 earthquakes on the San Andreas fault. Our study indicates that serious damage occurs in these buildings at many locations in the region in one of the two scenarios. For a north-to-south rupture scenario, the peak velocity is of the order of 1 m·sec⁻¹ in the Los Angeles basin, including downtown Los Angeles, and 2 m·sec⁻¹ in the San Fernando valley, while the peak displacements are of the order of 1 m and 2 m in the Los Angeles basin and San Fernando valley, respectively. For a south-to-north rupture scenario the peak velocities and displacements are reduced by a factor of roughly 2.", "title": "" }, { "docid": "7cecfd37e44b26a67bee8e9c7dd74246", "text": "Forecasting hourly spot prices for real-time electricity usage is a challenging task. This paper investigates a series of forecasting methods to 90 and 180 days of load data collection acquired from the Iberian Electricity Market (MIBEL). 
This dataset was used to train and test multiple forecast models. The Mean Absolute Percentage Error (MAPE) for the proposed Hybrid combination of Auto Regressive Integrated Moving Average (ARIMA) and Generalized Linear Model (GLM) was compared against ARIMA, GLM, Random forest (RF) and Support Vector Machines (SVM) methods. The results indicate significant improvement in MAPE and correlation co-efficient values for the proposed hybrid ARIMA-GLM method.", "title": "" }, { "docid": "2ba69997f51aa61ffeccce33b2e69054", "text": "We consider the problem of transferring policies to the real world by training on a distribution of simulated scenarios. Rather than manually tuning the randomization of simulations, we adapt the simulation parameter distribution using a few real world roll-outs interleaved with policy training. In doing so, we are able to change the distribution of simulations to improve the policy transfer by matching the policy behavior in simulation and the real world. We show that policies trained with our method are able to reliably transfer to different robots in two real world tasks: swing-peg-in-hole and opening a cabinet drawer. The video of our experiments can be found at https: //sites.google.com/view/simopt.", "title": "" }, { "docid": "692f80bda530610858312da98bc49815", "text": "Loss of heterozygosity (LOH) at locus 10q23.3 and mutation of the PTEN tumor suppressor gene occur frequently in both endometrial carcinoma and ovarian endometrioid carcinoma. To investigate the potential role of the PTEN gene in the carcinogenesis of ovarian endometrioid carcinoma and its related subtype, clear cell carcinoma, we examined 20 ovarian endometrioid carcinomas, 24 clear cell carcinomas, and 34 solitary endometrial cysts of the ovary for LOH at 10q23.3 and point mutations within the entire coding region of the PTEN gene. LOH was found in 8 of 19 ovarian endometrioid carcinomas (42.1%), 6 of 22 clear cell carcinomas (27.3%), and 13 of 23 solitary endometrial cysts (56.5%). In 5 endometrioid carcinomas synchronous with endometriosis, 3 cases displayed LOH events common to both the carcinoma and the endometriosis, 1 displayed an LOH event in only the carcinoma, and 1 displayed no LOH events in either lesion. In 7 clear cell carcinomas synchronous with endometriosis, 3 displayed LOH events common to both the carcinoma and the endometriosis, 1 displayed an LOH event in only the carcinoma, and 3 displayed no LOH events in either lesion. In no cases were there LOH events in the endometriosis only. Somatic mutations in the PTEN gene were identified in 4 of 20 ovarian endometrioid carcinomas (20.0%), 2 of 24 clear cell carcinomas (8.3%), and 7 of 34 solitary endometrial cysts (20.6%). These results indicate that inactivation of the PTEN tumor suppressor gene is an early event in the development of ovarian endometrioid carcinoma and clear cell carcinoma of the ovary.", "title": "" }, { "docid": "2b00f2b02fa07cdd270f9f7a308c52c5", "text": "A noninvasive and easy-operation measurement of the heart rate has great potential in home healthcare. We present a simple and high running efficiency method for measuring heart rate from a video. By only tracking one feature point which is selected from a small ROI (Region of Interest) in the head area, we extract trajectories of this point in both X-axis and Y-axis. After a series of processes including signal filtering, interpolation, the Independent Component Analysis (ICA) is used to obtain a periodic signal, and then the heart rate can be calculated. 
We evaluated on 10 subjects and compared to a commercial heart rate measuring instrument (YUYUE YE680B) and achieved high degree of agreement. A running time comparison experiment to the previous proposed motion-based method is carried out and the result shows that the time cost is greatly reduced in our method.", "title": "" }, { "docid": "31c2dc8045f43c7bf1aa045e0eb3b9ad", "text": "This paper addresses the task of functional annotation of genes from biomedical literature. We view this task as a hierarchical text categorization problem with Gene Ontology as a class hierarchy. We present a novel global hierarchical learning approach that takes into account the semantics of a class hierarchy. This algorithm with AdaBoost as the underlying learning procedure significantly outperforms the corresponding “flat” approach, i.e. the approach that does not consider any hierarchical information. In addition, we propose a novel hierarchical evaluation measure that gives credit to partially correct classification and discriminates errors by both distance and depth in a class hierarchy.", "title": "" }, { "docid": "ca659ea60b5d7c214460b32fe5aa3837", "text": "Address Decoder is an important digital block in SRAM which takes up to half of the total chip access time and significant part of the total SRAM power in normal read/write cycle. To design address decoder need to consider two objectives, first choosing the optimal circuit technique and second sizing of their transistors. Novel address decoder circuit is presented and analysed in this paper. Address decoder using NAND-NOR alternate stages with predecoder and replica inverter chain circuit is proposed and compared with traditional and universal block architecture, using 90nm CMOS technology. Delay and power dissipation in proposed decoder is 60.49% and 52.54% of traditional and 82.35% and 73.80% of universal block architecture respectively.", "title": "" }, { "docid": "535e0f5506e7dbd61e566dd163a936ef", "text": "Legislating five of the main risk factors for road traffic injuries (RTIs), as much as enforcing the law, is essential in forging an integral culture of road safety. Analysis of the level of progression in law enforcement allows for an evaluation of the state of world regions. A secondary analysis of the 2009 Global status report on road safety: time for action survey was undertaken to evaluate legislation on five risk factors (speed management, drinking and driving, motorcycle helmet use, seatbelt use, and use of child restraints) in the Americas. Laws were classified depending on their level of progression: the existence of legislation, whether the legislation is adequate, a level of law enforcement > 6 (on a scale of 0-10), and whether the law is considered comprehensive. A descriptive analysis was performed. The totality of the countries has national or subnational legislation for at least one of the five risk factors. However, 63% have laws on the five risk factors studied, and none of them has comprehensive laws for all five. Seatbelt use appears to be the most extended enforced legislation, while speeding laws appear to be the least enforced. There are positive efforts that should be recognized in the region. However, the region stands in different stages of progression. Law enforcement remains the main issue to be tackled. 
Laws should be based on evidence about what is already known to be effective.", "title": "" }, { "docid": "dc6d342a2bc0caaa0ede564c85993dc0", "text": "Exoticism is the charm of the unfamiliar, it often means unusual, mystery, and it can evoke the atmosphere of remote lands. Although it has received interest in different arts, like painting and music, no study has been conducted on understanding exoticism from a computational perspective. To the best of our knowledge, this work is the first to explore the problem of exoticism-aware image classification, aiming at automatically measuring the amount of exoticism in images and investigating the significant aspects of the task. The estimation of image exoticism could be applied in fields like advertising and travel suggestion, as well as to increase serendipity and diversity of recommendations and search results. We propose a Fusion-based Deep Neural Network (FDNN) for this task, which combines image representations learned by Deep Neural Networks with visual and semantic hand-crafted features. Comparisons with other Machine Learning models show that our proposed architecture is the best performing one, reaching accuracy over 83% and 91% on two different datasets. Moreover, experiments with classifiers exploiting both visual and semantic features allow to analyze what are the most important aspects for identifying exotic content. Ground truth has been gathered by retrieving exotic and not exotic images through a web search engine by posing queries with exotic and not exotic semantics, and then assessing the exoticism of the retrieved images via a crowdsourcing evaluation. The dataset is publicly released to promote advances in this novel field.", "title": "" }, { "docid": "8e23ef656b501814fc44c609feebe823", "text": "This paper proposes an approach for segmentation and semantic labeling of RGBD data based on the joint usage of geometrical clues and deep learning techniques. An initial oversegmentation is performed using spectral clustering and a set of NURBS surfaces is then fitted on the extracted segments. The input data are then fed to a Convolutional Neural Network (CNN) together with surface fitting parameters. The network is made of nine convolutional stages followed by a softmax classifier and produces a per-pixel descriptor vector for each sample. An iterative merging procedure is then used to recombine the segments into the regions corresponding to the various objects and surfaces. The couples of adjacent segments with higher similarity according to the CNN features are considered for merging and the NURBS surface fitting accuracy is used in order to understand if the selected couples correspond to a single surface. By combining the obtained segmentation with the descriptors from the CNN a set of labeled segments is obtained. The comparison with state-of-the-art methods shows how the proposed method provides an accurate and reliable scene segmentation and labeling.", "title": "" }, { "docid": "fed956373dc9c477d393be5087e8fbc7", "text": "We develop a quantitative method to assess the style of American poems and to visualize a collection of poems in relation to one another. Qualitative poetry criticism helped guide our development of metrics that analyze various orthographic, syntactic, and phonemic features. These features are used to discover comprehensive stylistic information from a poem's multi-layered latent structure, and to compute distances between poems in this space. 
Visualizations provide ready access to the analytical components. We demonstrate our method on several collections of poetry, showing that it better delineates poetry style than the traditional word-occurrence features that are used in typical text analysis algorithms. Our method has potential applications to academic research of texts, to research of the intuitive personal response to poetry, and to making recommendations to readers based on their favorite poems.", "title": "" } ]
scidocsrr
02e57657fd968bae41bcdb8bb4311434
Multilevel Sensor Fusion With Deep Learning
[ { "docid": "577f373477f6b8a8bee6a694dab6d3c9", "text": "The YouTube-8M video classification challenge requires teams to classify 0.7 million videos into one or more of 4,716 classes. In this Kaggle competition, we placed in the top 3% out of 650 participants using released video and audio features . Beyond that, we extend the original competition by including text information in the classification, making this a truly multi-modal approach with vision, audio and text. The newly introduced text data is termed as YouTube-8M-Text. We present a classification framework for the joint use of text, visual and audio features, and conduct an extensive set of experiments to quantify the benefit that this additional mode brings. The inclusion of text yields state-of-the-art results, e.g. 86.7% GAP on the YouTube-8M-Text validation dataset.", "title": "" }, { "docid": "92da117d31574246744173b339b0d055", "text": "We present a method for gesture detection and localization based on multi-scale and multi-modal deep learning. Each visual modality captures spatial information at a particular spatial scale (such as motion of the upper body or a hand), and the whole system operates at two temporal scales. Key to our technique is a training strategy which exploits i) careful initialization of individual modalities; and ii) gradual fusion of modalities from strongest to weakest cross-modality structure. We present experiments on the ChaLearn 2014 Looking at People Challenge gesture recognition track, in which we placed first out of 17 teams.", "title": "" } ]
[ { "docid": "0303e972f1af37ec8a82b0b5b1acf14e", "text": "We present a case report of broncho-biliary fistula that developed due to the blockage of biliary stent placed during the management of pancreatic neuroendocrine tumor (pNET); diagnosed on high clinical suspicion, percutaneous cholangiogram and contrast enhanced computed tomography (CECT); and successfully treated with percutaneous transhepatic biliary drainage (PTBD).", "title": "" }, { "docid": "123b35d403447a29eaf509fa707eddaa", "text": "Technology is the vital criteria to boosting the quality of life for everyone from new-borns to senior citizens. Thus, any technology to enhance the quality of life society has a value that is priceless. Nowadays Smart Wearable Technology (SWTs) innovation has been coming up to different sectors and is gaining momentum to be implemented in everyday objects. The successful adoption of SWTs by consumers will allow the production of new generations of innovative and high value-added products. The study attempts to predict the dynamics that play a role in the process through which consumers accept wearable technology. The research build an integrated model based on UTAUT2 and some external variables in order to investigate the direct and moderating effects of human expectation and behaviour on the awareness and adoption of smart products such as watch and wristband fitness. Survey will be chosen in order to test our model based on consumers. In addition, our study focus on different rate of adoption and expectation differences between early adopters and early majority in order to explore those differences and propose techniques to successfully cross the chasm between these two groups according to “Chasm theory”. For this aim and due to lack of prior research, Semi-structured focus groups will be used to obtain qualitative data for our research. Originality/value: To date, a few research exists addressing the adoption of smart wearable technologies. Therefore, the examination of consumers behaviour towards SWTs may provide orientations into the future that are useful for managers who can monitor how consumers make choices, how manufacturers should design successful market strategies, and how regulators can proscribe manipulative behaviour in this industry.", "title": "" }, { "docid": "5401143c61a2a0ad2901bd72a086368b", "text": "In this paper we provide an implementation, evaluation, and analysis of PowerHammer, a malware (bridgeware [1]) that uses power lines to exfiltrate data from air-gapped computers. In this case, a malicious code running on a compromised computer can control the power consumption of the system by intentionally regulating the CPU utilization. Data is modulated, encoded, and transmitted on top of the current flow fluctuations, and then it is conducted and propagated through the power lines. This phenomena is known as a ’conducted emission’. We present two versions of the attack. Line level powerhammering: In this attack, the attacker taps the in-home power lines that are directly attached to the electrical outlet. Phase level power-hammering: In this attack, the attacker taps the power lines at the phase level, in the main electrical service panel. In both versions of the attack, the attacker measures the emission conducted and then decodes the exfiltrated data. We describe the adversarial attack model and present modulations and encoding schemes along with a transmission protocol. 
We evaluate the covert channel in different scenarios and discuss signal-to-noise (SNR), signal processing, and forms of interference. We also present a set of defensive countermeasures. Our results show that binary data can be covertly exfiltrated from air-gapped computers through the power lines at bit rates of 1000 bit/sec for the line level power-hammering attack and 10 bit/sec for the phase level power-hammering attack.", "title": "" }, { "docid": "d450b99022a9db9191c5da074dd8ae47", "text": "Currency recognition is an important task in numerous automated payment services and used to categorize the banknotes of different nation. The importance of automatic methods for currency recognition has been increasing in the time being because of circulation of fake notes is increased in today's economy. This recognition system contains basic image processing techniques such like image acquisition, image preprocesses, extract features and classification using support vector machine. Basically camera or scanner used for image acquisition. The images of currency processed using a variety of preprocessing techniques and different features of the image extracted using local binary pattern technique, once the features are extracted it is important to recognize the currency using effective classifier called Support vector machine and Finally a prototype able to recognize Ethiopian paper currency with accuracy of 98% shows high performance classification model for paper currency recognition and also verify the validity of given banknotes with average accuracy of 93% rate.", "title": "" }, { "docid": "754c7cd279c8f3c1a309071b8445d6fa", "text": "We present a framework for describing insiders and their actions based on the organization, the environment, the system, and the individual. Using several real examples of unwelcome insider action (hard drive removal, stolen intellectual property, tax fraud, and proliferation of e-mail responses), we show how the taxonomy helps in understanding how each situation arose and could have been addressed. The differentiation among types of threats suggests how effective responses to insider threats might be shaped, what choices exist for each type of threat, and the implications of each. Future work will consider appropriate strategies to address each type of insider threat in terms of detection, prevention, mitigation, remediation, and punishment.", "title": "" }, { "docid": "44e3ca0f64566978c3e0d0baeaa93543", "text": "Many applications of fast Fourier transforms (FFT’s), such as computer tomography, geophysical signal processing, high-resolution imaging radars, and prediction filters, require high-precision output. An error analysis reveals that the usual method of fixed-point computation of FFT’s of vectors of length2 leads to an average loss of/2 bits of precision. This phenomenon, often referred to as computational noise, causes major problems for arithmetic units with limited precision which are often used for real-time applications. Several researchers have noted that calculation of FFT’s with algebraic integers avoids computational noise entirely, see, e.g., [1]. We will combine a new algorithm for approximating complex numbers by cyclotomic integers with Chinese remaindering strategies to give an efficient algorithm to compute -bit precision FFT’s of length . More precisely, we will approximate complex numbers by cyclotomic integers in [ 2 2 ] whose coefficients, when expressed as polynomials in 2 2 , are bounded in absolute value by some integer . 
For fixed our algorithm runs in time (log( )), and produces an approximation with worst case error of (1 2 ). We will prove that this algorithm has optimal worst case error by proving a corresponding lower bound on the worst case error of any approximation algorithm for this task. The main tool for designing the algorithms is the use of the cyclotomic units, a subgroup of finite index in the unit group of the cyclotomic field. First implementations of our algorithms indicate that they are fast enough to be used for the design of low-cost high-speed/highprecision FFT chips.", "title": "" }, { "docid": "89d895248235c7395fe1f12a39ee7267", "text": "This work elucidates the solder reflow of eutectic (63Sn/37Pb) solder bump using fluxless formic acid. The dependences of formic acid reflow on metallic oxide reduction are investigated experimentally for eutectic solder bump. Appropriate temperature profile and sufficient formic acid concentration are the key factors to optimize the metallic oxide reduction during thermal reflow. A positive pressure in process chamber is beneficial to control the variations of unwanted oxygen and the regrowth of metallic oxide during mechanical wafer switching. A reflowed solder joint degrades considerably under shear strength testing after several reflow times.", "title": "" }, { "docid": "67a62792ba0283e84ace7937615d3090", "text": "Training a task-completion dialogue agent via reinforcement learning (RL) is costly because it requires many interactions with real users. One common alternative is to use a user simulator. However, a user simulator usually lacks the language complexity of human interlocutors and the biases in its design may tend to degrade the agent. To address these issues, we present Deep Dyna-Q, which to our knowledge is the first deep RL framework that integrates planning for task-completion dialogue policy learning. We incorporate into the dialogue agent a model of the environment, referred to as the world model, to mimic real user response and generate simulated experience. During dialogue policy learning, the world model is constantly updated with real user experience to approach real user behavior, and in turn, the dialogue agent is optimized using both real experience and simulated experience. The effectiveness of our approach is demonstrated on a movie-ticket booking task in both simulated and human-in-theloop settings1.", "title": "" }, { "docid": "2650f7a5d0381056d642eb1904462338", "text": "We examined contrast sensitivity and suprathreshold apparent contrast with natural images. The spatial-frequency components within single octaves of the images were removed (notch filtered), their phases were randomized, or the polarity of the images was inverted. Of Michelson contrast, root-mean-square (RMS) contrast, and band-limited contrast, RMS contrast was the best index of detectability. Negative images had lower apparent contrast than their positives. Contrast detection thresholds showed spatial-frequency-dependent elevation following both notch filtering and phase randomization. The peak of the spatial-frequency tuning function was approximately 0.5-2 cycles per degree (c/deg). Suprathreshold contrast matching functions also showed spatial-frequency-dependent contrast loss for both notch-filtered and phase-randomized images. The peak of the spatial-frequency tuning function was approximately 1-3 c/deg. There was no detectable difference between the effects of phase randomization and notch filtering on contrast sensitivity. 
We argue that these observations are consistent with changes in the activity within spatial-frequency channels caused by the higher-order phase structure of natural images that is responsible for the presence of edges and specularities.", "title": "" }, { "docid": "2b34bd482087018827171ca52c2865fa", "text": "Statistical mediation and moderation analysis are widespread throughout the behavioral sciences. Increasingly, these methods are being integrated in the form of the analysis of ―mediated moderation‖ or ―moderated mediation,‖ or what Hayes and Preacher (in press) call conditional process modeling. In this paper, I offer a primer on some of the important concepts and methods in mediation analysis, moderation analysis, and conditional process modeling prior to describing PROCESS, a versatile modeling tool freely-available for SPSS and SAS that integrates many of the functions of existing and popular published statistical tools for mediation and moderation analysis as well as their integration. Examples of the use of PROCESS are provided, and some of its additional features as well as some limitations are described.", "title": "" }, { "docid": "f21e0b6062b88a14e3e9076cdfd02ad5", "text": "Beyond being facilitators of human interactions, social networks have become an interesting target of research, providing rich information for studying and modeling user’s behavior. Identification of personality-related indicators encrypted in Facebook profiles and activities are of special concern in our current research efforts. This paper explores the feasibility of modeling user personality based on a proposed set of features extracted from the Facebook data. The encouraging results of our study, exploring the suitability and performance of several classification techniques, will also be presented.", "title": "" }, { "docid": "a6fe5ebfc0a58b246005603854be07a0", "text": "Social networking sites (SNS) are quickly becoming one of the most popular tools for social interaction and information exchange. Previous research has shown a relationship between users’ personality and SNS use. Using a general population sample (N=300), this study furthers such investigations by examining the personality correlates (Neuroticism, Extraversion, Openness-to-Experience, Agreeableness, Conscientiousness, Sociability and Need-for-Cognition) of social and informational use of the two largest SNS: Facebook and Twitter. Age and Gender were also examined. Results showed that personality was related to online socialising and information seeking/exchange, though not as influential as some previous research has suggested. In addition, a preference for Facebbok or Twitter was associated with differences in personality. The results reveal differential relationships between personality and Facebook and Twitter usage.", "title": "" }, { "docid": "d233e7031b84316f66a4f4568c907545", "text": "The specific biomechanical alterations related to vitality loss or endodontic procedures are confusing issues for the practitioner and have been controversially approached from a clinical standpoint. The aim of part 1 of this literature review is to present an overview of the current knowledge about composition changes, structural alterations, and status following endodontic therapy and restorative procedures. 
The basic search process included a systematic review of the PubMed/Medline database between 1990 and 2005, using single or combined key words to obtain the most comprehensive list of references; a perusal of the references of the relevant sources completed the review. Only negligible alterations in tissue moisture and composition attributable to vitality loss or endodontic therapy were reported. Loss of vitality followed by proper endodontic therapy proved to affect tooth biomechanical behavior only to a limited extent. Conversely, tooth strength is reduced in proportion to coronal tissue loss, due to either caries lesion or restorative procedures. Therefore, the best current approach for restoring endodontically treated teeth seems to (1) minimize tissue sacrifice, especially in the cervical area so that a ferrule effect can be created, (2) use adhesive procedures at both radicular and coronal levels to strengthen remaining tooth structure and optimize restoration stability and retention, and (3) use post and core materials with physical properties close to those of natural dentin, because of the limitations of current adhesive procedures.", "title": "" }, { "docid": "0c42d9b5831d9e982c29a0b0b4993309", "text": "Insider threat detection requires the identification of rare anomalies in contexts where evolving behaviors tend to mask such anomalies. This paper proposes and tests an ensemble-based stream mining algorithm based on supervised learning that addresses this challenge by maintaining an evolving collection of multiple models to classify dynamic data streams of unbounded length. The result is a classifier that exhibits substantially increased classification accuracy for real insider threat streams relative to traditional supervised learning (traditional SVM and one-class SVM) and other single-model approaches.", "title": "" }, { "docid": "c8a394768233029c04bee3634b4b9f6b", "text": "There are two open problems when finite mixture densities are used to model multivariate data: the selection of the number of components and the initialization. In this paper, we propose an online (recursive) algorithm that estimates the parameters of the mixture and that simultaneously selects the number of components. The new algorithm starts with a large number of randomly initialized components. A prior is used as a bias for maximally structured models. A stochastic approximation recursive learning algorithm is proposed to search for the maximum a posteriori (MAP) solution and to discard the irrelevant components.", "title": "" }, { "docid": "5e4326bed40293855264b48b4875fa5d", "text": "Platform-tolerant tag antennas are desired for ubiquitous RFID systems. Metal-mountable or wideband tag antennas can not guarantee platform-tolerance. This paper presents the design approaches of platform-tolerant tag antennas. A compact PIFA-type UHF tag antenna is proposed accordingly. Simulation and measurement results are provided to demonstrate the platform-tolerance feature of the proposed antenna and to validate the design approaches presented.", "title": "" }, { "docid": "4f186e992cd7d5eadb2c34c0f26f4416", "text": "a r t i c l e i n f o Mobile devices, namely phones and tablets, have long gone \" smart \". Their growing use is both a cause and an effect of their technological advancement. 
Among the others, their increasing ability to store and exchange sensitive information, has caused interest in exploiting their vulnerabilities, and the opposite need to protect users and their data through secure protocols for access and identification on mobile platforms. Face and iris recognition are especially attractive, since they are sufficiently reliable, and just require the webcam normally equipping the involved devices. On the contrary, the alternative use of fingerprints requires a dedicated sensor. Moreover, some kinds of biometrics lend themselves to uses that go beyond security. Ambient intelligence services bound to the recognition of a user, as well as social applications, such as automatic photo tagging on social networks, can especially exploit face recognition. This paper describes FIRME (Face and Iris Recognition for Mobile Engagement) as a biometric application based on a multimodal recognition of face and iris, which is designed to be embedded in mobile devices. Both design and implementation of FIRME rely on a modular architecture , whose workflow includes separate and replaceable packages. The starting one handles image acquisition. From this point, different branches perform detection, segmentation, feature extraction, and matching for face and iris separately. As for face, an antispoofing step is also performed after segmentation. Finally, results from the two branches are fused. In order to address also security-critical applications, FIRME can perform continuous reidentification and best sample selection. To further address the possible limited resources of mobile devices, all algorithms are optimized to be low-demanding and computation-light. The term \" mobile \" referred to capture equipment for different kinds of signals, e.g. images, has been long used in many cases where field activities required special portability and flexibility. As an example we can mention mobile biometric identification devices used by the U.S. army for different kinds of security tasks. Due to the critical task involving them, such devices have to offer remarkable quality, in terms of resolution and quality of the acquired data. Notwithstanding this formerly consolidated reference for the term mobile, nowadays, it is most often referred to modern phones, tablets and similar smart devices, for which new and engaging applications are designed. For this reason, from now on, the term mobile will refer only to …", "title": "" }, { "docid": "1bd9467a7fafcdb579f8a4cd1d7be4b3", "text": "OBJECTIVE\nTo determine the diagnostic and triage accuracy of online symptom checkers (tools that use computer algorithms to help patients with self diagnosis or self triage).\n\n\nDESIGN\nAudit study.\n\n\nSETTING\nPublicly available, free symptom checkers.\n\n\nPARTICIPANTS\n23 symptom checkers that were in English and provided advice across a range of conditions. 45 standardized patient vignettes were compiled and equally divided into three categories of triage urgency: emergent care required (for example, pulmonary embolism), non-emergent care reasonable (for example, otitis media), and self care reasonable (for example, viral upper respiratory tract infection).\n\n\nMAIN OUTCOME MEASURES\nFor symptom checkers that provided a diagnosis, our main outcomes were whether the symptom checker listed the correct diagnosis first or within the first 20 potential diagnoses (n=770 standardized patient evaluations). 
For symptom checkers that provided a triage recommendation, our main outcomes were whether the symptom checker correctly recommended emergent care, non-emergent care, or self care (n=532 standardized patient evaluations).\n\n\nRESULTS\nThe 23 symptom checkers provided the correct diagnosis first in 34% (95% confidence interval 31% to 37%) of standardized patient evaluations, listed the correct diagnosis within the top 20 diagnoses given in 58% (55% to 62%) of standardized patient evaluations, and provided the appropriate triage advice in 57% (52% to 61%) of standardized patient evaluations. Triage performance varied by urgency of condition, with appropriate triage advice provided in 80% (95% confidence interval 75% to 86%) of emergent cases, 55% (47% to 63%) of non-emergent cases, and 33% (26% to 40%) of self care cases (P<0.001). Performance on appropriate triage advice across the 23 individual symptom checkers ranged from 33% (95% confidence interval 19% to 48%) to 78% (64% to 91%) of standardized patient evaluations.\n\n\nCONCLUSIONS\nSymptom checkers had deficits in both triage and diagnosis. Triage advice from symptom checkers is generally risk averse, encouraging users to seek care for conditions where self care is reasonable.", "title": "" }, { "docid": "612f35c7a84177440da5a3dea9d33ad3", "text": "Anglican is a probabilistic programming system designed to interoperate with Clojure and other JVM languages. We introduce the programming language Anglican, outline our design choices, and discuss in depth the implementation of the Anglican language and runtime, including macro-based compilation, extended CPS-based evaluation model, and functional representations for probabilistic paradigms, such as a distribution, a random process, and an inference algorithm.\n We show that a probabilistic functional language can be implemented efficiently and integrated tightly with a conventional functional language with only moderate computational overhead. We also demonstrate how advanced probabilistic modelling concepts are mapped naturally to the functional foundation.", "title": "" }, { "docid": "22fc1e303a4c2e7d1e5c913dca73bd9e", "text": "The artificial potential field (APF) approach provides a simple and effective motion planning method for practical purpose. However, artificial potential field approach has a major problem, which is that the robot is easy to be trapped at a local minimum before reaching its goal. The avoidance of local minimum has been an active research topic in path planning by potential field. In this paper, we introduce several methods to solve this problem, emphatically, introduce and evaluate the artificial potential field approach with simulated annealing (SA). As one of the powerful techniques for escaping local minimum, simulated annealing has been applied to local and global path planning", "title": "" } ]
scidocsrr
54f7188ab84e5c3d77d3d66aa0980b60
A subject transfer framework for EEG classification
[ { "docid": "3bb905351ce1ea2150f37059ed256a90", "text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "title": "" } ]
[ { "docid": "06e3d228e9fac29dab7180e56f087b45", "text": "Curiosity is thought to be an intrinsically motivated driving force for seeking information. Thus, the opportunity for an information gain (IG) should instil curiosity in humans and result in information gathering actions. To investigate if, and how, information acts as an intrinsic reward, a search task was set in a context of blurred background images which could be revealed by iterative clicking. The search task was designed such that it prevented efficient IG about the underlying images. Participants therefore had to trade between clicking regions with high search target probability or high expected image content information. Image content IG was established from “information-maps” based on participants exploration with the intention of understanding (1) the main theme of the image and (2) how interesting the image might appear to others. Note that IG is in this thesis not identical with the information theoretic concept of information gain, the quantities are however probably related. It was hypothesised that participants would be distracted by visually informative regions and that images independently rated as more interesting would yield higher image based IG. It was also hypothesised that image based IG would increase as a function of time. Results show that participants sometimes explored images driven by curiosity, and that there was considerable individual variation in which images participants were curious about. Independent interest ratings did not account for image based IG. The level of IG increased over trials, interestingly without affecting participants’ performance on the visual search task designed to prevent IG. Results support that IG is rewarding as participants learned to optimize IG over trials without compromising performance on the extrinsically motivated search; managing to both keep the cake and eat it.", "title": "" }, { "docid": "2960e702b0c764de558a2f723c13196a", "text": "The main information of a webpage is usually mixed between menus, advertisements, panels, and other not necessarily related information; and it is often difficult to automatically isolate this information. This is precisely the objective of content extraction, a research area of widely interest due to its many applications. Content extraction is useful not only for the final human user, but it is also frequently used as a preprocessing stage of different systems that need to extract the main content in a web document to avoid the treatment and processing of other useless information. Other interesting application where content extraction is particularly used is displaying webpages in small screens such as mobile phones or PDAs. In this work we present a new technique for content extraction that uses the DOM tree of the webpage to analyze the hierarchical relations of the elements in the webpage. Thanks to this information, the technique achieves a considerable recall and precision. Using the DOM structure for content extraction gives us the benefits of other approaches based on the syntax of the webpage (such as characters, words and tags), but it also gives us a very precise information regarding the related components in a block, thus, producing very cohesive blocks.", "title": "" }, { "docid": "7034f49fe75a9152a4e6849d2aacdc0b", "text": "Sustained hepatic inflammation is an important factor in progression of chronic liver diseases, including hepatitis C or non-alcoholic steatohepatitis. 
Liver inflammation is regulated by chemokines, which regulate the migration and activities of hepatocytes, Kupffer cells, hepatic stellate cells, endothelial cells, and circulating immune cells. However, the effects of the different chemokines and their receptors vary during pathogenesis of different liver diseases. During development of chronic viral hepatitis, CCL5 and CXCL10 regulate the cytopathic versus antiviral immune responses of T cells and natural killer cells. During development of nonalcoholic steatohepatitis, CCL2 and its receptor are up-regulated in the liver, where they promote macrophage accumulation, inflammation, fibrosis, and steatosis, as well as in adipose tissue. CCL2 signaling thereby links hepatic and systemic inflammation related to metabolic disorders and insulin resistance. Several chemokine signaling pathways also promote hepatic fibrosis. Recent studies have shown that other chemokines and immune cells have anti-inflammatory and antifibrotic activities. Chemokines and their receptors can also contribute to the pathogenesis of hepatocellular carcinoma, promoting proliferation of cancer cells, the inflammatory microenvironment of the tumor, evasion of the immune response, and angiogenesis. We review the roles of different chemokines in the pathogenesis of liver diseases and their potential use as biomarkers or therapeutic targets.", "title": "" }, { "docid": "0aab03fe46d4f04b2bb8d10fa32ce049", "text": "Nowadays, World Wide Web (WWW) surfing is becoming a risky task with the Web becoming rich in all sorts of attack. Websites are the main source of many scams, phishing attacks, identity theft, SPAM commerce and malware. Nevertheless, browsers, blacklists, and popup blockers are not enough to protect users. According to this, fast and accurate systems still to be needed with the ability to detect new malicious content. By taking into consideration, researchers have developed various Malicious Website detection techniques in recent years. Analyzing those works available in the literature can provide good knowledge on this topic and also, it will lead to finding the recent problems in Malicious Website detection. Accordingly, I have planned to do a comprehensive study with the literature of Malicious Website detection techniques. To categorize the techniques, all articles that had the word “malicious detection” in its title or as its keyword published between January 2003 to august 2016, is first selected from the scientific journals: IEEE, Elsevier, Springer and international journals. After the collection of research articles, we discuss every research paper. In addition, this study gives an elaborate idea about malicious detection.", "title": "" }, { "docid": "4731a95b14335a84f27993666b192bba", "text": "Blockchain has been applied to study data privacy and network security recently. In this paper, we propose a punishment scheme based on the action record on the blockchain to suppress the attack motivation of the edge servers and the mobile devices in the edge network. The interactions between a mobile device and an edge server are formulated as a blockchain security game, in which the mobile device sends a request to the server to obtain real-time service or launches attacks against the server for illegal security gains, and the server chooses to perform the request from the device or attack it. 
The Nash equilibria (NEs) of the game are derived and the conditions that each NE exists are provided to disclose how the punishment scheme impacts the adversary behaviors of the mobile device and the edge server.", "title": "" }, { "docid": "7a2525b0f2225167b57d86ab034bb992", "text": "The goal of this project is to apply multilayer feedforward neural networks to phishing email detection and evaluate the effectiveness of this approach. We design the feature set, process the phishing dataset, and implement the neural network (NN) systems. We then use cross validation to evaluate the performance of NNs with different numbers of hidden units and activation functions. We also compare the performance of NNs with other major machine learning algorithms. From the statistical analysis, we conclude that NNs with an appropriate number of hidden units can achieve satisfactory accuracy even when the training examples are scarce. Moreover, our feature selection is effective in capturing the characteristics of phishing emails, as most machine learning algorithms can yield reasonable results with it.", "title": "" }, { "docid": "e306d50838fc5e140a8c96cd95fd3ca2", "text": "Customer Relationship Management (CRM) is a strategy that supports an organization’s decision-making process to retain long-term and profitable relationships with its customers. Effective CRM analyses require a detailed data warehouse model that can support various CRM analyses and deep understanding on CRM-related business questions. In this paper, we present a taxonomy of CRM analysis categories. Our CRM taxonomy includes CRM strategies, CRM category analyses, CRM business questions, their potential uses, and key performance indicators (KPIs) for those analysis types. Our CRM taxonomy can be used in selecting and evaluating a data schema for CRM analyses, CRM vendors, CRM strategies, and KPIs.", "title": "" }, { "docid": "9f81f1d90b2131b5355747f674e65ea6", "text": "With the increasing availability of moving-object tracking data, trajectory search is increasingly important. We propose and investigate a novel query type named trajectory search by regions of interest (TSR query). Given an argument set of trajectories, a TSR query takes a set of regions of interest as a parameter and returns the trajectory in the argument set with the highest spatial-density correlation to the query regions. This type of query is useful in many popular applications such as trip planning and recommendation, and location based services in general. TSR query processing faces three challenges: how to model the spatial-density correlation between query regions and data trajectories, how to effectively prune the search space, and how to effectively schedule multiple so-called query sources. To tackle these challenges, a series of new metrics are defined to model spatial-density correlations. An efficient trajectory search algorithm is developed that exploits upper and lower bounds to prune the search space and that adopts a query-source selection strategy, as well as integrates a heuristic search strategy based on priority ranking to schedule multiple query sources. 
The performance of TSR query processing is studied in extensive experiments based on real and synthetic spatial data.", "title": "" }, { "docid": "1d356c920fb720252d827164752dffe5", "text": "In the early days of machine learning, Donald Michie introduced two orthogonal dimensions to evaluate performance of machine learning approaches – predictive accuracy and comprehensibility of the learned hypotheses. Later definitions narrowed the focus to measures of accuracy. As a consequence, statistical/neuronal approaches have been favoured over symbolic approaches to machine learning, such as inductive logic programming (ILP). Recently, the importance of comprehensibility has been rediscovered under the slogan ‘explainable AI’. This is due to the growing interest in black-box deep learning approaches in many application domains where it is crucial that system decisions are transparent and comprehensible and in consequence trustworthy. I will give a short history of machine learning research followed by a presentation of two specific approaches of symbolic machine learning – inductive logic programming and end-user programming. Furthermore, I will present current work on explanation generation. Die Arbeitsweise der Algorithmen, die über uns entscheiden, muss transparent gemacht werden, und wir müssen die Möglichkeit bekommen, die Algorithmen zu beeinflussen. Dazu ist es unbedingt notwendig, dass die Algorithmen ihre Entscheidung begründen! Peter Arbeitsloser zu John of Us, Qualityland, Marc-Uwe Kling, 2017", "title": "" }, { "docid": "8cbc15b5e5c957f464573e52f00f2924", "text": "Tennis is one of the most popular sports in the world. Many researchers have studied in tennis model to find out whose player will be the winner of the match by using the statistical data. This paper proposes a powerful technique to predict the winner of the tennis match. The proposed method provides more accurate prediction results by using the statistical data and environmental data based on Multi-Layer Perceptron (MLP) with back-propagation learning algorithm.", "title": "" }, { "docid": "25a13be77dad25b6ae40f1533dcbfc18", "text": "Using an underlying role-based model for the administration of roles has proved itself to be a successful approach. This paper sets out to describe the enterprise role-based access control model (ERBAC) in the context of SAM Jupiter, a commercial enterprise security management software.We provide an overview of the role-based conceptual model underlying SAM Jupiter. Having established this basis, we describe how the model is used to facilitate a role-based administration approach. In particular, we discuss our notion of 'scopes', which describe the objects over which an administrator has authority. The second part provides a case study based on our real-world experiences in the implementation of role-based administrative infrastructures. Finally, a critical evaluation and comparison with current approaches to administrative role-based access control is provided.", "title": "" }, { "docid": "bff3126818b6fd9a91eba7aa6683ca72", "text": "Several fundamental security mechanisms for restricting access to network resources rely on the ability of a reference monitor to inspect the contents of traffic as it traverses the network. However, with the increasing popularity of cryptographic protocols, the traditional means of inspecting packet contents to enforce security policies is no longer a viable approach as message contents are concealed by encryption. 
In this paper, we investigate the extent to which common application protocols can be identified using only the features that remain intact after encryption—namely packet size, timing, and direction. We first present what we believe to be the first exploratory look at protocol identification in encrypted tunnels which carry traffic from many TCP connections simultaneously, using only post-encryption observable features. We then explore the problem of protocol identification in individual encrypted TCP connections, using much less data than in other recent approaches. The results of our evaluation show that our classifiers achieve accuracy greater than 90% for several protocols in aggregate traffic, and, for most protocols, greater than 80% when making fine-grained classifications on single connections. Moreover, perhaps most surprisingly, we show that one can even estimate the number of live connections in certain classes of encrypted tunnels to within, on average, better than 20%.", "title": "" }, { "docid": "866264eca32bf5d215d975d938ba6bfc", "text": "Mammography is the most widely used method to screen breast cancer. Because of its mostly manual nature, variability in mass appearance, and low signal-to-noise ratio, a significant number of breast masses are missed or misdiagnosed. In this work, we present how Convolutional Neural Networks can be used to directly classify pre-segmented breast masses in mammograms as benign or malignant, using a combination of transfer learning, careful pre-processing and data augmentation to overcome limited training data. We achieve state-of-the-art results on the DDSM dataset, surpassing human performance, and show interpretability of our model.", "title": "" }, { "docid": "b11db1c4c82e2e0c626945223ec07f68", "text": "3D printing has become one of the most popular evolutionary techniques with diverse application, even as normal people's hobby. As printing target, enormous 3D virtual models from game industry and virtual reality flood the internet, shared in various online forums, such as thingiverse. But which of them can really be printed? In this paper we propose the 3D Printability Checker, which can be used to automatically answer this non-trivial question. One of the major novelties of this paper is the process of dependable software engineering we use to build this Printability Checker. Firstly, we prove that this question is decidable with a given 3D object and a list of printer profiles. Secondly, we design and implement such a checker. Finally, we show our experimental results and use them further for a machine learning approach to improve our system in an automatic way. The generic framework provides a useful basis of automatic self-improvement of the software by combining current techniques in the area of formal method, geometry modelling and machine learning.", "title": "" }, { "docid": "472519682e5b086732b31e558ec7934d", "text": "As networks become ubiquitous in people's lives, users depend on networks a lot for sufficient communication and convenient information access. However, networks suffer from security issues. Network security becomes a challenging topic since numerous new network attacks have appeared increasingly sophisticated and caused vast loss to network resources. Game theoretic approaches have been introduced as a useful tool to handle those tricky network attacks. 
In this paper, we review the existing game-theory based solutions for network security problems, classifying their application scenarios under two categories, attack-defense analysis and security measurement. Moreover, we present a brief view of the game models in those solutions and summarize them into two categories, cooperative game models and non-cooperative game models with the latter category consisting of subcategories. In addition to the introduction to the state of the art, we discuss the limitations of those game theoretic approaches and propose future research directions.", "title": "" }, { "docid": "d256724f6f0c09f8c33c93d800b4010a", "text": "Cyber-physical control systems involve a discrete computational algorithm to control continuous physical systems. Often the control algorithm uses predictive models of the physical system in its decision making process. However, physical system models suffer from several inaccuracies when employed in practice. Mitigating such inaccuracies is often difficult and have to be repeated for different instances of the physical system. In this paper, we propose a model guided deep learning method for extraction of accurate prediction models of physical systems, in presence of artifacts observed in real life deployments. Given an initial potentially suboptimal mathematical prediction model, our model guided deep learning method iteratively improves the model through a data driven training approach. We apply the proposed approach on the closed loop blood glucose control system. Using this proposed approach, we achieve an improvement over predictive Bergman Minimal Model by a factor of around 100.", "title": "" }, { "docid": "9ca4543f4943a1679b639caa186f1650", "text": "SHAPE ADJECTIVE COLOR DISEASE TEXT NARRATIVE* GENERAL-INFO DEFINITION USE EXPRESSION-ORIGIN HISTORY WHY-FAMOUS BIO ANTECEDENT INFLUENCE CONSEQUENT CAUSE-EFFECT METHOD-MEANS CIRCUMSTANCE-MEANS REASON EVALUATION PRO-CON CONTRAST RATING COUNSEL-ADVICE To create the QA Typology, we analyzed 17,384 questions and their answers (downloaded from answers.com); see (Gerber, 2001). The Typology contains 94 nodes, of which 47 are leaf nodes; a section of it appears in Figure 2. Each Typology node has been annotated with examples and typical patterns of expression of both Question and Answer, as indicated in Figure 3 for Proper-Person. Question examples Question templates Who was Johnny Mathis' high school track coach? who be <entity>'s <role> Who was Lincoln's Secretary of State? Who was President of Turkmenistan in 1994? who be <role> of <entity> Who is the composer of Eugene Onegin? Who is the CEO of General Electric? Actual answers Answer templates Lou Vasquez, track coach of...and Johnny Mathis <person>, <role> of <entity> Signed Saparmurad Turkmenbachy [Niyazov], <person> <role-title*> of <entity> president of Turkmenistan ...Turkmenistan’s President Saparmurad Niyazov... <entity>’s <role> <person> ...in Tchaikovsky's Eugene Onegin... <person>'s <entity> Mr. Jack Welch, GE chairman... <role-title> <person> ... <entity> <role> ...Chairman John Welch said ...GE's <subject>|<psv object> of related role-verb Figure 3. 
Portion of QA Typology node annotations for Proper-Person.", "title": "" }, { "docid": "00337220cd594074fa303d727071a2ff", "text": "INTRODUCTION\nIn the present era, thesauri as tools in indexing play an effective role in integrating retrieval preventing fragmentation as well as a multiplicity of terminologies and also in providing information content of documents.\n\n\nGOALS\nThis study aimed to investigate the keywords of articles indexed in IranMedex in terms of origin, structure and indexing situation and their Compliance with the Persian Medical Thesaurus and Medical Subject Headings (MeSH).\n\n\nMATERIALS AND METHODS\nThis study is an applied research, and a survey has been conducted. Statistical population includes 32,850 Persian articles which are indexed in the IranMedex during the years 1385-1391. 379 cases were selected as sample of the study. Data collection was done using a checklist. In analyzing the findings, the SPSS Software were used.\n\n\nFINDINGS\nAlthough there was no significant difference in terms of indexing origin between the proportion of different types of the Persian and English keywords of articles indexed in the IranMedex, the compliance rates of the Persian and English keywords with the Persian medical thesaurus and MeSH were different in different years. In the meantime, the structure of keywords is leaning more towards phrase structure, and a single word structure and the majority of keywords are selected from the titles and abstracts.\n\n\nCONCLUSION\nThe authors' familiarity with the thesauri and controlled tools causes homogeneity in assigning keywords and also provides more precise, faster, and easier retrieval of the keywords. It's suggested that a mixture of natural and control languages to be used in this database in order to reach more comprehensive results.", "title": "" }, { "docid": "8c2d6aac36ea2c10463ad05fc5f9b854", "text": "Motion planning plays a key role in autonomous driving. In this work, we introduce the combinatorial aspect of motion planning which tackles the fact that there are usually many possible and locally optimal solutions to accomplish a given task. Those options we call maneuver variants. We argue that by partitioning the trajectory space into discrete solution classes, such that local optimization methods yield an optimum within each discrete class, we can improve the chance of finding the global optimum as the optimum trajectory among the manuever variants. This work provides methods to enumerate the maneuver variants as well as constraints to enforce them. The return of the effort put into the problem modification as suggested is gaining assuredness in the convergency behaviour of the optimization algorithm. We show an experiment where we identify three local optima that would not have been found with local optimization methods.", "title": "" }, { "docid": "87737f028cf03a360a3e7affe84c9bc9", "text": "This article provides an empirical statistical analysis and discussion of the predictive abilities of selected customer lifetime value (CLV) models that could be used in online shopping within e-commerce business settings. The comparison of CLV predictive abilities, using selected evaluation metrics, is made on selected CLV models: Extended Pareto/NBD model (EP/NBD), Markov chain model and Status Quo model. The article uses six online store datasets with annual revenues in the order of tens of millions of euros for the comparison. 
The EP/NBD model has outperformed other selected models in a majority of evaluation metrics and can be considered good and stable for non-contractual relations in online shopping. The implications for the deployment of selected CLV models in practice, as well as suggestions for future research, are also discussed.", "title": "" } ]
scidocsrr
2ae6c1a308ed0ac2d9293d8374507c40
A high performance FPGA-based accelerator for large-scale convolutional neural networks
[ { "docid": "5c8c391a10f32069849d743abc5e8210", "text": "We present a massively parallel coprocessor for accelerating Convolutional Neural Networks (CNNs), a class of important machine learning algorithms. The coprocessor functional units, consisting of parallel 2D convolution primitives and programmable units performing sub-sampling and non-linear functions specific to CNNs, implement a “meta-operator” to which a CNN may be compiled to. The coprocessor is serviced by distributed off-chip memory banks with large data bandwidth. As a key feature, we use low precision data and further increase the effective memory bandwidth by packing multiple words in every memory operation, and leverage the algorithm’s simple data access patterns to use off-chip memory as a scratchpad for intermediate data, critical for CNNs. A CNN is mapped to the coprocessor hardware primitives with instructions to transfer data between the memory and coprocessor. We have implemented a prototype of the CNN coprocessor on an off-the-shelf PCI FPGA card with a single Xilinx Virtex5 LX330T FPGA and 4 DDR2 memory banks totaling 1GB. The coprocessor prototype can process at the rate of 3.4 billion multiply accumulates per second (GMACs) for CNN forward propagation, a speed that is 31x faster than a software implementation on a 2.2 GHz AMD Opteron processor. For a complete face recognition application with the CNN on the coprocessor and the rest of the image processing tasks on the host, the prototype is 6-10x faster, depending on the host-coprocessor bandwidth.", "title": "" } ]
[ { "docid": "5945081c099c883d238dca2a1dfc821e", "text": "Simulations using IPCC (Intergovernmental Panel on Climate Change)-class climate models are subject to fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5 % of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We applied support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicted model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures were determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations were the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.", "title": "" }, { "docid": "5e5574605d6e4573098028b98c45923e", "text": "In this paper we study the role of trust in enhancing asymmetric partnership formation. First we briefly review the role of trust. Then we analyze the state-of-the-art of the theoretical and empirical literature on trust creation and antecedents for experienced trustworthiness. As a result of the literature review and our knowledge of the context in praxis, we create a model on organizational trust building where the interplay of inter-organizational and inter-personal trust is scrutinized. Potential challenges for our model are first the asymmetry of organizations and actors and secondly the volatility of the business. The opportunity window for partnering firms may be very short i.e. there is not much time for natural development of trust based on incremental investments and social or character similarity, but so called “fast” or “swift” trust is needed. As a managerial contribution we suggest some practices and processes, which could be used for organizational trust building. These are developed from the viewpoint of large organization boundary-spanners (partner/vendor managers) developing asymmetric technology partnerships. Leveraging Complementary Benefits in a Telecom Network Individual specialization and organizational focus on core competencies leads to deep but narrow competencies. Thus complementary knowledge, resources and skills are needed. Ståhle (1998, 85 and 86) explains the mutual interdependence of individuals in a system by noting that actors always belong to social systems, but they may actualize only by relating to others. In order to transfer knowledge and learn social actors need to be able to connect and for that they need to build trust. Also according to Luhmann (1995, 112) each system first tests the bond of trust and only then starts processing the meaning. 
In line with Arrow (1974) we conclude that ability to build trust is a necessary (even if not sufficient) precondition to relationships in a social system (network). 1 Conceptualized also as “double contingency” (Luhmann 1995, 118). In telecommunications the asymmetric technology partnerships between large incumbent players and specialized suppliers are increasingly common. Technological development and the convergence of information technology, telecommunications and media industry has created potential business areas, where knowledge of complementary players is needed. Complementary capabilities often mean asymmetric partnerships, where partnering firms have different skills, resources and knowledge. Perceived or believed dissimilarities in values, goals, time-horizon, decision-making processes, culture and logic of strategy imply barriers for cooperation to evolve (Doz 1988, Blomqvist 1999). A typical case is a partnership with a large and incumbent telecommunications firm and a small software supplier. The small software firm supplies the incumbent firm with state-of-the-art innovative service applications, which complement the incumbent firm’s platform. Risk and trust are involved in every transaction where the simultaneous exchange is unavailable (Arrow 1973, 24). Companies engaged in a technology partnership exchange and share valuable information, which may not be safeguarded by secrecy agreements. Various types of risks, e.g. failures in technology development, performance or market risk or unintended disclosure of proprietary information and partner's opportunistic behavior in e.g. absorbing and imitating the technology or recruiting key persons are present. Building trust is particularly important for complementary parties to reach the potential network benefits of scale and scope, yet tedious due to asymmetric characteristics. Natural trust creation is constrained as personal and process sources of trust (Zucker 1986) are limited due to partners’ different cultures and short experience from interaction. In organizational relationships the basis of trust must be extended beyond personal and individual relationships (Creed and Miles 1996, Hardy et. al. 1998). In asymmetric technology partnerships the dominant large partner may be tempted to use power to ensure control and authority. Hardy et al. (1998, 82) discuss a potential capitulation of a dependent partner in an asymmetric relationship. This means that the subordinate organization loses its ability to operate in full as a result of anticipated reactions from a more powerful organization. Therefore, as an expected source for spear-edge innovations, it fails to realize its potential in full. Thus the potential for dominant players to leverage the “synergistic creativity” of specialized suppliers realizes only through double-contingency relationships characterized by mutual interdependency and equity (Luhmann 1995). Such relationships may leverage the innovative abilities of small and specialized suppliers, but only if asymmetric partners are able to build organizational trust and subsequently connect with each other. In the telecommunications both the technological and market uncertainty are high. Considerable rewards may be gained, yet the players face considerable risks. There is little time to study the volatile markets or learn the constantly emerging new technologies. In such a turbulent business the players are forced to constant strategizing. 
Partnerships may have to be decided almost “overnight” and many are of temporary nature. Players in the volatile telecommunications also know that the “shadow-of-the-future” might be surprisingly short, since the various alliances and consortiums are in constant move. Previous research on trust shows that trust develops gradually and common future is a strong motivator for a trusting relationship (e.g. Axelrod 1984). In telecommunications the partnering firms need trust more 2 By asymmetry is meant a non-symmetrical situation between actors. Economists discuss asymmetrical information leading to potential opportunism. Another theme related commonly to asymmetry is power, which is closely linked to company size. In asymmetric technology partnerships asymmetry manifests in different corporate cultures, management and type of resources. In this context asymmetry could be defined as “difference in knowledge, power and culture of actors”. than ever, yet they have little chance to commit themselves gradually to the relationship or experiment the values and goals of the other. Due to great risks the ability to build trust is crucial, yet because of the high volatility and short shadow-of-the future especially challenging. Building trust Trust is seen as a necessary antecedent for cooperation (Axelrod 1984) and leading to constructive and cooperative behavior vital for long-term relationships (Barney 1981, Morgan and Hunt 1994). Trust is vital for both innovative work within the organization in e.g. project teams (Jones and George 1998) and between organizations e.g. strategic alliances (Doz 1999, Zaheer et al. 1998) and R & D partnerships (Dodgson 1993). In this paper trust is defined as \"actor's expectation of the other party's competence, goodwill and behavior\". It is believed that in business context both competence and goodwill levels are needed for trust to develop (Blomqvist 1997). The relevant competence (technical capabilities, skills and knowhow) is a necessary antecedent and base for trust in professional relationships of business context. Especially so in the technology partnership where potential partners are assumed to have technological knowledge and competencies. Signs of goodwill (moral responsibility and positive intentions toward the other) are also necessary for the trusting party to be able to accept a potentially vulnerable position (risk inherent). Positive intentions appear as signs of cooperation and partner’s proactive behavior. Competence Goodwill Goodwill Competence Behavior Behavior Figure 1. Development of trust through layers of trustworthiness Bidault and Jarillo (1997) have added a third dimension to trust i.e. the actual behavior of parties. Goodwill-dimension of trust includes positive intentions toward the other, but along time, when the relationship is developing, the actual behavior e.g. that the trustee fulfills the positive intentions enhances trustworthiness (see Figure 1). Already at the very first meetings the behavioral dimension is present in signs and signals, e.g. what information is revealed and in which manner. In the partnering process (along time) the actual behavior e.g. kept promises become more visible and easier to evaluate. Role of trust has been studied quite extensively and in different contexts (e.g. Larson 1992, Swan 1995, Sydow 1998, Morgan and Hunt 1994, O’Brien 1995). Development of personal trust has been studied among psychologists and socio-psychologists (Deutch 1958, Blau 1966, Rotter 1967 and Good 1988). 
Development of organizational trust has been studied much less (Halinen 1994, Das and Teng 1998). In this paper we attempt to model interorganizational trust building and suggest some managerial tools to build trust. We build on Anthony Giddens (1984) theory of structuration and a model on experiencing trust by Jones and George (1998). According to social exchange theory (Blau 1966, Whitener et al. 1998 among others) information, advice, social support and recognition are important means in trust building, which is created by repeated interactions and reciprocity. A different view to trust is offered by agency theory developed by economists and focussing in the relationship between principals and agents (e.g. employer and employee). According to agency theory relationship management, e.g. socialization of corporate values, policies and industry norms (e.g. Eisenhardt 1985, 135 and 148) may control moral hazard inherent in such relationships. Researchers disagree whether trust can be intentionally created. According to Sydow (1998) trust is very difficult to develop and sustain. It is however believed that the conditions (processes, routines and settings) affectin", "title": "" }, { "docid": "36b2ce9d30b2fc98d7c3f98b94cc0b4e", "text": "Efficient energy management in residential areas is a key issue in modern energy systems. In this scenario, induction heating (IH) becomes an alternative to classical heating technologies because of its advantages such as efficiency, quickness, safety, and accurate power control. In this article, the design of modern flexible cooking surfaces featuring IH technology is presented. The main advantages and technical challenges are given, and the design of the inductor system and the power electronic converter is detailed. The feasibility of the proposed system is verified through a laboratory prototype.", "title": "" }, { "docid": "5762adf6fc9a0bf6da037cdb10191400", "text": "Graphics Processing Unit (GPU) virtualization is an enabling technology in emerging virtualization scenarios. Unfortunately, existing GPU virtualization approaches are still suboptimal in performance and full feature support. This paper introduces gVirt, a product level GPU virtualization implementation with: 1) full GPU virtualization running native graphics driver in guest, and 2) mediated pass-through that achieves both good performance and scalability, and also secure isolation among guests. gVirt presents a virtual full-fledged GPU to each VM. VMs can directly access performance-critical resources, without intervention from the hypervisor in most cases, while privileged operations from guest are trap-and-emulated at minimal cost. Experiments demonstrate that gVirt can achieve up to 95% native performance for GPU intensive workloads, and scale well up to 7 VMs.", "title": "" }, { "docid": "c588af91f9a0c1ae59a355ce2145c424", "text": "Negative correlation learning (NCL) aims to produce ensembles with sound generalization capability through controlling the disagreement among base learners’ outputs. Such a learning scheme is usually implemented by using feed-forward neural networks with error back-propagation algorithms (BPNNs). However, it suffers from slow convergence, local minima problem and model uncertainties caused by the initial weights and the setting of learning parameters. To achieve a better solution, this paper employs the random vector functional link (RVFL) networks as base components, and incorporates with the NCL strategy for building neural network ensembles. 
The basis functions of the base models are generated randomly and the parameters of the RVFL networks can be determined by solving a linear equation system. An analytical solution is derived for these parameters, where a cost function defined for NCL and the well-known least squares method are used. To examine the merits of our proposed algorithm, a comparative study is carried out with nine benchmark datasets. Results indicate that our approach outperforms other ensembling techniques on the testing datasets in terms of both effectiveness and efficiency. Crown Copyright 2013 Published by Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "ba4756657e75c3d279168f4a2becdf67", "text": "How does an individual use the knowledge acquired through self exploration as a manipulable model through which to understand others and benefit from their knowledge? How can developmental and social learning be combined for their mutual benefit? In this paper we review a hierarchical architecture (HAMMER) which allows a principled way for combining knowledge through exploration and knowledge from others, through the creation and use of multiple inverse and forward models. We describe how Bayesian Belief Networks can be used to learn the association between a robot’s motor commands and sensory consequences (forward models), and how the inverse association can be used for imitation. Inverse models created through self exploration, as well as those from observing others can coexist and compete in a principled unified framework, that utilises the simulation theory of mind approach to mentally rehearse and understand the actions of others.", "title": "" }, { "docid": "362e67a4e5320cde250165463b940417", "text": "Active du/dt is a new output-filtering method to mitigate motor overvoltages. The inverter pulse pattern edges are broken down into narrower pulses, which control the filter LC circuit. This results in an output voltage that does not have to exhibit the overshoot typically seen in common LC circuits in output-filtering applications. Furthermore, the shape of the output-voltage edge has properties well suited for output-filtering applications. An appropriate filter rise time is selected according to the motor-cable length to eliminate the motor overvoltage. The basis of the active du/dt method is discussed in brief. Considerations on the application of the active du/dt filtering in electric drives are presented together with simulations and experimental data to verify the potential of the method.", "title": "" }, { "docid": "194cab3122ed24c4e543c3fc8557b0fa", "text": "There is a huge advancement in Computer networking in the past decade. But with the advancement, the threats to the computer networks are also increased. Today one of the biggest threats to the computer networks is the Distributed Denial of Service (DDoS) flooding attack. This paper emphasizes the application layer DDoS flooding attacks because these (layer seven) attacks are growing rapidly and becoming more severe problem. Many researchers used machine-learning techniques for intrusion detection, but some shows poor detection and some methods take more training time. From a survey, it is found that Naïve Bayes (NB) algorithm provides faster learning/training speed than other machine learning algorithms. Also it has more accuracy in classification and detection of attack. 
So we are proposing a network intrusion detection system (IDS) which uses a machine learning approach with the help of NB algorithm.", "title": "" }, { "docid": "bd16035a3f4857f69afc06fad10a2d2f", "text": "Although opinion spam (or fake review) detection has attracted significant research attention in recent years, the problem is far from solved. One key reason is that there is no large-scale ground truth labeled dataset available for model building. Some review hosting sites such as Yelp.com and Dianping.com have built fake review filtering systems to ensure the quality of their reviews, but their algorithms are trade secrets. Working with Dianping, we present the first large-scale analysis of restaurant reviews filtered by Dianping’s fake review filtering system. Along with the analysis, we also propose some novel temporal and spatial features for supervised opinion spam detection. Our results show that these features significantly outperform existing state-ofart features.", "title": "" }, { "docid": "75368ca96b1b22d49b0601237031368d", "text": "We propose to move from Open Information Extraction (OIE) ahead to Open Knowledge Representation (OKR), aiming to represent information conveyed jointly in a set of texts in an open textbased manner. We do so by consolidating OIE extractions using entity and predicate coreference, while modeling information containment between coreferring elements via lexical entailment. We suggest that generating OKR structures can be a useful step in the NLP pipeline, to give semantic applications an easy handle on consolidated information across multiple texts.", "title": "" }, { "docid": "5dea819b5ec3884a28e1515987e7d4dd", "text": "An estimated 165 million children are stunted due to the combined effects of poor nutrition, repeated infection and inadequate psychosocial stimulation. The complementary feeding period, generally corresponding to age 6-24 months, represents an important period of sensitivity to stunting with lifelong, possibly irrevocable consequences. Interventions to improve complementary feeding practices or the nutritional quality of complementary foods must take into consideration the contextual as well as proximal determinants of stunting. This review presents a conceptual framework that highlights the role of complementary feeding within the layers of contextual and causal factors that lead to stunted growth and development and the resulting short- and long-term consequences. Contextual factors are organized into the following groups: political economy; health and health care systems; education; society and culture; agriculture and food systems; and water, sanitation and environment. We argue that these community and societal conditions underlie infant and young child feeding practices, which are a central pillar to healthy growth and development, and can serve to either impede or enable progress. Effectiveness studies with a strong process evaluation component are needed to identify transdisciplinary solutions. Programme and policy interventions aimed at preventing stunting should be informed by careful assessment of these factors at all levels.", "title": "" }, { "docid": "b54abd40f41235fa8e8cd4e9f42cd777", "text": "This paper presents a review of thermal energy storage system design methodologies and the factors to be considered at different hierarchical levels for concentrating solar power (CSP) plants. Thermal energy storage forms a key component of a power plant for improvement of its dispatchability. 
Though there have been many reviews of storage media, there are not many that focus on storage system design along with its integration into the power plant. This paper discusses the thermal energy storage system designs presented in the literature along with thermal and exergy efficiency analyses of various thermal energy storage systems integrated into the power plant. Economic aspects of these systems and the relevant publications in literature are also summarized in this effort. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "aee250663a05106c4c0fad9d0f72828c", "text": "Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. Recently, discriminatively learned correlation filters (DCF) have been successfully applied to address this problem for tracking. These methods utilize a periodic assumption of the training samples to efficiently learn a classifier on all patches in the target neighborhood. However, the periodic assumption also introduces unwanted boundary effects, which severely degrade the quality of the tracking model. We propose Spatially Regularized Discriminative Correlation Filters (SRDCF) for tracking. A spatial regularization component is introduced in the learning to penalize correlation filter coefficients depending on their spatial location. Our SRDCF formulation allows the correlation filters to be learned on a significantly larger set of negative training samples, without corrupting the positive samples. We further propose an optimization strategy, based on the iterative Gauss-Seidel method, for efficient online learning of our SRDCF. Experiments are performed on four benchmark datasets: OTB-2013, ALOV++, OTB-2015, and VOT2014. Our approach achieves state-of-the-art results on all four datasets. On OTB-2013 and OTB-2015, we obtain an absolute gain of 8.0% and 8.2% respectively, in mean overlap precision, compared to the best existing trackers.", "title": "" }, { "docid": "9779a328b54e79a34191cec812ded633", "text": "We present a novel approach to computational modeling of social interactions based on modeling of essential social interaction predicates (ESIPs) such as joint attention and entrainment. Based on sound social psychological theory and methodology, we collect a new “Tower Game” dataset consisting of audio-visual capture of dyadic interactions labeled with the ESIPs. We expect this dataset to provide a new avenue for research in computational social interaction modeling. We propose a novel joint Discriminative Conditional Restricted Boltzmann Machine (DCRBM) model that combines a discriminative component with the generative power of CRBMs. Such a combination enables us to uncover actionable constituents of the ESIPs in two steps. First, we train the DCRBM model on the labeled data and get accurate (76%-49% across various ESIPs) detection of the predicates. Second, we exploit the generative capability of DCRBMs to activate the trained model so as to generate the lower-level data corresponding to the specific ESIP that closely matches the actual training data (with mean square error 0.01-0.1 for generating 100 frames). We are thus able to decompose the ESIPs into their constituent actionable behaviors. 
Such a purely computational determination of how to establish an ESIP such as engagement is unprecedented.", "title": "" }, { "docid": "e0f321777e64230f23a9e70f3c407872", "text": "Erev, Ert, and Roth organized three choice prediction competitions focused on three related choice tasks: one shot decisions from description (decisions under risk), one shot decisions from experience, and repeated decisions from experience. Each competition was based on two experimental datasets: An estimation dataset, and a competition dataset. The studies that generated the two datasets used the same methods and subject pool, and examined decision problems randomly selected from the same distribution. After collecting the experimental data to be used for estimation, the organizers posted them on the Web, together with their fit with several baseline models, and challenged other researchers to compete to predict the results of the second (competition) set of experimental sessions. Fourteen teams responded to the challenge: the last seven authors of this paper are members of the winning teams. The results highlight the robustness of the difference between decisions from description and decisions from experience. The best predictions of decisions from descriptions were obtained with a stochastic variant of prospect theory assuming that the sensitivity to the weighted values decreases with the distance between the cumulative payoff functions. The best predictions of decisions from experience were obtained with models that assume reliance on small samples. Merits and limitations of the competition method are discussed.", "title": "" }, { "docid": "98b4703412d1c8ccce22ea6fb05d73bf", "text": "Clinical evaluation of scapular dyskinesis (SD) aims to identify abnormal scapulothoracic movement, underlying causal factors, and the potential relationship with shoulder symptoms. The literature proposes different methods of dynamic clinical evaluation of SD, but improved reliability and agreement values are needed. The present study aimed to evaluate the intrarater and interrater agreement and reliability of three SD classifications: 1) 4-type classification, 2) Yes/No classification, and 3) scapular dyskinesis test (SDT). Seventy-five young athletes, including 45 men and 30 women, were evaluated. Raters evaluated the SD based on the three methods during one series of 8-10 cycles (at least eight and maximum of ten) of forward flexion and abduction with an external load under the observation of two raters trained to diagnose SD. The evaluation protocol was repeated after 3 h for intrarater analysis. The agreement percentage was calculated by dividing the observed agreement by the total number of observations. Reliability was calculated using Cohen Kappa coefficient, with a 95% confidence interval (CI), defined by Kappa coefficient ±1.96 multiplied by the measurement standard error. The interrater analyses showed an agreement percentage between 80% and 95.9% and an almost perfect reliability (k>0.81) for the three classification methods in all the test conditions, except the 4-type and SDT classification methods, which had substantial reliability (k<0.80) in shoulder abduction. Intrarater analyses showed agreement percentages between 80.7% and 89.3% and substantial reliability (0.67 to 0.81) for both raters in the three classifications. CIs ranged from moderate to almost perfect categories. 
This indicates that the three SD classification methods investigated in this study showed high reliability values for both intrarater and interrater evaluation throughout a protocol that provided SD evaluation training of raters and included several repetitions of arm movements with external load during a live assessment.", "title": "" }, { "docid": "0cdf08bd9c2e63f0c9bb1dd7472a23a8", "text": "Under natural viewing conditions, human observers shift their gaze to allocate processing resources to subsets of the visual input. Many computational models try to predict such voluntary eye and attentional shifts. Although the important role of high level stimulus properties (e.g., semantic information) in search stands undisputed, most models are based on low-level image properties. We here demonstrate that a combined model of face detection and low-level saliency significantly outperforms a low-level model in predicting locations humans fixate on, based on eye-movement recordings of humans observing photographs of natural scenes, most of which contained at least one person. Observers, even when not instructed to look for anything particular, fixate on a face with a probability of over 80% within their first two fixations; furthermore, they exhibit more similar scanpaths when faces are present. Remarkably, our model’s predictive performance in images that do not contain faces is not impaired, and is even improved in some cases by spurious face detector responses.", "title": "" }, { "docid": "759bf80a33903899cb7f684aa277eddd", "text": "Effective patient similarity assessment is important for clinical decision support. It enables the capture of past experience as manifested in the collective longitudinal medical records of patients to help clinicians assess the likely outcomes resulting from their decisions and actions. However, it is challenging to devise a patient similarity metric that is clinically relevant and semantically sound. Patient similarity is highly context sensitive: it depends on factors such as the disease, the particular stage of the disease, and co-morbidities. One way to discern the semantics in a particular context is to take advantage of physicians’ expert knowledge as reflected in labels assigned to some patients. In this paper we present a method that leverages localized supervised metric learning to effectively incorporate such expert knowledge to arrive at semantically sound patient similarity measures. Experiments using data obtained from the MIMIC II database demonstrate the effectiveness of this approach.", "title": "" }, { "docid": "11212d5474184c1dc549c8cadc023e43", "text": "Videoconferencing is going to become attractive for geo-graphically distributed team collaboration, specifically to avoid travelling and to increase flexibility. Against this background this paper presents a next generation system - a 3D videoconference providing immersive tele-presence and natural representation of all participants in a shared virtual meeting space to enhance quality of human-centred communication. This system is based on the principle of a shared virtual table environment, which guarantees correct eye contact and gesture reproduction. The key features of our system are presented and compared to other approaches like tele-cubicles. 
Furthermore the current system design and details of the real-time hardware and software concept are explained.", "title": "" }, { "docid": "ca8c13c0a7d637234460f20caaa15df5", "text": "This paper presents a nonlinear control law for an automobile to autonomously track a trajectory, provided in real-time, on rapidly varying, off-road terrain. Existing methods can suffer from a lack of global stability, a lack of tracking accuracy, or a dependence on smooth road surfaces, any one of which could lead to the loss of the vehicle in autonomous off-road driving. This work treats automobile trajectory tracking in a new manner, by considering the orientation of the front wheels - not the vehicle's body - with respect to the desired trajectory, enabling collocated control of the system. A steering control law is designed using the kinematic equations of motion, for which global asymptotic stability is proven. This control law is then augmented to handle the dynamics of pneumatic tires and of the servo-actuated steering wheel. To control vehicle speed, the brake and throttle are actuated by a switching proportional integral (PI) controller. The complete control system consumes a negligible fraction of a computer's resources. It was implemented on a Volkswagen Touareg, \"Stanley\", the Stanford Racing Team's entry in the DARPA Grand Challenge 2005, a 132 mi autonomous off-road race. Experimental results from Stanley demonstrate the ability of the controller to track trajectories between obstacles, over steep and wavy terrain, through deep mud puddles, and along cliff edges, with a typical root mean square (RMS) crosstrack error of under 0.1 m. In the DARPA National Qualification Event 2005, Stanley was the only vehicle out of 40 competitors to not hit an obstacle or miss a gate, and in the DARPA Grand Challenge 2005 Stanley had the fastest course completion time.", "title": "" } ]
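Editorial aside (not part of the dataset record above): the last passage describes a steering law defined on the front-wheel frame for trajectory tracking. A minimal sketch of the commonly cited kinematic form of such a law is given below; the gain k, the steering saturation, and all variable names are assumptions made for illustration, not values taken from the cited work.

```python
import math

def stanley_steering(heading_error, crosstrack_error, speed,
                     k=2.5, max_steer=math.radians(25)):
    """Kinematic steering law of the form delta = psi_e + atan2(k * e, v).

    heading_error: angle between the front-wheel heading and the path tangent (rad)
    crosstrack_error: signed lateral offset of the front axle from the path (m)
    speed: forward speed (m/s); k and max_steer are illustrative values only
    """
    delta = heading_error + math.atan2(k * crosstrack_error, max(speed, 1e-3))
    return max(-max_steer, min(max_steer, delta))

# toy call: 0.05 rad heading error, 0.3 m of crosstrack error, at 5 m/s
print(stanley_steering(0.05, 0.3, 5.0))
```

A complete controller would add the tire and steering-servo dynamics and the speed-tracking loop the passage mentions; this sketch covers only the kinematic core.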
scidocsrr
67881ff8335ae3b578765af409ddf979
Mastering 2048 with Delayed Temporal Coherence Learning, Multi-Stage Weight Promotion, Redundant Encoding and Carousel Shaping
[ { "docid": "1dff8a1fae840411defec05db479040c", "text": "This paper investigates the use of n-tuple systems as position value functions for the game of Othello. The architecture is described, and then evaluated for use with temporal difference learning. Performance is compared with previously developed weighted piece counters and multi-layer perceptrons. The n-tuple system is able to defeat the best performing of these after just five hundred games of self-play learning. The conclusion is that n-tuple networks learn faster and better than the other more conventional approaches.", "title": "" } ]
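Editorial aside (not part of the dataset record above): the positive passage combines n-tuple value functions with temporal difference learning, the same combination behind this row's 2048 query. The sketch below is a hypothetical minimal illustration, not the cited architecture; the tuple layout, cell encoding, learning rate, and the use of sparse dictionaries as lookup tables are all assumptions.

```python
class NTupleNetwork:
    """Toy n-tuple value function: each tuple indexes a lookup table by the
    values of the board cells it covers, and the position value is the sum."""

    def __init__(self, tuples, n_cell_values):
        self.tuples = tuples                      # e.g. [(0, 1, 2, 3), (4, 5, 6, 7)]
        self.n_cell_values = n_cell_values        # distinct symbols a cell can hold
        self.weights = [dict() for _ in tuples]   # sparse LUTs, default weight 0.0

    def _index(self, board, cells):
        idx = 0
        for c in cells:
            idx = idx * self.n_cell_values + board[c]
        return idx

    def value(self, board):
        return sum(lut.get(self._index(board, cells), 0.0)
                   for lut, cells in zip(self.weights, self.tuples))

    def td_update(self, board, target, alpha=0.01):
        """TD-style update: move value(board) toward target, splitting the
        correction evenly across the active lookup-table entries."""
        error = target - self.value(board)
        for lut, cells in zip(self.weights, self.tuples):
            i = self._index(board, cells)
            lut[i] = lut.get(i, 0.0) + alpha * error / len(self.tuples)


# toy 4x4 board stored as a flat tuple of cell codes in 0..15
board = (0, 1, 2, 3,  0, 0, 1, 2,  0, 0, 0, 1,  0, 0, 0, 0)
net = NTupleNetwork(tuples=[(0, 1, 2, 3), (4, 5, 6, 7), (0, 4, 8, 12)],
                    n_cell_values=16)
net.td_update(board, target=1.0)
print(net.value(board))
```

Dictionaries keep the sketch small; a real implementation would typically use dense weight arrays of size n_cell_values ** len(cells) per tuple.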
[ { "docid": "a880d38d37862b46dc638b9a7e45b6ee", "text": "This paper presents the modeling, simulation, and analysis of the dynamic behavior of a fictitious 2 × 320 MW variable-speed pump-turbine power plant, including a hydraulic system, electrical equipment, rotating inertias, and control systems. The modeling of the hydraulic and electrical components of the power plant is presented. The dynamic performances of a control strategy in generating mode and one in pumping mode are investigated by the simulation of the complete models in the case of change of active power set points. Then, a pseudocontinuous model of the converters feeding the rotor circuits is described. Due to this simplification, the simulation time can be reduced drastically (approximately factor 60). A first validation of the simplified model of the converters is obtained by comparison of the simulated results coming from the simplified and complete models for different modes of operation of the power plant. Experimental results performed on a 2.2-kW low-power test bench are also compared with the simulated results coming from both complete and simplified models related to this case and confirm the validity of the proposed simplified approach for the converters.", "title": "" }, { "docid": "50ec9d25a24e67481a4afc6a9519b83c", "text": "Weakly supervised image segmentation is an important yet challenging task in image processing and pattern recognition fields. It is defined as: in the training stage, semantic labels are only at the image-level, without regard to their specific object/scene location within the image. Given a test image, the goal is to predict the semantics of every pixel/superpixel. In this paper, we propose a new weakly supervised image segmentation model, focusing on learning the semantic associations between superpixel sets (graphlets in this paper). In particular, we first extract graphlets from each image, where a graphlet is a small-sized graph measures the potential of multiple spatially neighboring superpixels (i.e., the probability of these superpixels sharing a common semantic label, such as the sky or the sea). To compare different-sized graphlets and to incorporate image-level labels, a manifold embedding algorithm is designed to transform all graphlets into equal-length feature vectors. Finally, we present a hierarchical Bayesian network to capture the semantic associations between postembedding graphlets, based on which the semantics of each superpixel is inferred accordingly. Experimental results demonstrate that: 1) our approach performs competitively compared with the state-of-the-art approaches on three public data sets and 2) considerable performance enhancement is achieved when using our approach on segmentation-based photo cropping and image categorization.", "title": "" }, { "docid": "ed0b269f861775550edd83b1eb420190", "text": "The continuous innovation process of the Information and Communication Technology (ICT) sector shape the way businesses redefine their business models. Though, current drivers of innovation processes focus solely on a technical dimension, while disregarding social and environmental drivers. However, examples like Nokia, Yahoo or Hewlett-Packard show that even though a profitable business model exists, a sound strategic innovation process is needed to remain profitable in the long term. A sustainable business model innovation demands the incorporation of all dimensions of the triple bottom line. 
Nevertheless, current management processes do not take the responsible steps to remain sustainable and keep being in denial of the evolutionary direction in which the markets develop, because the effects are not visible in short term. The implications are of substantial effect and can bring the foundation of the company’s business model in danger. This work evaluates the decision process that lets businesses decide in favor of un-sustainable changes and points out the barriers that prevent the development towards a sustainable business model that takes the new balance of forces into account.", "title": "" }, { "docid": "98b1965e232cce186b9be4d7ce946329", "text": "Currently existing dynamic models for a two-wheeled inverted pendulum mobile robot have some common mistakes. In order to find where the errors of the dynamic model are induced, Lagrangian method and Kane's method are compared in deriving the equation of motion. Numerical examples are given to illustrate the effect of the incorrect terms. Finally, a complete dynamic model is proposed without any error and missing terms.", "title": "" }, { "docid": "673c0d74b0df4cfe698d1a7397fc1365", "text": "The intense growth of Internet of Things (IoTs), its multidisciplinary nature and broadcasting communication pattern made it very challenging for research community/domain. Operating systems for IoTs plays vital role in this regard. Through this research contribution, the objective is to present an analytical study on the recent developments on operating systems specifically designed or fulfilled the needs of IoTs. Starting from study and advances in the field of IoTs with focus on existing operating systems specifically for IoTs. Finally the existing operating systems for IoTs are evaluated and compared on some set criteria and facts and findings are presented.", "title": "" }, { "docid": "5980e6111c145db3e1bfc5f47df7ceaf", "text": "Traffic signs are characterized by a wide variability in their visual appearance in real-world environments. For example, changes of illumination, varying weather conditions and partial occlusions impact the perception of road signs. In practice, a large number of different sign classes needs to be recognized with very high accuracy. Traffic signs have been designed to be easily readable for humans, who perform very well at this task. For computer systems, however, classifying traffic signs still seems to pose a challenging pattern recognition problem. Both image processing and machine learning algorithms are continuously refined to improve on this task. But little systematic comparison of such systems exist. What is the status quo? Do today's algorithms reach human performance? For assessing the performance of state-of-the-art machine learning algorithms, we present a publicly available traffic sign dataset with more than 50,000 images of German road signs in 43 classes. The data was considered in the second stage of the German Traffic Sign Recognition Benchmark held at IJCNN 2011. The results of this competition are reported and the best-performing algorithms are briefly described. Convolutional neural networks (CNNs) showed particularly high classification accuracies in the competition. We measured the performance of human subjects on the same data-and the CNNs outperformed the human test persons.", "title": "" }, { "docid": "dbc3355eb2b88432a4bd21d42c090ef1", "text": "With advancement of technology things are becoming simpler and easier for us. Automatic systems are being preferred over manual system. 
This unit talks about the basic definitions needed to understand the Project better and further defines the technical criteria to be implemented as a part of this project. Keywords-component; Automation, 8051 microcontroller, LDR, LED, ADC, Relays, LCD display, Sensors, Stepper motor", "title": "" }, { "docid": "9ae6f2f858bf613760718688be947c55", "text": "We propose a neural multi-document summarization (MDS) system that incorporates sentence relation graphs. We employ a Graph Convolutional Network (GCN) on the relation graphs, with sentence embeddings obtained from Recurrent Neural Networks as input node features. Through multiple layer-wise propagation, the GCN generates high-level hidden sentence features for salience estimation. We then use a greedy heuristic to extract salient sentences while avoiding redundancy. In our experiments on DUC 2004, we consider three types of sentence relation graphs and demonstrate the advantage of combining sentence relations in graphs with the representation power of deep neural networks. Our model improves upon traditional graph-based extractive approaches and the vanilla GRU sequence model with no graph, and it achieves competitive results against other state-of-the-art multidocument summarization systems.", "title": "" }, { "docid": "1726729c32f43917802b902267769dda", "text": "The creation of micro air vehicles (MAVs) of the same general sizes and weight as natural fliers has spawned renewed interest in flapping wing flight. With a wingspan of approximately 15 cm and a flight speed of a few meters per second, MAVs experience the same low Reynolds number (10–10) flight conditions as their biological counterparts. In this flow regime, rigid fixed wings drop dramatically in aerodynamic performance while flexible flapping wings gain efficacy and are the preferred propulsion method for small natural fliers. Researchers have long realized that steady-state aerodynamics does not properly capture the physical phenomena or forces present in flapping flight at this scale. Hence, unsteady flow mechanisms must dominate this regime. Furthermore, due to the low flight speeds, any disturbance such as gusts or wind will dramatically change the aerodynamic conditions around the MAV. In response, a suitable feedback control system and actuation technology must be developed so that the wing can maintain its aerodynamic efficiency in this extremely dynamic situation; one where the unsteady separated flow field and wing structure are tightly coupled and interact nonlinearly. For instance, birds and bats control their flexible wings with muscle tissue to successfully deal with rapid changes in the flow environment. Drawing from their example, perhaps MAVs can use lightweight actuators in conjunction with adaptive feedback control to shape the wing and achieve active flow control. This article first reviews the scaling laws and unsteady flow regime constraining both biological and man-made fliers. Then a summary of vortex dominated unsteady aerodynamics follows. Next, aeroelastic coupling and its effect on lift and thrust are discussed. Afterwards, flow control strategies found in nature and devised by man to deal with separated flows are examined. Recent work is also presented in using microelectromechanical systems (MEMS) actuators and angular speed variation to achieve active flow control for MAVs. 
Finally, an explanation for aerodynamic gains seen in flexible versus rigid membrane wings, derived from an unsteady three-dimensional computational fluid dynamics model with an integrated distributed control algorithm, is presented. r 2003 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "e59eec639d7104a5038eaaefa69edd95", "text": "Learning the embedding for social media data has attracted extensive research interests as well as boomed a lot of applications, such as classification and link prediction. In this paper, we examine the scenario of a multimodal network with nodes containing multimodal contents and connected by heterogeneous relationships, such as social images containing multimodal contents (e.g., visual content and text description), and linked with various forms (e.g., in the same album or with the same tag). However, given the multimodal network, simply learning the embedding from the network structure or a subset of content results in sub-optimal representation. In this paper, we propose a novel deep embedding method, i.e., Attention-based Multi-view Variational Auto-Encoder (AMVAE), to incorporate both the link information and the multimodal contents for more effective and efficient embedding. Specifically, we adopt LSTM with attention model to learn the correlation between different data modalities, such as the correlation between visual regions and the specific words, to obtain the semantic embedding of the multimodal contents. Then, the link information and the semantic embedding are considered as two correlated views. A multi-view correlation learning based Variational Auto-Encoder (VAE) is proposed to learn the representation of each node, in which the embedding of link information and multimodal contents are integrated and mutually reinforced. Experiments on three real-world datasets demonstrate the superiority of the proposed model in two applications, i.e., multi-label classification and link prediction.", "title": "" }, { "docid": "a9242c3fca5a8ffdf0e03776b8165074", "text": "This paper presents inexpensive computer vision techniques allowing to measure the texture characteristics of woven fabric, such as weave repeat and yarn counts, and the surface roughness. First, we discuss the automatic recognition of weave pattern and the accurate measurement of yarn counts by analyzing fabric sample images. We propose a surface roughness indicator FDFFT, which is the 3-D surface fractal dimension measurement calculated from the 2-D fast Fourier transform of high-resolution 3-D surface scan. The proposed weave pattern recognition method was validated by using computer-simulated woven samples and real woven fabric images. All weave patterns of the tested fabric samples were successfully recognized, and computed yarn counts were consistent with the manual counts. The rotation invariance and scale invariance of FDFFT were validated with fractal Brownian images. Moreover, to evaluate the correctness of FDFFT, we provide a method of calculating standard roughness parameters from the 3-D fabric surface. According to the test results, we demonstrated that FDFFT is a fast and reliable parameter for fabric roughness measurement based on 3-D surface data.", "title": "" }, { "docid": "24ecf1119592cc5496dc4994d463eabe", "text": "To improve data availability and resilience MapReduce frameworks use file systems that replicate data uniformly. However, analysis of job logs from a large production cluster shows wide disparity in data popularity. 
Machines and racks storing popular content become bottlenecks; thereby increasing the completion times of jobs accessing this data even when there are machines with spare cycles in the cluster. To address this problem, we present Scarlett, a system that replicates blocks based on their popularity. By accurately predicting file popularity and working within hard bounds on additional storage, Scarlett causes minimal interference to running jobs. Trace driven simulations and experiments in two popular MapReduce frameworks (Hadoop, Dryad) show that Scarlett effectively alleviates hotspots and can speed up jobs by 20.2%.", "title": "" }, { "docid": "105951b58d594fdb3a07e1adbb76dc5f", "text": "The “Prediction by Partial Matching” (PPM) data compression algorithm developed by Cleary and Witten is capable of very high compression rates, encoding English text in as little as 2.2 bits/character. Here it is shown that the estimates made by Cleary and Witten of the resources required to implement the scheme can be revised to allow for a tractable and useful implementation. In particular, a variant is described that encodes and decodes at over 4 kbytes/s on a small workstation, and operates within a few hundred kilobytes of data space, but still obtains compression of about 2.4 bits/character on", "title": "" }, { "docid": "c2c994664e3aecff1ccb8d8feaf860e9", "text": "Hazard zones associated with LNG handling activities have been a major point of contention in recent terminal development applications. Debate has reflected primarily worst case scenarios and discussion of these. This paper presents results from a maximum credible event approach. A comparison of results from several models either run by the authors or reported in the literature is presented. While larger scale experimental trials will be necessary to reduce the uncertainty, in the interim a set of base cases are suggested covering both existing trials and credible and worst case events is proposed. This can assist users to assess the degree of conservatism present in quoted modeling approaches and model selections.", "title": "" }, { "docid": "0a2be958c7323d3421304d1613421251", "text": "Stock price forecasting has aroused great concern in research of economy, machine learning and other fields. Time series analysis methods are usually utilized to deal with this task. In this paper, we propose to combine news mining and time series analysis to forecast inter-day stock prices. News reports are automatically analyzed with text mining techniques, and then the mining results are used to improve the accuracy of time series analysis algorithms. The experimental result on a half year Chinese stock market data indicates that the proposed algorithm can help to improve the performance of normal time series analysis in stock price forecasting significantly. Moreover, the proposed algorithm also performs well in stock price trend forecasting.", "title": "" }, { "docid": "e1ed9d36e7b84ce7dcc74ac5f684ea76", "text": "As integrated circuits (ICs) continue to have an overwhelming presence in our digital information-dominated world, having trust in their manufacture and distribution mechanisms is crucial. However, with ever-shrinking transistor technologies, the cost of new fabrication facilities is becoming prohibitive, pushing industry to make greater use of potentially less reliable foreign sources for their IC supply. 
The 2008 Computer Security Awareness Week (CSAW) Embedded Systems Challenge at the Polytechnic Institute of NYU highlighted some of the vulnerabilities of the IC supply chain in the form of a hardware hacking challenge. This paper explores the design and implementation of our winning entry.", "title": "" }, { "docid": "051fc43d9e32d8b9d8096838b53c47cb", "text": "Median filtering is a cornerstone of modern image processing and is used extensively in smoothing and de-noising applications. The fastest commercial implementations (e.g. in Adobe® Photoshop® CS2) exhibit O(r) runtime in the radius of the filter, which limits their usefulness in realtime or resolution-independent contexts. We introduce a CPU-based, vectorizable O(log r) algorithm for median filtering, to our knowledge the most efficient yet developed. Our algorithm extends to images of any bit-depth, and can also be adapted to perform bilateral filtering. On 8-bit data our median filter outperforms Photoshop's implementation by up to a factor of fifty.", "title": "" }, { "docid": "b41b14ed0091a06072629be78bec090b", "text": "The 2-D orthogonal wavelet transform decomposes images into both spatial and spectrally local coefficients. The transformed coefficients were coded hierarchically and individually quantized in accordance with the local estimated noise sensitivity of the human visual system (HVS). The algorithm can be mapped easily onto VLSI. For the Miss America and Lena monochrome images, the technique gave high to acceptable quality reconstruction at compression ratios of 0.3-0.2 and 0.64-0.43 bits per pixel (bpp), respectively.", "title": "" }, { "docid": "c55de58c07352373570ec7d46c5df03d", "text": "Understanding human-object interactions is critical for extracting meaning from everyday visual scenes and requires integrating complex relationships between human pose and object identity into a new percept. To understand how the brain builds these representations, we conducted 2 fMRI experiments in which subjects viewed humans interacting with objects, noninteracting human-object pairs, and isolated humans and objects. A number of visual regions process features of human-object interactions, including object identity information in the lateral occipital complex (LOC) and parahippocampal place area (PPA), and human pose information in the extrastriate body area (EBA) and posterior superior temporal sulcus (pSTS). Representations of human-object interactions in some regions, such as the posterior PPA (retinotopic maps PHC1 and PHC2) are well predicted by a simple linear combination of the response to object and pose information. Other regions, however, especially pSTS, exhibit representations for human-object interaction categories that are not predicted by their individual components, indicating that they encode human-object interactions as more than the sum of their parts. These results reveal the distributed networks underlying the emergent representation of human-object interactions necessary for social perception.", "title": "" }, { "docid": "be5b0dd659434e77ce47034a51fd2767", "text": "Current obstacles in the study of social media marketing include dealing with massive data and real-time updates have motivated to contribute solutions that can be adopted for viral marketing. Since information diffusion and social networks are the core of viral marketing, this article aims to investigate the constellation of diffusion methods for viral marketing. 
Studies on diffusion methods for viral marketing have applied different computational methods, but a systematic investigation of these methods has been limited. Most of the literature has focused on achieving objectives such as influence maximization or community detection. Therefore, this article aims to conduct an in-depth review of works related to diffusion for viral marketing. Viral marketing has been applied to business-to-consumer transactions but has seen limited adoption in business-to-business transactions. The literature review reveals a lack of new diffusion methods, especially in dynamic and large-scale networks. It also offers insights into applying various mining methods for viral marketing. It discusses some of the challenges, limitations, and future research directions of information diffusion for viral marketing. The article also introduces a viral marketing information diffusion model. The proposed model attempts to address the dynamicity and large-scale data of social networks by adopting incremental clustering and a stochastic differential equation for business-to-business transactions. Keywords—information diffusion; viral marketing; social media marketing; social networks", "title": "" } ]
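Editorial aside (not part of the dataset record above): the preceding passage surveys diffusion methods for viral marketing. As a purely illustrative sketch of one standard diffusion model that such surveys cover, an independent cascade simulation might look like the following; the toy graph, seed set, and uniform activation probability are assumptions, not anything taken from the cited article.

```python
import random

def independent_cascade(graph, seeds, p=0.1, rng=None):
    """One run of the independent cascade diffusion model.

    graph: dict mapping node -> list of neighbours
    seeds: initially active nodes
    p: illustrative uniform activation probability (a real study would estimate it)
    """
    rng = rng or random.Random(0)
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        newly_active = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    newly_active.append(v)
        frontier = newly_active
    return active

# toy follower graph with a single seed node
toy_graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(independent_cascade(toy_graph, seeds=["a"], p=0.5))
```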
scidocsrr
a10a30da37c030f4a51a82b422fadcd7
Code Design for Short Blocks: A Survey
[ { "docid": "545adbeb802c7f8a70390ecf424e7f58", "text": "We describe a successive-cancellation list decoder for polar codes, which is a generalization of the classic successive-cancellation decoder of Arikan. In the proposed list decoder, up to L decoding paths are considered concurrently at each decoding stage. Simulation results show that the resulting performance is very close to that of a maximum-likelihood decoder, even for moderate values of L. Thus it appears that the proposed list decoder bridges the gap between successive-cancellation and maximum-likelihood decoding of polar codes. The specific list-decoding algorithm that achieves this performance doubles the number of decoding paths at each decoding step, and then uses a pruning procedure to discard all but the L “best” paths. In order to implement this algorithm, we introduce a natural pruning criterion that can be easily evaluated. Nevertheless, straightforward implementation still requires O(L · n²) time, which is in stark contrast with the O(n log n) complexity of the original successive-cancellation decoder. We utilize the structure of polar codes to overcome this problem. Specifically, we devise an efficient, numerically stable, implementation taking only O(L · n log n) time and O(L · n) space.", "title": "" } ]
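Editorial aside (not part of the dataset record above): the positive passage describes successive-cancellation list decoding of polar codes. A faithful list decoder is too long to reproduce here, so the sketch below only shows the recursive polar (Arikan) transform that the encoder and such decoders build on; the block length, frozen-bit mask, and function names are assumptions made for this illustration rather than anything from the cited paper.

```python
import numpy as np

def polar_transform(u):
    """Apply x = u * (n-fold Kronecker power of F) over GF(2), F = [[1, 0], [1, 1]],
    without the bit-reversal permutation. u: numpy array of 0/1 bits, length a power of two."""
    x = u.copy()
    n = x.size
    step = 1
    while step < n:
        for start in range(0, n, 2 * step):
            # butterfly: the upper half absorbs the lower half, (a, b) -> (a xor b, b)
            x[start:start + step] ^= x[start + step:start + 2 * step]
        step *= 2
    return x

def polar_encode(message_bits, frozen_mask):
    """Place message bits on the unfrozen positions (frozen positions stay 0),
    then apply the transform. frozen_mask: boolean numpy array of length N."""
    u = np.zeros(frozen_mask.size, dtype=np.uint8)
    u[~frozen_mask] = np.asarray(message_bits, dtype=np.uint8)
    return polar_transform(u)

# toy (8, 4) code with an assumed frozen set; a real design would pick it by channel reliability
frozen = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)
print(polar_encode([1, 0, 1, 1], frozen))
```

The list decoder in the passage runs the successive-cancellation recursion over up to L copies of this structure, doubling the candidate paths at each information bit and pruning back to the L best.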
[ { "docid": "0d8c38444954a0003117e7334195cb00", "text": "Although mature technologies exist for acquiring images, geometry, and normals of small objects, they remain cumbersome and time-consuming for non-experts to employ on a large scale. In an archaeological setting, a practical acquisition system for routine use on every artifact and fragment would open new possibilities for archiving, analysis, and dissemination. We present an inexpensive system for acquiring all three types of information, and associated metadata, for small objects such as fragments of wall paintings. The acquisition system requires minimal supervision, so that a single, non-expert user can scan at least 10 fragments per hour. To achieve this performance, we introduce new algorithms to robustly and automatically align range scans, register 2-D scans to 3-D geometry, and compute normals from 2-D scans. As an illustrative application, we present a novel 3-D matching algorithm that efficiently searches for matching fragments using the scanned geometry.", "title": "" }, { "docid": "f6fc0992624fd3b3e0ce7cc7fc411154", "text": "Digital currencies are a globally spreading phenomenon that is frequently and also prominently addressed by media, venture capitalists, financial and governmental institutions alike. As exchange prices for Bitcoin have reached multiple peaks within 2013, we pose a prevailing and yet academically unaddressed question: What are users' intentions when changing their domestic into a digital currency? In particular, this paper aims at giving empirical insights on whether users’ interest regarding digital currencies is driven by its appeal as an asset or as a currency. Based on our evaluation, we find strong indications that especially uninformed users approaching digital currencies are not primarily interested in an alternative transaction system but seek to participate in an alternative investment vehicle.", "title": "" }, { "docid": "fd1e327327068a1373e35270ef257c59", "text": "We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a deep sparse autoencoder on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200×200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting from these learned features, we trained our network to recognize 22,000 object categories from ImageNet and achieve a leap of 70% relative improvement over the previous state-of-the-art.", "title": "" }, { "docid": "28530d3d388edc5d214a94d70ad7f2c3", "text": "In next generation wireless mobile networks, network virtualization will become an important key technology. In this paper, we firstly propose a resource allocation scheme for enabling efficient resource allocation in wireless network virtualization. 
Then, we formulate the resource allocation strategy as an optimization problem, considering not only the revenue earned by serving end users of virtual networks, but also the cost of leasing infrastructure from infrastructure providers. In addition, we develop an efficient alternating direction method of multipliers (ADMM)-based distributed virtual resource allocation algorithm in virtualized wireless networks. Simulation results are presented to show the effectiveness of the proposed scheme.", "title": "" }, { "docid": "26011dba6cc608e599f8393b2d2fc8be", "text": "Connections between relations in relation extraction, which we call class ties, are common. In distantly supervised scenario, one entity tuple may have multiple relation facts. Exploiting class ties between relations of one entity tuple will be promising for distantly supervised relation extraction. However, previous models are not effective or ignore to model this property. In this work, to effectively leverage class ties, we propose to make joint relation extraction with a unified model that integrates convolutional neural network with a general pairwise ranking framework, in which two novel ranking loss functions are introduced. Additionally, an effective method is presented to relieve the impact of NR (not relation) for model training, which significantly boosts our model performance. Experiments on a widely used dataset show that leveraging class ties will enhance extraction and demonstrate that our model is effective to learn class ties. Our model outperforms baselines significantly, achieving state-of-the-art performance. The source code of this paper can be obtained from https://github.com/ yehaibuaa/DS_RE_DeepRanking.", "title": "" }, { "docid": "012bcbc6b5e7b8aaafd03f100489961c", "text": "DNA is an attractive medium to store digital information. Here we report a storage strategy, called DNA Fountain, that is highly robust and approaches the information capacity per nucleotide. Using our approach, we stored a full computer operating system, movie, and other files with a total of 2.14 × 106 bytes in DNA oligonucleotides and perfectly retrieved the information from a sequencing coverage equivalent to a single tile of Illumina sequencing. We also tested a process that can allow 2.18 × 1015 retrievals using the original DNA sample and were able to perfectly decode the data. Finally, we explored the limit of our architecture in terms of bytes per molecule and obtained a perfect retrieval from a density of 215 petabytes per gram of DNA, orders of magnitude higher than previous reports.", "title": "" }, { "docid": "350cda71dae32245b45d96b5fdd37731", "text": "In this work, we focus on cyclic codes over the ring F2+uF2+vF2+uvF2, which is not a finite chain ring. We use ideas from group rings and works of AbuAlrub et al. in (Des Codes Crypt 42:273–287, 2007) to characterize the ring (F2 + uF2 + vF2 + uvF2)/(x − 1) and cyclic codes of odd length. Some good binary codes are obtained as the images of cyclic codes over F2+uF2+vF2+uvF2 under two Gray maps that are defined. We also characterize the binary images of cyclic codes over F2 + uF2 + vF2 + uvF2 in general.", "title": "" }, { "docid": "faf53f190fe226ce14f32f9d44d551b5", "text": "We present a study of how Linux kernel developers respond to bug reports issued by a static analysis tool. 
We found that developers prefer to triage reports in younger, smaller, and more actively-maintained files ( §2), first address easy-to-fix bugs and defer difficult (but possibly critical) bugs ( §3), and triage bugs in batches rather than individually (§4). Also, although automated tools cannot find many types of bugs, they can be effective at directing developers’ attentions towards parts of the codebase that contain up to 3X more user-reported bugs ( §5). Our insights into developer attitudes towards static analysis tools allow us to make suggestions for improving their usability and effectiveness. We feel that it could be effective to run static analysis tools continuously while programming and before committing code, to rank reports so that those most likely to be triaged are shown to developers first, to show the easiest reports to new developers, to perform deeper analysis on more actively-maintained code, and to use reports as indirect indicators of code quality and importance.", "title": "" }, { "docid": "4e6ff17d33aceaa63ec156fc90aed2ce", "text": "Objective:\nThe aim of the present study was to translate and cross-culturally adapt the Functional Status Score for the intensive care unit (FSS-ICU) into Brazilian Portuguese.\n\n\nMethods:\nThis study consisted of the following steps: translation (performed by two independent translators), synthesis of the initial translation, back-translation (by two independent translators who were unaware of the original FSS-ICU), and testing to evaluate the target audience's understanding. An Expert Committee supervised all steps and was responsible for the modifications made throughout the process and the final translated version.\n\n\nResults:\nThe testing phase included two experienced physiotherapists who assessed a total of 30 critical care patients (mean FSS-ICU score = 25 ± 6). As the physiotherapists did not report any uncertainties or problems with interpretation affecting their performance, no additional adjustments were made to the Brazilian Portuguese version after the testing phase. Good interobserver reliability between the two assessors was obtained for each of the 5 FSS-ICU tasks and for the total FSS-ICU score (intraclass correlation coefficients ranged from 0.88 to 0.91).\n\n\nConclusion:\nThe adapted version of the FSS-ICU in Brazilian Portuguese was easy to understand and apply in an intensive care unit environment.", "title": "" }, { "docid": "8e0badc0828019460da0017774c8b631", "text": "To meet the explosive growth in traffic during the next twenty years, 5G systems using local area networks need to be developed. These systems will comprise of small cells and will use extreme cell densification. The use of millimeter wave (Mmwave) frequencies, in particular from 20 GHz to 90 GHz, will revolutionize wireless communications given the extreme amount of available bandwidth. However, the different propagation conditions and hardware constraints of Mmwave (e.g., the use of RF beamforming with very large arrays) require reconsidering the modulation methods for Mmwave compared to those used below 6 GHz. In this paper we present ray-tracing results, which, along with recent propagation measurements at Mmwave, all point to the fact that Mmwave frequencies are very appropriate for next generation, 5G, local area wireless communication systems. Next, we propose null cyclic prefix single carrier as the best candidate for Mmwave communications. 
Finally, system-level simulation results show that with the right access point deployment peak rates of over 15 Gbps are possible at Mmwave along with a cell edge experience in excess of 400 Mbps.", "title": "" }, { "docid": "2bc86a02909f16ad0372a36dd92c954c", "text": "Multi-view learning is an emerging direction in machine learning which considers learning with multiple views to improve the generalization performance. Multi-view learning is also known as data fusion or data integration from multiple feature sets. Since the last survey of multi-view machine learning in early 2013, multi-view learning has made great progress and developments in recent years, and is facing new challenges. This overview first reviews theoretical underpinnings to understand the properties and behaviors of multi-view learning. Then multi-view learning methods are described in terms of three classes to offer a neat categorization and organization. For each category, representative algorithms and newly proposed algorithms are presented. The main feature of this survey is that we provide comprehensive introduction for the recent developments of multi-view learning methods on the basis of coherence with early methods. We also attempt to identify promising venues and point out some specific challenges which can hopefully promote further research in this rapidly developing field. © 2017 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "95333e4206a3b4c1a576f452c591421f", "text": "Given a set of observations generated by an optimization process, the goal of inverse optimization is to determine likely parameters of that process. We cast inverse optimization as a form of deep learning. Our method, called deep inverse optimization, is to unroll an iterative optimization process and then use backpropagation to learn parameters that generate the observations. We demonstrate that by backpropagating through the interior point algorithm we can learn the coefficients determining the cost vector and the constraints, independently or jointly, for both non-parametric and parametric linear programs, starting from one or multiple observations. With this approach, inverse optimization can leverage concepts and algorithms from deep learning.", "title": "" }, { "docid": "c74b93fff768f024b921fac7f192102d", "text": "Motivated by information-theoretic considerations, we propose a signalling scheme, unitary space-time modulation, for multiple-antenna communication links. This modulation is ideally suited for Rayleigh fast-fading environments, since it does not require the receiver to know or learn the propagation coefficients. Unitary space-time modulation uses constellations of T × M space-time signals {Φℓ; ℓ = 1, …, L}, where T represents the coherence interval during which the fading is approximately constant, and M < T is the number of transmitter antennas. The columns of each Φℓ are orthonormal. When the receiver does not know the propagation coefficients, which between pairs of transmitter and receiver antennas are modeled as statistically independent, this modulation performs very well either when the SNR is high or when T ≫ M. We design some multiple-antenna signal constellations and simulate their effectiveness as measured by bit error probability with maximum likelihood decoding. We demonstrate that two antennas have a 6 dB diversity gain over one antenna at 15 dB SNR.
Index Terms —Multi-element antenna arrays, wireless communications, channel coding, fading channels, transmitter and receiver diversity, space-time modu lation", "title": "" }, { "docid": "7fc6ffb547bc7a96e360773ce04b2687", "text": "Most probabilistic inference algorithms are specified and processed on a propositional level. In the last decade, many proposals for algorithms accepting first-order specifications have been presented, but in the inference stage they still operate on a mostly propositional representation level. [Poole, 2003] presented a method to perform inference directly on the first-order level, but this method is limited to special cases. In this paper we present the first exact inference algorithm that operates directly on a first-order level, and that can be applied to any first-order model (specified in a language that generalizes undirected graphical models). Our experiments show superior performance in comparison with propositional exact inference.", "title": "" }, { "docid": "5b149ce093d0e546a3e99f92ef1608a0", "text": "Smartphones have been becoming ubiquitous and mobile users are increasingly relying on them to store and handle personal information. However, recent studies also reveal the disturbing fact that users’ personal information is put at risk by (rogue) smartphone applications. Existing solutions exhibit limitations in their capabilities in taming these privacy-violating smartphone applications. In this paper, we argue for the need of a new privacy mode in smartphones. The privacy mode can empower users to flexibly control in a fine-grained manner what kinds of personal information will be accessible to an application. Also, the granted access can be dynamically adjusted at runtime in a fine-grained manner to better suit a user’s needs in various scenarios (e.g., in a different time or location). We have developed a system called TISSA that implements such a privacy mode on Android. The evaluation with more than a dozen of information-leaking Android applications demonstrates its effectiveness and practicality. Furthermore, our evaluation shows that TISSA introduces negligible performance overhead.", "title": "" }, { "docid": "cfebffcb4f0d082e7733c7c92c4a1700", "text": "While attacks on information systems have for most practical purposes binary outcomes (information was manipulated/eavesdropped, or not), attacks manipulating the sensor or control signals of Industrial Control Systems (ICS) can be tuned by the attacker to cause a continuous spectrum in damages. Attackers that want to remain undetected can attempt to hide their manipulation of the system by following closely the expected behavior of the system, while injecting just enough false information at each time step to achieve their goals. In this work, we study if attack-detection can limit the impact of such stealthy attacks. We start with a comprehensive review of related work on attack detection schemes in the security and control systems community. We then show that many of those works use detection schemes that are not limiting the impact of stealthy attacks. We propose a new metric to measure the impact of stealthy attacks and how they relate to our selection on an upper bound on false alarms. We finally show that the impact of such attacks can be mitigated in several cases by the proper combination and configuration of detection schemes. 
We demonstrate the effectiveness of our algorithms through simulations and experiments using real ICS testbeds and real ICS systems.", "title": "" }, { "docid": "8324dc0dfcfb845739a22fb9321d5482", "text": "In this paper, we study deep generative models for effective unsupervised learning. We propose VGAN, which works by minimizing a variational lower bound of the negative log likelihood (NLL) of an energy based model (EBM), where the model density p(x) is approximated by a variational distribution q(x) that is easy to sample from. The training of VGAN takes a two step procedure: given p(x), q(x) is updated to maximize the lower bound; p(x) is then updated one step with samples drawn from q(x) to decrease the lower bound. VGAN is inspired by the generative adversarial networks (GANs), where p(x) corresponds to the discriminator and q(x) corresponds to the generator, but with several notable differences. We hence name our model variational GANs (VGANs). VGAN provides a practical solution to training deep EBMs in high dimensional space, by eliminating the need of MCMC sampling. From this view, we are also able to identify causes to the difficulty of training GANs and propose viable solutions. 1", "title": "" }, { "docid": "73d4a47d4aba600b4a3bcad6f7f3588f", "text": "Humans can easily perform tasks that use vision and language jointly, such as describing a scene and answering questions about objects in the scene and how they are related. Image captioning and visual question & answer are two popular research tasks that have emerged from advances in deep learning and the availability of datasets that specifically address these problems. However recent work has shown that deep learning based solutions to these tasks are just as brittle as solutions for only vision or only natural language tasks. Image captioning is vulnerable to adversarial perturbations; novel objects, which are not described in training data, and contextual biases in training data can degrade performance in surprising ways. For these reasons, it is important to find ways in which general-purpose knowledge can guide connectionist models. We investigate challenges to integrate existing ontologies and knowledge bases with deep learning solutions, and possible approaches for overcoming such challenges. We focus on geo-referenced data such as geo-tagged images and videos that capture outdoor scenery. Geo-knowledge bases are domain specific knowledge bases that contain concepts and relations that describe geographic objects. This work proposes to increase the robustness of automatic scene description and inference by leveraging geo-knowledge bases along with the strengths of deep learning for visual object detection and classification.", "title": "" }, { "docid": "5aee510b62d8792a38044fc8c68a57e4", "text": "In this paper we present a novel method for jointly extracting beats and downbeats from audio signals. A recurrent neural network operating directly on magnitude spectrograms is used to model the metrical structure of the audio signals at multiple levels and provides an output feature that clearly distinguishes between beats and downbeats. A dynamic Bayesian network is then used to model bars of variable length and align the predicted beat and downbeat positions to the global best solution. 
We find that the proposed model achieves state-of-the-art performance on a wide range of different musical genres and styles.", "title": "" }, { "docid": "8f5ca5819dd28c686da78332add76fb0", "text": "The emerging Service-Oriented Computing (SOC) paradigm promises to enable businesses and organizations to collaborate in an unprecedented way by means of standard web services. To support rapid and dynamic composition of services in this paradigm, web services that meet requesters' functional requirements must be able to be located and bounded dynamically from a large and constantly changing number of service providers based on their Quality of Service (QoS). In order to enable quality-driven web service selection, we need an open, fair, dynamic and secure framework to evaluate the QoS of a vast number of web services. The fair computation and enforcing of QoS of web services should have minimal overhead but yet able to achieve sufficient trust by both service requesters and providers. In this paper, we presented our open, fair and dynamic QoS computation model for web services selection through implementation of and experimentation with a QoS registry in a hypothetical phone service provisioning market place application.", "title": "" } ]
scidocsrr
a088d1605607f8ef3ad4a542fe796746
A Comparative Analysis of MRI Brain Tumor Segmentation Technique
[ { "docid": "57384df0c477dca29d4a572af32a1871", "text": "In this paper, a simple algorithm for detecting the range and shape of tumor in brain MR Images is described. Generally, CT scan or MRI that is directed into intracranial cavity produces a complete image of brain. This image is visually examined by the physician for detection and diagnosis of brain tumor. To avoid that, this project uses computer aided method for segmentation (detection) of brain tumor based on the combination of two algorithms. This method allows the segmentation of tumor tissue with accuracy and reproducibility comparable to manual segmentation. In addition, it also reduces the time for analysis. At the end of the process the tumor is extracted from the MR image and its exact position and the shape also determined. The stage of the tumor is displayed based on the amount of area calculated from the cluster.", "title": "" }, { "docid": "ce21a811ea260699c18421d99221a9f2", "text": "Medical image processing is the most challenging and emerging field now a day’s processing of MRI images is one of the parts of this field. The quantitative analysis of MRI brain tumor allows obtaining useful key indicators of disease progression. This is a computer aided diagnosis systems for detecting malignant texture in biological study. This paper presents an approach in computer-aided diagnosis for early prediction of brain cancer using Texture features and neuro classification logic. This paper describes the proposed strategy for detection; extraction and classification of brain tumour from MRI scan images of brain; which incorporates segmentation and morphological functions which are the basic functions of image processing. Here we detect the tumour, segment the tumour and we calculate the area of the tumour. Severity of the disease can be known, through classes of brain tumour which is done through neuro fuzzy classifier and creating a user friendly environment using GUI in MATLAB. In this paper cases of 10 patients is taken and severity of disease is shown and different features of images are calculated.", "title": "" } ]
[ { "docid": "4240b62e2e78a65809bba386df94ae2a", "text": "This paper investigates the security of partial fingerprint-based authentication systems, especially when multiple fingerprints of a user are enrolled. A number of consumer electronic devices, such as smartphones, are beginning to incorporate fingerprint sensors for user authentication. The sensors embedded in these devices are generally small and the resulting images are, therefore, limited in size. To compensate for the limited size, these devices often acquire multiple partial impressions of a single finger during enrollment to ensure that at least one of them will successfully match with the image obtained from the user during authentication. Furthermore, in some cases, the user is allowed to enroll multiple fingers, and the impressions pertaining to multiple partial fingers are associated with the same identity (i.e., one user). A user is said to be successfully authenticated if the partial fingerprint obtained during authentication matches any one of the stored templates. This paper investigates the possibility of generating a “MasterPrint,” a synthetic or real partial fingerprint that serendipitously matches one or more of the stored templates for a significant number of users. Our preliminary results on an optical fingerprint data set and a capacitive fingerprint data set indicate that it is indeed possible to locate or generate partial fingerprints that can be used to impersonate a large number of users. In this regard, we expose a potential vulnerability of partial fingerprint-based authentication systems, especially when multiple impressions are enrolled per finger.", "title": "" }, { "docid": "f5964bb7a6bca95fae0c3f923d3165fc", "text": "The growing number of storage security breaches as well as the need to adhere to government regulations is driving the need for greater storage protection. However, there is the lack of a comprehensive process to designing storage protection solutions. Designing protection for storage systems is best done by utilizing proactive system engineering rather than reacting with ad hoc countermeasures to the latest attack du jour. The purpose of threat modeling is to organize system threats and vulnerabilities into general classes to be addressed with known storage protection techniques. Although there has been prior work on threat modeling primarily for software applications, to our knowledge this is the first attempt at domain-specific threat modeling for storage systems. We discuss protection challenges unique to storage systems and propose two different processes to creating a threat model for storage systems: one based on classical security principles Confidentiality, Integrity, Availability, Authentication, or CIAA) and another based on the Data Lifecycle Model. It is our hope that this initial work will start a discussion on how to better design and implement storage protection solutions against storage threats.", "title": "" }, { "docid": "f1e293b4b896547b17b5becb1e06cb47", "text": "Occupational therapy has been an invisible profession, largely because the public has had difficulty grasping the concept of occupation. The emergence of occupational science has the potential of improving this situation. Occupational science is firmly rooted in the founding ideas of occupational therapy. In the future, the nature of human occupation will be illuminated by the development of a basic theory of occupational science. 
Occupational science, through research and theory development, will guide the practice of occupational therapy. Applications of occupational science to the practice of pediatric occupational therapy are presented. Ultimately, occupational science will prepare pediatric occupational therapists to better meet the needs of parents and their children.", "title": "" }, { "docid": "0302c64038f1b632d127cc6468361fd3", "text": "As human brain activities, represented by EEG brainwave signals, are more confidential, sensitive, and hard to steal and replicate, they hold great promise to provide a far more secure biometric approach for user identification and authentication. In this study, we present an EEG-based biometric security framework. Specifically, we propose to reduce the noise level through ensemble averaging and low-pass filter, extract frequency features using wavelet packet decomposition, and perform classification based on an artificial neural network. We explicitly discuss four different scenarios to emulate different application cases in authentication. Experimental results show that: the classification rates of distinguishing one subject or a small group of individuals (e.g., authorized personnel) from others (e.g., unauthorized personnel) can reach around 90%. However, it is also shown that recognizing each individual subject from a large pool has the worst performance with a classification rate of less than 11%. The side-by-side method shows an improvement on identifying all the subjects with classification rates of around 40%. Our study lays a solid foundation for future investigation of innovative, brainwave-based biometric approaches.", "title": "" }, { "docid": "9164dab8c4c55882f8caecc587c32eb1", "text": "We suggest an approach to exploratory analysis of diverse types of spatiotemporal data with the use of clustering and interactive visual displays. We can apply the same generic clustering algorithm to different types of data owing to the separation of the process of grouping objects from the process of computing distances between the objects. In particular, we apply the densitybased clustering algorithm OPTICS to events (i.e. objects having spatial and temporal positions), trajectories of moving entities, and spatial distributions of events or moving entities in different time intervals. Distances are computed in a specific way for each type of objects; moreover, it may be useful to have several different distance functions for the same type of objects. Thus, multiple distance functions available for trajectories support different analysis tasks. We demonstrate the use of our approach by example of two datasets from the VAST Challenge 2008: evacuation traces (trajectories of moving entities) and landings and interdictions of migrant boats (events).", "title": "" }, { "docid": "e743bfe8c4f19f1f9a233106919c99a7", "text": "We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a data set of concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are labeled across a broad range of visual concepts including objects, parts, scenes, textures, materials, and colors. 
We use the proposed method to test the hypothesis that interpretability is an axis-independent property of the representation space, then we apply the method to compare the latent representations of various networks when trained to solve different classification problems. We further analyze the effect of training iterations, compare networks trained with different initializations, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power.", "title": "" }, { "docid": "3aaa2d625cddd46f1a7daddbb3e2b23d", "text": "Text summarization is the task of shortening text documents but retaining their overall meaning and information. A good summary should highlight the main concepts of any text document. Many statistical-based, location-based and linguistic-based techniques are available for text summarization. This paper has described a novel hybrid technique for automatic summarization of Punjabi text. Punjabi is an official language of Punjab State in India. There are very few linguistic resources available for Punjabi. The proposed summarization system is hybrid of conceptual-, statistical-, location- and linguistic-based features for Punjabi text. In this system, four new location-based features and two new statistical features (entropy measure and Z score) are used and results are very much encouraging. Support vector machine-based classifier is also used to classify Punjabi sentences into summary and non-summary sentences and to handle imbalanced data. Synthetic minority over-sampling technique is applied for over-sampling minority class data. Results of proposed system are compared with different baseline systems, and it is found that F score, Precision, Recall and ROUGE-2 score of our system are reasonably well as compared to other baseline systems. Moreover, summary quality of proposed system is comparable to the gold summary.", "title": "" }, { "docid": "7abe1fd1b0f2a89bf51447eaef7aa989", "text": "End users increasingly expect ubiquitous connectivity while on the move. With a variety of wireless access technologies available, we expect to always be connected to the technology that best matches our performance goals and price points. Meanwhile, sophisticated onboard units (OBUs) enable geolocation and complex computation in support of handover. In this paper, we present an overview of vertical handover techniques and propose an algorithm empowered by the IEEE 802.21 standard, which considers the particularities of the vehicular networks (VNs), the surrounding context, the application requirements, the user preferences, and the different available wireless networks [i.e., Wireless Fidelity (Wi-Fi), Worldwide Interoperability for Microwave Access (WiMAX), and Universal Mobile Telecommunications System (UMTS)] to improve users' quality of experience (QoE). Our results demonstrate that our approach, under the considered scenario, is able to meet application requirements while ensuring user preferences are also met.", "title": "" }, { "docid": "3cde70842ee80663cbdc04db6a871d46", "text": "Artificial perception, in the context of autonomous driving, is the process by which an intelligent system translates sensory data into an effective model of the environment surrounding a vehicle. 
In this paper, and considering data from a 3D-LIDAR mounted onboard an intelligent vehicle, a 3D perception system based on voxels and planes is proposed for ground modeling and obstacle detection in urban environments. The system, which incorporates time-dependent data, is composed of two main modules: (i) an effective ground surface estimation using a piecewise plane fitting algorithm and RANSAC-method, and (ii) a voxel-grid model for static and moving obstacles detection using discriminative analysis and ego-motion information. This perception system has direct application in safety systems for intelligent vehicles, particularly in collision avoidance and vulnerable road users detection, namely pedestrians and cyclists. Experiments, using point-cloud data from a Velodyne LIDAR and localization data from an Inertial Navigation System were conducted for both a quantitative and a qualitative assessment of the static/moving obstacle detection module and for the surface estimation approach. Reported results, from experiments using the KITTI database, demonstrate the applicability and efficiency of the proposed approach in urban scenarios.", "title": "" }, { "docid": "067364a5228ec9820fdf667bd8dbe460", "text": "— Autonomous vehicle navigation gains increasing importance in various growing application areas. In this paper we described a system it navigates the vehicle autonomously to its destination. This system provides a communication between vehicle and internet using GPRS modem. This system interfaced with OSRM open source map through internet. So we can decide the robot path from internet. In non-urban Domains such as deserts the problem of successful GPS-based navigation appears to be almost solved, navigation in urban domains particularly in the close vicinity of buildings is still a challenging problem. In such situations GPS accuracy significantly drops down due to unavailability of GPS signal. This project also improves the efficiency in navigation. This system not only relay on GPS. To improve the efficiency it uses location information from inertial sensors also. This system uses rotatable laser range finder for obstacle sensing. This is also designed in such a way that It can be monitored from anywhere through internet. I. INTRODUCTION An autonomous vehicle, also known as a driverless vehicle, self-driving vehicle is an vehicle capable of fulfilling the human transportation capabilities of a traditional vehicle. As an autonomous vehicle, it is capable of sensing its environment and navigating without human input. Autonomous vehicles sense their surroundings with such techniques as radar, lidar, GPS, and computer vision. Advanced control systems interpret sensory information to identify appropriate navigation paths, as well as obstacles and relevant signage. Some autonomous vehicles update their maps based on sensory input, allowing the vehicles to keep track of their position even when conditions change or when they enter uncharted environments. For any mobile robot, the ability to", "title": "" }, { "docid": "acfe7531f67a40e27390575a69dcd165", "text": "This paper reviews the relationship between attention deficit hyperactivity disorder (ADHD) and academic performance. First, the relationship at different developmental stages is examined, focusing on pre-schoolers, children, adolescents and adults. 
Second, the review examines the factors underpinning the relationship between ADHD and academic underperformance: the literature suggests that it is the symptoms of ADHD and underlying cognitive deficits not co-morbid conduct problems that are at the root of academic impairment. The review concludes with an overview of the literature examining strategies that are directed towards remediating the academic impairment of individuals with ADHD.", "title": "" }, { "docid": "9d1046d960724c193a29b7f387622c49", "text": "Optimal cache content placement in a wireless small cell base station (sBS) with limited backhaul capacity is studied. The sBS has a large cache memory and provides content-level selective offloading by delivering high data rate contents to users in its coverage area. The goal of the sBS content controller (CC) is to store the most popular contents in the sBS cache memory such that the maximum amount of data can be fetched directly from the sBS, not relying on the limited backhaul resources during peak traffic periods. If the popularity profile is known in advance, the problem reduces to a knapsack problem. However, it is assumed in this work that, the popularity profile of the files is not known by the CC, and it can only observe the instantaneous demand for the cached content. Hence, the cache content placement is optimised based on the demand history. By refreshing the cache content at regular time intervals, the CC tries to learn the popularity profile, while exploiting the limited cache capacity in the best way possible. Three algorithms are studied for this cache content placement problem, leading to different exploitation-exploration trade-offs. We provide extensive numerical simulations in order to study the time-evolution of these algorithms, and the impact of the system parameters, such as the number of files, the number of users, the cache size, and the skewness of the popularity profile, on the performance. It is shown that the proposed algorithms quickly learn the popularity profile for a wide range of system parameters.", "title": "" }, { "docid": "d7f5449cf398b56a29c64adada7cf7d2", "text": "Review: The Prefrontal Cortex—An Update: Time Is of the Essence. The physiology of the cerebral cortex is organized in hierarchical manner. At the bottom of the cortical organization, sensory and motor areas support specific sensory and motor functions. Progressively higher areas—of later phylogenetic and ontogenetic development—support functions that are progressively more integrative. The prefrontal cortex (PFC) constitutes the highest level of the cortical hierarchy dedicated to the representation and execution of actions. The PFC can be subdivided in three major regions: orbital, medial, and lateral. The orbital and medial regions are involved in emotional behavior. The lateral region, which is maximally developed in the human, provides the cognitive support to the temporal organization of behavior, speech, and reasoning. This function of temporal organization is served by several subordinate functions that are closely intertwined (e.g., temporal integration, working memory, set). Whatever areal specialization can be discerned in the PFC is not so much attributable to the topographical distribution of those functions as to the nature of the cognitive information with which they operate. Much of the prevalent confusion in the PFC literature derives from … Many of the principles discussed below apply also to the PFC of nonprimate species. Anatomy and Connections: The PFC is the association cortex of the frontal lobe. In primates, it comprises areas 8–13, 24, 32, 46, and 47 according to the cytoarchitectonic map of Brodmann (1909), recently updated for the monkey by Petrides and Pandya (Figure 1). Phylogenetically, it is one of the latest cortices to develop, having attained maximum relative growth in the human brain (Brodmann, 1912; Jerison, 1994), where it constitutes nearly one-third of the neocortex. Furthermore, the PFC undergoes late development in the course of ontogeny. In the human, by myelogenic and synaptogenic criteria, the PFC is clearly late-maturing (Huttenlocher and Dabholkar, 1997). In the monkey's PFC, myelogenesis also seems to develop late (Gibson, 1991). However, the assumption that the synaptic structure of the PFC lags behind that of other neocortical areas has been challenged with morphometric data (Bourgeois et al., 1994). In any case, imaging studies indicate that, in the human, prefrontal areas do not attain full maturity until adolescence (Chugani et al., 1987; Paus et al., 1999; Sowell et al., 1999). This conclusion is consistent with the behavioral evidence that these areas are critical for those higher cognitive functions that develop late, such as propositional speech and reasoning.", "title": "" }, { "docid": "45466c74a9f1c9a52e37cc0603f60923", "text": "In the square jigsaw puzzle problem one is required to reconstruct the complete image from a set of non-overlapping, unordered, square puzzle parts. Here we propose a fully automatic solver for this problem, where unlike some previous work, it assumes no clues regarding parts' location and requires no prior knowledge about the original image or its simplified (e.g., lower resolution) versions. To do so, we introduce a greedy solver which combines both informed piece placement and rearrangement of puzzle segments to find the final solution. Among our other contributions are new compatibility metrics which better predict the chances of two given parts to be neighbors, and a novel estimation measure which evaluates the quality of puzzle solutions without the need for ground-truth information. Incorporating these contributions, our approach facilitates solutions that surpass state-of-the-art solvers on puzzles of size larger than ever attempted before.", "title": "" }, { "docid": "2363f0f9b50bc2ebbccb0746bb6b1080", "text": "This communication presents a wideband, dual-polarized Vivaldi antenna or tapered slot antenna with over a decade (10.7:1) of bandwidth. The dual-polarized antenna structure is achieved by inserting two orthogonal Vivaldi antennas in a cross-shaped form without a galvanic contact. The measured -10 dB impedance bandwidth (S11) is approximately from 0.7 up to 7.30 GHz, corresponding to a 166% relative frequency bandwidth. The isolation (S21) between the antenna ports is better than 30 dB, and the measured maximum gain is 3.8-11.2 dB at the aforementioned frequency bandwidth. Orthogonal polarizations have the same maximum gain within the 0.7-3.6 GHz band, and a slight variation up from 3.6 GHz. The cross-polarization discrimination (XPD) is better than 19 dB across the measured 0.7-6.0 GHz frequency bandwidth, and better than 25 dB up to 4.5 GHz.
The measured results are compared with the numerical ones in terms of S-parameters, maximum gain, and XPD.", "title": "" }, { "docid": "b81c0d819f2afb0a0ff79b7c6aeb8ff7", "text": "This paper proposes a framework to identify and evaluate companies from the technological perspective to support merger and acquisition (M&A) target selection decision-making. This employed a text mining-based patent map approach to identify companies which can fulfill a specific strategic purpose of M&A for enhancing technological capabilities. The patent map is the visualized technological landscape of a technology industry by using technological proximities among patents, so companies which closely related to the strategic purpose can be identified. To evaluate the technological aspects of the identified companies, we provide the patent indexes that evaluate both current and future technological capabilities and potential technology synergies between acquiring and acquired companies. Furthermore, because the proposed method evaluates potential targets from the overall corporate perspective and the specific strategic perspectives simultaneously, more robust and meaningful result can be obtained than when only one perspective is considered. Thus, the proposed framework can suggest the appropriate target companies that fulfill the strategic purpose of M&A for enhancing technological capabilities. For the verification of the framework, we provide an empirical study using patent data related to flexible display technology.", "title": "" }, { "docid": "11cca210d422b8410cce61a80203b17e", "text": "Internet of Things (IoT) has not yet reached a distinctive definition. A generic understanding of IoT is that it offers numerous services in many domains, utilizing conventional internet infrastructure by enabling different communication patterns such as human-to-object, object-to-objects, and object-to-object. Integrating IoT objects into the standard Internet, however, has unlocked several security challenges, as most internet technologies and connectivity protocols have been specifically designed for unconstrained objects. Moreover, IoT objects have their own limitations in terms of computation power, memory and bandwidth. IoT vision, therefore, has suffered from unprecedented attacks targeting not only individuals but also enterprises, some examples of these attacks are loss of privacy, organized crime, mental suffering, and the probability of jeopardizing human lives. Hence, providing a comprehensive classification of IoT attacks and their available countermeasures is an indispensable requirement. In this paper, we propose a novel four-layered IoT reference model based on building blocks strategy, in which we develop a comprehensive IoT attack model composed of four key phases. First, we have proposed IoT asset-based attack surface, which consists of four main components: 1) physical objects, 2) protocols covering whole IoT stack, 3) data, and 4) software. Second, we describe a set of IoT security goals. Third, we identify IoT attack taxonomy for each asset. Finally, we show the relationship between each attack and its violated security goals, and identify a set of countermeasures to protect each asset as well. To the best of our knowledge, this is the first paper that attempts to provide a comprehensive IoT attacks model based on a building-blocked reference model. 
Keywords—Internet of Things (IoT); building block; security and privacy; reference model", "title": "" }, { "docid": "945dea6576c6131fc33cd14e5a2a0be8", "text": "■ This article recounts the development of radar signal processing at Lincoln Laboratory. The Laboratory’s significant efforts in this field were initially driven by the need to provide detected and processed signals for air and ballistic missile defense systems. The first processing work was on the Semi-Automatic Ground Environment (SAGE) air-defense system, which led to algorithms and techniques for detection of aircraft in the presence of clutter. This work was quickly followed by processing efforts in ballistic missile defense, first in surface-acoustic-wave technology, in concurrence with the initiation of radar measurements at the Kwajalein Missile Range, and then by exploitation of the newly evolving technology of digital signal processing, which led to important contributions for ballistic missile defense and Federal Aviation Administration applications. More recently, the Laboratory has pursued the computationally challenging application of adaptive processing for the suppression of jamming and clutter signals. This article discusses several important programs in these areas.", "title": "" }, { "docid": "156c62aac106229928ba323cfb9bd53f", "text": "The Internet is becoming increasingly influential, but some observers have noted that heavy Internet users seem alienated from normal social contacts and may even cut these off as the Internet becomes the predominate social factor in their lives. Kraut, Patterson, Lundmark, Kiesler, Mukopadhyay, and Scherlis [American Psychologist 53 (1998) 65] carried out a longitudinal study from which they concluded that Internet use leads to loneliness among its users. However, their study did not take into account that the population of Internet users is not uniform and comprises many different personality types. People use the Internet in a variety of ways in keeping with their own personal preference. Therefore, the results of this interaction between personality and Internet use are likely to vary among different individuals and similarly the impact on user well-being will not be uniform. One of the personality characteristics that has been found to influence Internet use is that of extroversion and neuroticism [Hamburger & Ben-Artzi, Computers in Human Behavior 16 (2000) 441]. For this study, 89 participants completed questionnaires pertaining to their own Internet use and feelings of loneliness and extroversion and neuroticism. The results were compared to two models (a) the Kraut et al. (1998) model which argues that Internet use leads to loneliness (b) an alternative model which argues that it is those people who are already lonely who spend time on the Internet. A satisfactory goodness of fit was found for the alternative model. Building on these results, several different directions are suggested for continuing research in this field. # 2002 Published by Elsevier Science Ltd.", "title": "" }, { "docid": "df9722b1cbdf217d26c20bd69dc775eb", "text": "Personal servers are an attractive concept: people carry around a device that takes care of computing, storage and communication on their behalf in a pervasive computing environment. So far personal servers have mainly been considered for accessing personal information. In this paper, we consider personal servers in the context of a digital key system. 
Digital keys are an interesting alternative to physical keys for mail or good delivery companies whose employees access tens of private buildings every day. We present a digital key system tailored for the current incarnation of personal servers, i.e., a Bluetooth-enabled mobile phone. We describe how to use Bluetooth for this application, we present a simple authentication protocol and we provide a detailed analysis of response time and energy consumption on the mobile phone.", "title": "" } ]
scidocsrr
918286cf87ae2a49d189f155a1afa88a
Grammatical form and semantic context in verb learning.
[ { "docid": "4b5ac4095cb2695a1e5282e1afca80a4", "text": "Threeexperimentsdocument that14-month-old infants’construalofobjects (e.g.,purple animals) is influenced by naming, that they can distinguish between the grammatical form noun and adjective, and that they treat this distinction as relevant to meaning. In each experiment, infants extended novel nouns (e.g., “This one is a blicket”) specifically to object categories (e.g., animal), and not to object properties (e.g., purple things). This robust noun–category link is related to grammatical form and not to surface differences in the presentation of novel words (Experiment 3). Infants’extensions of novel adjectives (e.g., “This one is blickish”) were more fragile: They extended adjectives specifically to object properties when the property was color (Experiment 1), but revealed a less precise mapping when the property was texture (Experiment 2). These results reveal that by 14 months, infants distinguish between grammatical forms and utilize these distinctions in determining the meaning of novel words.", "title": "" } ]
[ { "docid": "24cecfd72f1339f4ab8873b167324cca", "text": "Web surfing is an example (and popular) Internet application where users desire services provided by servers that exist somewhere in the Internet. To provide the service, data must be routed between the user's system and the server. Local network routing (relative to the user) can not provide a complete route for the data. In the core Internet, a portion of the network controlled by a single administrative authority, called an autonomous system (AS), provides local network support and also exchanges routing information with other ASes using the border gateway protocol (BGP). Through the BGP route exchange, a complete route for the data is created. Security at this level in the Internet is challenging due to the lack of a single administration point and because there are numerous ASes which interact with one another using complex peering policies. This work reviews recent techniques to secure BGP. These security techniques are categorized as follows: 1) cryptographic/attestation, 2) database, 3) overlay/group protocols, 4) penalty, and 5) data-plane testing. The techniques are reviewed at a high level in a tutorial format, and shortcomings of the techniques are summarized as well. The depth of coverage for particular published works is intentionally kept minimal, so that the reader can quickly grasp the techniques. This survey provides a basis for evaluation of the techniques to understand coverage of published works as well as to determine the best avenues for future research.", "title": "" }, { "docid": "32cf33cbd55f05661703d028f9ffe40f", "text": "Due to the ease with which digital information can be altered, many digital forensic techniques have recently been developed to authenticate multimedia content. One important digital forensic result is that adding or deleting frames from an MPEG video sequence introduces a temporally distributed fingerprint into the video can be used to identify frame deletion or addition. By contrast, very little research exists into anti-forensic operations designed to make digital forgeries undetectable by forensic techniques. In this paper, we propose an anti-forensic technique capable of removing the temporal fingerprint from MPEG videos that have undergone frame addition or deletion. We demonstrate that our proposed anti-forensic technique can effectively remove this fingerprint through a series of experiments.", "title": "" }, { "docid": "f2707d7fcd5d8d9200d4cc8de8ff1042", "text": "This paper describes recent work on the “Crosswatch” project, which is a computer vision-based smartphone system developed for providing guidance to blind and visually impaired travelers at traffic intersections. A key function of Crosswatch is self-localization - the estimation of the user's location relative to the crosswalks in the current traffic intersection. Such information may be vital to users with low or no vision to ensure that they know which crosswalk they are about to enter, and are properly aligned and positioned relative to the crosswalk. However, while computer vision-based methods have been used for finding crosswalks and helping blind travelers align themselves to them, these methods assume that the entire crosswalk pattern can be imaged in a single frame of video, which poses a significant challenge for a user who lacks enough vision to know where to point the camera so as to properly frame the crosswalk. 
In this paper we describe work in progress that tackles the problem of crosswalk detection and self-localization, building on recent work describing techniques enabling blind and visually impaired users to acquire 360° image panoramas while turning in place on a sidewalk. The image panorama is converted to an aerial (overhead) view of the nearby intersection, centered on the location that the user is standing at, so as to facilitate matching with a template of the intersection obtained from Google Maps satellite imagery. The matching process allows crosswalk features to be detected and permits the estimation of the user's precise location relative to the crosswalk of interest. We demonstrate our approach on intersection imagery acquired by blind users, thereby establishing the feasibility of the approach.", "title": "" }, { "docid": "bf17acf28f242a0fd76117c9ef245f4d", "text": "We present an algorithm to compute the silhouette set of a point cloud. Previous methods extract point set silhouettes by thresholding point normals, which can lead to simultaneous over- and under-detection of silhouettes. We argue that additional information such as surface curvature is necessary to resolve these issues. To this end, we develop a local reconstruction scheme using Gabriel and intrinsic Delaunay criteria and define point set silhouettes based on the notion of a silhouette generating set. The mesh umbrellas, or local reconstructions of one-ring triangles surrounding each point sample, generated by our method enable accurate silhouette identification near sharp features and close-by surface sheets, and provide the information necessary to detect other characteristic curves such as creases and boundaries. We show that these curves collectively provide a sparse and intuitive visualization of point cloud data.", "title": "" }, { "docid": "43d46f2db0f1f174700856d38ff9ee5f", "text": "The use of higher-order local autocorrelations as features for pattern recognition has been acknowledged since many years, but their applicability was restricted to relatively low orders (2 or 3) and small local neighborhoods, due to combinatorial increase in computational costs. In this paper a new method for using these features is presented, which allows the use of autocorrelations of any order and of larger neighborhoods. The method is closely related to the classifier used, a Support Vector Machine (SVM), and exploits the special form of the inner products of autocorrelations and the properties of some kernel functions used by SVMs. Using SVM, linear and non-linear classification functions can be learned, extending the previous works on higher-order autocorrelations which were based on linear classifiers.", "title": "" }, { "docid": "4762cbac8a7e941f26bce8217cf29060", "text": "The 2-D maximum entropy method not only considers the distribution of the gray information, but also takes advantage of the spatial neighbor information with using the 2-D histogram of the image. As a global threshold method, it often gets ideal segmentation results even when the image's signal noise ratio (SNR) is low. However, its time-consuming computation is often an obstacle in real time application systems. In this paper, the image thresholding approach based on the index of entropy maximization of the 2-D grayscale histogram is proposed to deal with infrared image.
The threshold vector (t, s), where t is a threshold for pixel intensity and s is another threshold for the local average intensity of pixels, is obtained through a new optimization algorithm, namely, the particle swarm optimization (PSO) algorithm. PSO algorithm is realized successfully in the process of solving the 2-D maximum entropy problem. The experiments of segmenting the infrared images are illustrated to show that the proposed method can get ideal segmentation result with less computation cost. 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "6b1bee85de8d95896636bd4e13a69156", "text": "Intrinsically motivated spontaneous exploration is a key enabler of autonomous lifelong learning in human children. It allows them to discover and acquire large repertoires of skills through self-generation, self-selection, self-ordering and self-experimentation of learning goals. We present the unsupervised multi-goal reinforcement learning formal framework as well as an algorithmic approach called intrinsically motivated goal exploration processes (IMGEP) to enable similar properties of autonomous learning in machines. The IMGEP algorithmic architecture relies on several principles: 1) self-generation of goals as parameterized reinforcement learning problems; 2) selection of goals based on intrinsic rewards; 3) exploration with parameterized time-bounded policies and fast incremental goal-parameterized policy search; 4) systematic reuse of information acquired when targeting a goal for improving other goals. We present a particularly efficient form of IMGEP that uses a modular representation of goal spaces as well as intrinsic rewards based on learning progress. We show how IMGEPs automatically generate a learning curriculum within an experimental setup where a real humanoid robot can explore multiple spaces of goals with several hundred continuous dimensions. While no particular target goal is provided to the system beforehand, this curriculum allows the discovery of skills of increasing complexity, that act as stepping stone for learning more complex skills (like nested tool use). We show that learning several spaces of diverse problems can be more efficient for learning complex skills than only trying to directly learn these complex skills. We illustrate the computational efficiency of IMGEPs as these robotic experiments use a simple memory-based low-level policy representations and search algorithm, enabling the whole system to learn online and incrementally on a Raspberry Pi 3.", "title": "" }, { "docid": "8d29cf5303d9c94741a8d41ca6c71da9", "text": "Sentiment analysis or opinion mining aims to use automated tools to detect subjective information such as opinions, attitudes, and feelings expressed in text. This paper proposes a novel probabilistic modeling framework based on Latent Dirichlet Allocation (LDA), called joint sentiment/topic model (JST), which detects sentiment and topic simultaneously from text. Unlike other machine learning approaches to sentiment classification which often require labeled corpora for classifier training, the proposed JST model is fully unsupervised. The model has been evaluated on the movie review dataset to classify the review sentiment polarity and minimum prior information have also been explored to further improve the sentiment classification accuracy. 
Preliminary experiments have shown promising results achieved by JST.", "title": "" }, { "docid": "d1515b3c475989e3c3584e02c0d5c329", "text": "Sexting has received increasing scholarly and media attention. Especially, minors’ engagement in this behaviour is a source of concern. As adolescents are highly sensitive about their image among peers and prone to peer influence, the present study implemented the prototype willingness model in order to assess how perceptions of peers engaging in sexting possibly influence adolescents’ willingness to send sexting messages. A survey was conducted among 217 15to 19-year-olds. A total of 18% of respondents had engaged in sexting in the 2 months preceding the study. Analyses further revealed that the subjective norm was the strongest predictor of sexting intention, followed by behavioural willingness and attitude towards sexting. Additionally, the more favourable young people evaluated the prototype of a person engaging in sexting and the higher they assessed their similarity with this prototype, the more they were willing to send sexting messages. Differences were also found based on gender, relationship status and need for popularity. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "7696178f143665fa726706e39b133cb8", "text": "This article describes the essential components of oral health information systems for the analysis of trends in oral disease and the evaluation of oral health programmes at the country, regional and global levels. Standard methodology for the collection of epidemiological data on oral health has been designed by WHO and used by countries worldwide for the surveillance of oral disease and health. Global, regional and national oral health databanks have highlighted the changing patterns of oral disease which primarily reflect changing risk profiles and the implementation of oral health programmes oriented towards disease prevention and health promotion. The WHO Oral Health Country/Area Profile Programme (CAPP) provides data on oral health from countries, as well as programme experiences and ideas targeted to oral health professionals, policy-makers, health planners, researchers and the general public. WHO has developed global and regional oral health databanks for surveillance, and international projects have designed oral health indicators for use in oral health information systems for assessing the quality of oral health care and surveillance systems. Modern oral health information systems are being developed within the framework of the WHO STEPwise approach to surveillance of noncommunicable, chronic disease, and data stored in the WHO Global InfoBase may allow advanced health systems research. Sound knowledge about progress made in prevention of oral and chronic disease and in health promotion may assist countries to implement effective public health programmes to the benefit of the poor and disadvantaged population groups worldwide.", "title": "" }, { "docid": "9a438856b2cce32bf4e9bcbdc93795a2", "text": "By balancing the spacing effect against the effects of recency and frequency, this paper explains how practice may be scheduled to maximize learning and retention. In an experiment, an optimized condition using an algorithm determined with this method was compared with other conditions. The optimized condition showed significant benefits with large effect sizes for both improved recall and recall latency. 
The optimization method achieved these benefits by using a modeling approach to develop a quantitative algorithm, which dynamically maximizes learning by determining for each item when the balance between increasing temporal spacing (that causes better long-term recall) and decreasing temporal spacing (that reduces the failure related time cost of each practice) means that the item is at the spacing interval where long-term gain per unit of practice time is maximal. As practice repetitions accumulate for each item, items become stable in memory and this optimal interval increases.", "title": "" }, { "docid": "4cef67c3b3633f5e7be3ec46f44adefb", "text": "User reviews of mobile apps often contain complaints or suggestions which are valuable for app developers to improve user experience and satisfaction. However, due to the large volume and noisy-nature of those reviews, manually analyzing them for useful opinions is inherently challenging. To address this problem, we propose MARK, a keyword-based framework for semi-automated review analysis. MARK allows an analyst describing his interests in one or some mobile apps by a set of keywords. It then finds and lists the reviews most relevant to those keywords for further analysis. It can also draw the trends over time of those keywords and detect their sudden changes, which might indicate the occurrences of serious issues. To help analysts describe their interests more effectively, MARK can automatically extract keywords from raw reviews and rank them by their associations with negative reviews. In addition, based on a vector-based semantic representation of keywords, MARK can divide a large set of keywords into more cohesive subsets, or suggest keywords similar to the selected ones.", "title": "" }, { "docid": "597be5d0d69b045dc6a17a9bcb36b85b", "text": "A conical log spiral antenna is presented with a new feeding scheme. This antenna is proposed for deployment on a CubeSat platform where the new feeding technique allows for easier antenna deployment. The antenna is composed of two arms that are wrapped around each other in a log-periodic manner. It is designed on top of a ground plane allowing its bottom feeding property. The feeding network is composed of quarter-wavelengths transmission lines connected to a planar balun. The feeding network along with the balun provides the antenna with the appropriate impedance matching and phase shift between the two arms. The antenna is fabricated and tested. The measurement and simulation results show good agreement.", "title": "" }, { "docid": "3945eafeeb9175e9cc19a8d4484a2205", "text": "We consider the problem of distributed estimation, where a set of nodes is required to collectively estimate some parameter of interest from noisy measurements. The problem is useful in several contexts including wireless and sensor networks, where scalability, robustness, and low power consumption are desirable features. Diffusion cooperation schemes have been shown to provide good performance, robustness to node and link failure, and are amenable to distributed implementations. In this work we focus on diffusion-based adaptive solutions of the LMS type. We motivate and propose new versions of the diffusion LMS algorithm that outperform previous solutions. We provide performance and convergence analysis of the proposed algorithms, together with simulation results comparing with existing techniques. 
We also discuss optimization schemes to design the diffusion LMS weights.", "title": "" }, { "docid": "91a919fa526704ff9c4562ae39aceeaa", "text": "We consider the problem of finding the shortest distance between all pairs of vertices in a complete digraph on n vertices, whose arc-lengths are non-negative random variables. We describe an algorithm which solves this problem in O(n(m + n log n)) expected time, where m is the expected number of arcs with finite length. If m is small enough, this represents a small improvement over the bound in Bloniarz [3]. We consider also the case when the arc-lengths are random variables which are independently distributed with distribution function F, where F(0) = 0 and F is differentiable at 0; for this case, we describe an algorithm which runs in O(n² log n) expected time. In our treatment of the shortest-path problem we consider the following problem in combinatorial probability theory. A town contains n people, one of whom knows a rumour. At the first stage he tells someone chosen randomly from the town; at each stage, each person who knows the rumour tells someone else, chosen randomly from the town and independently of all other choices. Let Sn be the number of stages before the whole town knows the rumour. We show that Sn/log2n → 1 + loge 2 in probability as n → ∞, and estimate the probabilities of large deviations in Sn.", "title": "" }, { "docid": "812da4f68dee219f12645beba701d686", "text": "Fog Computing is a new architecture to migrate some data center's tasks to the edge of the server. The fog computing, built on the edge servers, is viewed as a novel architecture that provides the limited computing, storing, and networking services in the distributed way between end devices and the traditional cloud computing Data Centers. It provides the logical intelligence to the end devices and filters the data for Data Centers. The primary objective of fog computing is to ensure the low and predictable latency in the latency-sensitive of Internet of Things (IoT) applications such as the healthcare services. This paper discusses the characteristics of fog computing and services that fog computing can provide in the healthcare system and its prospect.", "title": "" }, { "docid": "29b257283d31750828e4ccd0fbadd1dc", "text": "A multiplicity of autonomous terminals simultaneously transmits data streams to a compact array of antennas. The array uses imperfect channel-state information derived from transmitted pilots to extract the individual data streams. The power radiated by the terminals can be made inversely proportional to the square-root of the number of base station antennas with no reduction in performance. In contrast if perfect channel-state information were available the power could be made inversely proportional to the number of antennas. A maximum-ratio combining receiver normally performs worse than a zero-forcing receiver. However as power levels are reduced, the cross-talk introduced by the inferior maximum-ratio receiver eventually falls below the noise level and this simple receiver becomes a viable option.", "title": "" }, { "docid": "d6cca63107e04f225b66e02289c601a2", "text": "To avoid a sarcastic message being understood in its unintended literal meaning, in microtexts such as messages on Twitter.com sarcasm is often explicitly marked with a hashtag such as ‘#sarcasm’. We collected a training corpus of about 406 thousand Dutch tweets with hashtag synonyms denoting sarcasm.
Assuming that the human labeling is correct (annotation of a sample indicates that about 90% of these tweets are indeed sarcastic), we train a machine learning classifier on the harvested examples, and apply it to a sample of a day’s stream of 2.25 million Dutch tweets. Of the 353 explicitly marked tweets on this day, we detect 309 (87%) with the hashtag removed. We annotate the top of the ranked list of tweets most likely to be sarcastic that do not have the explicit hashtag. 35% of the top250 ranked tweets are indeed sarcastic. Analysis indicates that the use of hashtags reduces the further use of linguistic markers for signaling sarcasm, such as exclamations and intensifiers. We hypothesize that explicit markers such as hashtags are the digital extralinguistic equivalent of non-verbal expressions that people employ in live interaction when conveying sarcasm. Checking the consistency of our finding in a language from another language family, we observe that in French the hashtag ‘#sarcasme’ has a similar polarity switching function, be it to a lesser extent. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d4d9948e170edd124c57742d91a5d021", "text": "The attribute set in an information system evolves in time when new information arrives. Both lower and upper approximations of a concept will change dynamically when attributes vary. Inspired by the former incremental algorithm in Pawlak rough sets, this paper focuses on new strategies of dynamically updating approximations in probabilistic rough sets and investigates four propositions of updating approximations under probabilistic rough sets. Two incremental algorithms based on adding attributes and deleting attributes under probabilistic rough sets are proposed, respectively. The experiments on five data sets from UCI and a genome data with thousand attributes validate the feasibility of the proposed incremental approaches. 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "239e37736832f6f0de050ed1749ba648", "text": "An approach for capturing and modeling individual entertainment (“fun”) preferences is applied to users of the innovative Playware playground, an interactive physical playground inspired by computer games, in this study. The goal is to construct, using representative statistics computed from children’s physiological signals, an estimator of the degree to which games provided by the playground engage the players. For this purpose children’s heart rate (HR) signals, and their expressed preferences of how much “fun” particular game variants are, are obtained from experiments using games implemented on the Playware playground. A comprehensive statistical analysis shows that children’s reported entertainment preferences correlate well with specific features of the HR signal. Neuro-evolution techniques combined with feature set selection methods permit the construction of user models that predict reported entertainment preferences given HR features. These models are expressed as artificial neural networks and are demonstrated and evaluated on two Playware games and two control tasks requiring physical activity. The best network is able to correctly match expressed preferences in 64% of cases on previously unseen data (p−value 6 · 10−5). The generality of the methodology, its limitations, its usability as a real-time feedback mechanism for entertainment augmentation and as a validation tool are discussed.", "title": "" } ]
scidocsrr
e8d93048224bdc92b646576956c2634b
Findel: Secure Derivative Contracts for Ethereum
[ { "docid": "57ce739b1845a4b7e0ff5e2ebdd3b16d", "text": "Public key infrastructures (PKIs) enable users to look up and verify one another’s public keys based on identities. Current approaches to PKIs are vulnerable because they do not offer sufficiently strong guarantees of identity retention; that is, they do not effectively prevent one user from registering a public key under another’s already-registered identity. In this paper, we leverage the consistency guarantees provided by cryptocurrencies such as Bitcoin and Namecoin to build a PKI that ensures identity retention. Our system, called Certcoin, has no central authority and thus requires the use of secure distributed dictionary data structures to provide efficient support for key lookup.", "title": "" }, { "docid": "68c1a1fdd476d04b936eafa1f0bc6d22", "text": "Smart contracts are computer programs that can be correctly executed by a network of mutually distrusting nodes, without the need of an external trusted authority. Since smart contracts handle and transfer assets of considerable value, besides their correct execution it is also crucial that their implementation is secure against attacks which aim at stealing or tampering the assets. We study this problem in Ethereum, the most well-known and used framework for smart contracts so far. We analyse the security vulnerabilities of Ethereum smart contracts, providing a taxonomy of common programming pitfalls which may lead to vulnerabilities. We show a series of attacks which exploit these vulnerabilities, allowing an adversary to steal money or cause other damage.", "title": "" }, { "docid": "1315247aa0384097f5f9e486bce09bd4", "text": "We give an overview of the scripting languages used in existing cryptocurrencies, and in particular we review in some detail the scripting languages of Bitcoin, Nxt and Ethereum, in the context of a high-level overview of Distributed Ledger Technology and cryptocurrencies. We survey different approaches, and give an overview of critiques of existing languages. We also cover technologies that might be used to underpin extensions and innovations in scripting and contracts, including technologies for verification, such as zero knowledge proofs, proof-carrying code and static analysis, as well as approaches to making systems more efficient, e.g. Merkelized Abstract Syntax Trees.", "title": "" } ]
[ { "docid": "e769f52b6e10ea1cf218deb8c95f4803", "text": "To facilitate the task of reading and searching information, it became necessary to find a way to reduce the size of documents without affecting the content. The solution is in Automatic text summarization system, it allows, from an input text to produce another smaller and more condensed without losing relevant data and meaning conveyed by the original text. The research works carried out on this area have experienced lately strong progress especially in English language. However, researches in Arabic text summarization are very few and are still in their beginning. In this paper we expose a literature review of recent techniques and works on automatic text summarization field research, and then we focus our discussion on some works concerning automatic text summarization in some languages. We will discuss also some of the main problems that affect the quality of automatic text summarization systems. © 2015 AESS Publications. All Rights Reserved.", "title": "" }, { "docid": "1580e188796e4e7b6c5930e346629849", "text": "This paper describes the development process of FarsNet; a lexical ontology for the Persian language. FarsNet is designed to contain a Persian WordNet with about 10000 synsets in its first phase and grow to cover verbs' argument structures and their selectional restrictions in its second phase. In this paper we discuss the semi-automatic approach to create the first phase: the Persian WordNet.", "title": "" }, { "docid": "7aeb10faf8590ed9f4054bafcd4dee0c", "text": "Concept, design, and measurement results of a frequency-modulated continuous-wave radar sensor in low-temperature co-fired ceramics (LTCC) technology is presented in this paper. The sensor operates in the frequency band between 77–81 GHz. As a key component of the system, wideband microstrip grid array antennas with a broadside beam are presented and discussed. The combination with a highly integrated feeding network and a four-channel transceiver chip based on SiGe technology results in a very compact LTCC RF frontend (23 mm <formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\times$</tex></formula> 23 mm). To verify the feasibility of the concept, first radar measurement results are presented.", "title": "" }, { "docid": "c3aaa53892e636f34d6923831a3b66bc", "text": "OBJECTIVES\nTo evaluate whether 7-mm-long implants could be an alternative to longer implants placed in vertically augmented posterior mandibles.\n\n\nMATERIALS AND METHODS\nSixty patients with posterior mandibular edentulism with 7-8 mm bone height above the mandibular canal were randomized to either vertical augmentation with anorganic bovine bone blocks and delayed 5-month placement of ≥10 mm implants or to receive 7-mm-long implants. Four months after implant placement, provisional prostheses were delivered, replaced after 4 months, by definitive prostheses. The outcome measures were prosthesis and implant failures, any complications and peri-implant marginal bone levels. All patients were followed to 1 year after loading.\n\n\nRESULTS\nOne patient dropped out from the short implant group. In two augmented mandibles, there was not sufficient bone to place 10-mm-long implants possibly because the blocks had broken apart during insertion. One prosthesis could not be placed when planned in the 7 mm group vs. three prostheses in the augmented group, because of early failure of one implant in each patient. 
Four complications (wound dehiscence) occurred during graft healing in the augmented group vs. none in the 7 mm group. No complications occurred after implant placement. These differences were not statistically significant. One year after loading, patients of both groups lost an average of 1 mm of peri-implant bone. There were no statistically significant differences in bone loss between groups.\n\n\nCONCLUSIONS\nWhen residual bone height over the mandibular canal is between 7 and 8 mm, 7 mm short implants might be a preferable choice to vertical augmentation, reducing the chair time, expenses and morbidity. These 1-year preliminary results need to be confirmed by follow-up of at least 5 years.", "title": "" }, { "docid": "af4cbaf62356068c7702ac11d0b4e2b6", "text": "The rosy apple aphid Dysaphis plantaginea (Passerini) is a key pest in western European apple orchards; the economic damage threshold is so low that outbreaks cannot be forecasted. A mass rearing of the species on plantain (Plantago lanceolata L.) was initiated, with the aim to infest apple trees with either the autumn migrants, gynoparae and males, or the egg-laying females (oviparae). Here, data are presented about the propagation of the species on plantain, on the production of autumn migrants under laboratory conditions, and on the duration of juvenile development and reproductive capacities of both gynoparae and oviparae. Under long-day conditions (18 h light/day) on plantain, the thermal constant for the duration of juvenile development was 166 dd (day-degrees) above the lower development threshold of 5.1 °C, and daily larviposition amounted to 1.87 times the temperature (°C) minus 0.8, above a lower threshold of 4.3 °C. Between 32 and 36 larvae were produced by the young female before the first larvae become adult and their reproduction started to overshadow their mother’s. A plant freshly infested with 12 reproducing females and transferred to short-day conditions (12 h light/day) yielded up to 5,000 autumn migrants, with males in the majority. The first gynoparae appeared after about 25 days at both 16 and 20 °C, and males appeared after 40 and 33 days, respectively. Young adult gynoparae produced most of their about ten offspring right after landing on apples, unless temperature was well below 15 °C. The duration of juvenile development of these oviparae appeared to be rather variable and their egg-laying so protracted that each of these females needs to survive several weeks to produce a handful of winter eggs. With reproductive capacities of up to 14 progeny/female for gynoparae and 7.4 for oviparae, release of one gynopara in the field could theoretically lead to the deposition of 100 winter eggs.", "title": "" }, { "docid": "c2d3db65ce89b7df228880b72f620a4c", "text": "This paper presents a supply system design methodology for high-speed interface systems used in the design of a 1600 Mbps DDR3 interface in wirebond package. The high data rate and challenging system environment requires a system-level approach of supply noise mitigation that demonstrates the full spectrum of Power Integrity considerations typical in the design of high-speed interfaces. We will first discuss supply noise considerations during the architectural design phase used to define a supply mitigation strategy for the interface design. Next, we will discuss the physical implementation of the supply network component on the chip, the package, and the PCB using a co-design approach. 
Finally, we will present measurement data demonstrating the achieved supply quality and correlations to simulation results based on supply systems models developed during the design phase of the interface.", "title": "" }, { "docid": "3b1b829e6d017d574562e901f4963bc4", "text": "Many problems in AI are simplified by clever representations of sensory or symbolic input. How to discover such representations automatically, from large amounts of unlabeled data, remains a fundamental challenge. The goal of statistical methods for dimensionality reduction is to detect and discover low dimensional structure in high dimensional data. In this paper, we review a recently proposed algorithm— maximum variance unfolding—for learning faithful low dimensional representations of high dimensional data. The algorithm relies on modern tools in convex optimization that are proving increasingly useful in many areas of machine learning.", "title": "" }, { "docid": "6a1da115f887498370b400efa6e57ed0", "text": "Local search heuristics for non-convex optimizations are popular in applied machine learning. However, in general it is hard to guarantee that such algorithms even converge to a local minimum, due to the existence of complicated saddle point structures in high dimensions. Many functions have degenerate saddle points such that the first and second order derivatives cannot distinguish them with local optima. In this paper we use higher order derivatives to escape these saddle points: we design the first efficient algorithm guaranteed to converge to a third order local optimum (while existing techniques are at most second order). We also show that it is NP-hard to extend this further to finding fourth order local optima.", "title": "" }, { "docid": "dbdbff5b0d3738306099394d952bed83", "text": "High-flow nasal cannula (HFNC) therapy is increasingly proposed as first-line respiratory support for infants with acute viral bronchiolitis (AVB). Most teams use 2 L/kg/min, but no study compared different flow rates in this setting. We hypothesized that 3 L/kg/min would be more efficient for the initial management of these patients. A randomized controlled trial was performed in 16 pediatric intensive care units (PICUs) to compare these two flow rates in infants up to 6 months old with moderate to severe AVB and treated with HFNC. The primary endpoint was the percentage of failure within 48 h of randomization, using prespecified criteria of worsening respiratory distress and discomfort. From November 2016 to March 2017, 142 infants were allocated to the 2-L/kg/min (2L) flow rate and 144 to the 3-L/kg/min (3L) flow rate. Failure rate was comparable between groups: 38.7% (2L) vs. 38.9% (3L; p = 0.98). Worsening respiratory distress was the most common cause of failure in both groups: 49% (2L) vs. 39% (3L; p = 0.45). In the 3L group, discomfort was more frequent (43% vs. 16%, p = 0.002) and PICU stays were longer (6.4 vs. 5.3 days, p = 0.048). The intubation rates [2.8% (2L) vs. 6.9% (3L), p = 0.17] and durations of invasive [0.2 (2L) vs. 0.5 (3L) days, p = 0.10] and noninvasive [1.4 (2L) vs. 1.6 (3L) days, p = 0.97] ventilation were comparable. No patient had air leak syndrome or died. In young infants with AVB supported with HFNC, 3 L/kg/min did not reduce the risk of failure compared with 2 L/kg/min. 
This clinical trial was recorded on the National Library of Medicine registry (NCT02824744).", "title": "" }, { "docid": "f5886c4e73fed097e44d6a0e052b143f", "text": "A polynomial filtered Davidson-type algorithm is proposed for symmetric eigenproblems, in which the correction-equation of the Davidson approach is replaced by a polynomial filtering step. The new approach has better global convergence and robustness properties when compared with standard Davidson-type methods. The typical filter used in this paper is based on Chebyshev polynomials. The goal of the polynomial filter is to amplify components of the desired eigenvectors in the subspace, which has the effect of reducing both the number of steps required for convergence and the cost in orthogonalizations and restarts. Numerical results are presented to show the effectiveness of the proposed approach.", "title": "" }, { "docid": "c2b111e9c4e408a6660a4e73a0286858", "text": "Software-defined networking (SDN) has recently gained unprecedented attention from industry and research communities, and it seems unlikely that this will be attenuated in the near future. The ideas brought by SDN, although often described as a “revolutionary paradigm shift” in networking, are not completely new since they have their foundations in programmable networks and control-data plane separation projects. SDN promises simplified network management by enabling network automation, fostering innovation through programmability, and decreasing CAPEX and OPEX by reducing costs and power consumption. In this paper, we aim at analyzing and categorizing a number of relevant research works toward realizing SDN promises. We first provide an overview on SDN roots and then describe the architecture underlying SDN and its main components. Thereafter, we present existing SDN-related taxonomies and propose a taxonomy that classifies the reviewed research works and brings relevant research directions into focus. We dedicate the second part of this paper to studying and comparing the current SDN-related research initiatives and describe the main issues that may arise due to the adoption of SDN. Furthermore, we review several domains where the use of SDN shows promising results. We also summarize some foreseeable future research challenges.", "title": "" }, { "docid": "6e130fa88972e0e33e23beb14c522900", "text": "Myricetin is a flavonoid that is abundant in fruits and vegetables and has protective effects against cancer and diabetes. However, the mechanism of action of myricetin against gastric cancer (GC) is not fully understood. We researched myricetin on the proliferation, apoptosis, and cell cycle in GC HGC-27 and SGC7901 cells, to explore the underlying mechanism of action. Cell Counting Kit (CCK)-8 assay, Western blotting, cell cycle analysis, and apoptosis assay were used to evaluate the effects of myricetin on cell proliferation, apoptosis, and the cell cycle. To analyze the binding properties of ribosomal S6 kinase 2 (RSK2) with myricetin, surface plasmon resonance (SPR) analysis was performed. CCK8 assay showed that myricetin inhibited GC cell proliferation. Flow cytometry analysis showed that myricetin induces apoptosis and cell cycle arrest in GC cells. Western blotting indicated that myricetin influenced apoptosis and cell cycle arrest of GC cells by regulating related proteins. SPR analysis showed strong binding affinity of RSK2 and myricetin. 
Myricetin bound to RSK2, leading to increased expression of Mad1, and contributed to inhibition of HGC-27 and SGC7901 cell proliferation. Our results suggest the therapeutic potential of myricetin in GC.", "title": "" }, { "docid": "931c392507d6d7bccdc65d27ef2bbcab", "text": "Language processing becomes more and more important in multimedia processing. Although embedded vector representations of words offer impressive performance on many natural language processing (NLP) applications, the information of ordered input sequences is lost to some extent if only context-based samples are used in the training. For further performance improvement, two new post-processing techniques, called post-processing via variance normalization (PVN) and post-processing via dynamic embedding (PDE), are proposed in this work. The PVN method normalizes the variance of principal components of word vectors, while the PDE method learns orthogonal latent variables from ordered input sequences. The PVN and the PDE methods can be integrated to achieve better performance. We apply these post-processing techniques to several popular word embedding methods to yield their post-processed representations. Extensive experiments are conducted to demonstrate the effectiveness of the proposed post-processing techniques.", "title": "" }, { "docid": "1672b30a74bf5d1111b1f0892b4018bc", "text": "From the Divisions of Rheumatology, Allergy, and Immunology (M.R.M.) and Cardiology (D.M.D.); and the Departments of Radiology (J.Y.S.) and Pathology (R.P.H.), Massachusetts General Hospital; the Division of Rheumatology, Allergy, and Immunology, Brigham and Women’s Hospital (M.C.C.); and the Departments of Medicine (M.R.M., M.C.C., D.M.D.), Radiology (J.Y.S.), and Pathology (R.P.H.), Harvard Medical School — all in Boston.", "title": "" }, { "docid": "52d6711ebbafd94ab5404e637db80650", "text": "At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using Qlearning with an -greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.", "title": "" }, { "docid": "c8474e52d2e0812de0bc99d663e08be2", "text": "The relation between music listening and stress is inconsistently reported across studies, with the major part of studies being set in experimental settings. Furthermore, the psychobiological mechanisms for a potential stress-reducing effect remain unclear. We examined the potential stress-reducing effect of music listening in everyday life using both subjective and objective indicators of stress. 
Fifty-five healthy university students were examined in an ambulatory assessment study, both during a regular term week (five days) and during an examination week (five days). Participants rated their current music-listening behavior and perceived stress levels four times per day, and a sub-sample (n = 25) additionally provided saliva samples for the later analysis of cortisol and alpha-amylase on two consecutive days during both weeks. Results revealed that mere music listening was effective in reducing subjective stress levels (p = 0.010). The most profound effects were found when 'relaxation' was stated as the reason for music listening, with subsequent decreases in subjective stress levels (p ≤ 0.001) and lower cortisol concentrations (p ≤ 0.001). Alpha-amylase varied as a function of the arousal of the selected music, with energizing music increasing and relaxing music decreasing alpha-amylase activity (p = 0.025). These findings suggest that music listening can be considered a means of stress reduction in daily life, especially if it is listened to for the reason of relaxation. Furthermore, these results shed light on the physiological mechanisms underlying the stress-reducing effect of music, with music listening differentially affecting the physiological stress systems.", "title": "" }, { "docid": "f3295f975adac19269bd0c35fc49483f", "text": "This meta-analysis integrates 296 effect sizes reported in eye-tracking research on expertise differences in the comprehension of visualizations. Three theories were evaluated: Ericsson and Kintsch’s (Psychol Rev 102:211–245, 1995) theory of long-term working memory, Haider and Frensch’s (J Exp Psychol Learn Mem Cognit 25:172–190, 1999) information-reduction hypothesis, and the holistic model of image perception of Kundel et al. (Radiology 242:396–402, 2007). Eye movement and performance data were cumulated from 819 experts, 187 intermediates, and 893 novices. In support of the evaluated theories, experts, when compared with non-experts, had shorter fixation durations, more fixations on task-relevant areas, and fewer fixations on task-redundant areas; experts also had longer saccades and shorter times to first fixate relevant information, owing to superiority in parafoveal processing and selective attention allocation. Eye movements, reaction time, and performance accuracy were moderated by characteristics of visualization (dynamics, realism, dimensionality, modality, and text annotation), task (complexity, time-on-task, and task control), and domain (sports, medicine, transportation, other). These findings are discussed in terms of their implications for theories of visual expertise in professional domains and their significance for the design of learning environments.", "title": "" }, { "docid": "52db5c3530777a3ad47d16a0fb1b9556", "text": "Recently, a new form of structured summary on scientific papers is explored by grouping cited text spans from the reference paper. Its primary goal is to generate summaries based on the cited paper itself. Previously, traditional scientific summarization focused on citation-based methods by aggregating all citances that cite one unique paper without doing content-based citation analysis, while sometimes citations might differ between researchers or time slots. By investigating original text spans where scholars cited, the new method can reflect exact contributions of reference papers more. Therefore, how to identify cited text spans accurately becomes the first important problem to solve. 
Generally, it can be converted into finding the sentences in the reference paper that are most similar to the citation sentences. Taking it as a classification task, we investigate the potential of four actions to improve identification performance. Firstly, feature selections are conducted carefully according to multi-classifiers. Secondly, we apply sampling-based algorithms to preprocess class-imbalanced datasets. Since we integrated results via a weighted voting system, the third action is tuning parameters, such as the voting weights for multi-classifier integration or the running settings, to see if we can improve performance further. Evaluation results show the effectiveness of each action and demonstrate that researchers can take these actions for more accurate cited text span identification when doing scientific summarization.", "title": "" }, { "docid": "c4b0d93105e434d4d407575157a005a4", "text": "Online Judge is widely used by undergraduates to study programming. The users usually feel confused while locating the problems they prefer among the massive number available. This paper proposes a specialized recommendation model for online judge systems in order to present alternative problems that users may potentially be interested in. In this model, a three-level collaborative filtering recommendation method is referred to and redesigned to cater for the specific interaction mode of Online Judge. This method is described in detail in this paper and implemented in our demo system, which demonstrates its availability.", "title": "" }, { "docid": "f2d27238148b255c2177ee577730d7fc", "text": "Search by keyword is an extremely popular method for retrieving music. To support this, novel algorithms that automatically tag music are being developed. The conventional way to evaluate audio tagging algorithms is to compute measures of agreement between the output and the ground truth set. In this work, we introduce a new method for evaluating audio tagging algorithms on a large scale by collecting set-level judgments from players of a human computation game called TagATune. We present the design and preliminary results of an experiment comparing five algorithms using this new evaluation metric, and contrast the results with those obtained by applying several conventional agreement-based evaluation metrics.", "title": "" } ]
scidocsrr
44dbd3476e4c61d72b07aa220d1feb04
Spectral–Spatial Classification of Hyperspectral Data Based on Deep Belief Network
[ { "docid": "9af70a99010198feeeaff39003faa0f0", "text": "In this paper, we propose a new framework for spectral-spatial classification of hyperspectral image data. The proposed approach serves as an engine in the context of which active learning algorithms can exploit both spatial and spectral information simultaneously. An important contribution of our paper is the fact that we exploit the marginal probability distribution which uses the whole information in the hyperspectral data. We learn such distributions from both the spectral and spatial information contained in the original hyperspectral data using loopy belief propagation. The adopted probabilistic model is a discriminative random field in which the association potential is a multinomial logistic regression classifier and the interaction potential is a Markov random field multilevel logistic prior. Our experimental results with hyperspectral data sets collected using the National Aeronautics and Space Administration's Airborne Visible Infrared Imaging Spectrometer and the Reflective Optics System Imaging Spectrometer system indicate that the proposed framework provides state-of-the-art performance when compared to other similar developments.", "title": "" }, { "docid": "a27ec6150c03dd23ce3f8661a82a24dd", "text": "This paper introduces the concept and principles of hyperspectral imaging (HSI) and it briefly outlines how the defence and homeland security sectors can benefit from the application of this extremely versatile technology. This paper outlines the pros and cons of the various HSI system configurations, with particular emphasis on two of the most commonly deployed spectrograph techniques, namely, the dispersive system and the narrow-band tuning filter system. It describes how HSI can be utilized for target acquisition particularly when there is no a priori knowledge of the target, and then shows how it can be used for the recognition and tracking of targets with desired or known signature characteristics. The paper also briefly mentions the possibility of remote HSI being used for recognizing a human’s physiological state such as that induced by stress or anxiety. Real experimental data collected during the course of our research have been utilized throughout this paper to help understand the versatility and effectiveness of HSI technology.", "title": "" } ]
[ { "docid": "4ff75b22504d23c936610d3845337f1b", "text": "In the May 2007 issue of Pediatric Radiology, the article “Can classic metaphyseal lesions follow uncomplicated caesarean section?” [1] suggested that enough trauma could occur under these circumstances to produce fractures previously described as “highly specific for child abuse” [2]. However, the question of whether themetaphyses were normal to begin with was not raised. Why should this be an issue? Vitamin D deficiency (DD), initially believed to primarily affect the elderly and dark-skinned populations in the US, is now being demonstrated in otherwise healthy young adults, children, and infants of all races. In a review article on vitamin D published in the New England Journal of Medicine last year [3], Holick reviewed some of the recent literature, showing deficiency and insufficiency rates of 52% among Hispanic and African-American adolescents in Boston, 48% among white preadolescent females in Maine, 42% among African American females between 15 and 49 years of age, and 32% among healthy white men and women 18 to 29 years of age in Boston. A recent study of healthy infants and toddlers aged 8 to 24 months in Boston found an insufficiency rate of 40% and a deficiency rate of 12.1% [4]. In September 2007, a number of articles about congenital rickets were published in the Archives of Diseases in Childhood including an international perspective of mother and newborn DD reported from around the world [5]. Concentrations of 25-hydroxyvitamin D [25(OH)D] less than 25 nmol/l (10 ng/ml) were found in 18%, 25%, 80%, 42% and 61% of pregnant women in the UK, UAE, Iran, northern India and New Zealand, respectively, and in 60 to 84% of non-western women in the Netherlands. Currently, most experts in the US define DD as a 25(OH)D level less than 50 nmol/l (20 ng/ml). Levels between 20 and 30 ng/ml are considered to indicate insufficiency, reflecting increasing parathyroid hormone (PTH) levels and decreasing calcium absorption [3]. With such high prevalence of DD in our healthy young women, congenital deficiency is inevitable, since neonatal 25(OH)D concentrations are approximately two-thirds the maternal level [6]. Bodnar et al. [7] at the University of Pittsburgh, in the largest US study of mother and newborn infant vitamin D levels, found deficient or insufficient levels in 83% of black women and 92% of their newborns, as well as in 47% of white women and 66% of their newborns. The deficiencies were worse in the winter than in the summer. Over 90% of these women were on prenatal vitamins. Research is currently underway to formulate more appropriate recommendations for vitamin D supplementation during pregnancy (http://clinicaltrials.gov, ID: R01 HD043921). The obvious question is, “Why has DD once again become so common?” Multiple events have led to the high rates of DD. In the past, many foods were fortified with Pediatr Radiol (2008) 38:1210–1216 DOI 10.1007/s00247-008-1001-z", "title": "" }, { "docid": "bbb6b192974542b165d3f7a0d139a8e1", "text": "While gamification is gaining ground in business, marketing, corporate management, and wellness initiatives, its application in education is still an emerging trend. This article presents a study of the published empirical research on the application of gamification to education. The study is limited to papers that discuss explicitly the effects of using game elements in specific educational contexts. It employs a systematic mapping design. 
Accordingly, a categorical structure for classifying the research results is proposed based on the extracted topics discussed in the reviewed papers. The categories include gamification design principles, game mechanics, context of applying gamification (type of application, educational level, and academic subject), implementation, and evaluation. By mapping the published works to the classification criteria and analyzing them, the study highlights the directions of the currently conducted empirical research on applying gamification to education. It also indicates some major obstacles and needs, such as the need for proper technological support, for controlled studies demonstrating reliable positive or negative results of using specific game elements in particular educational contexts, etc. Although most of the reviewed papers report promising results, more substantial empirical research is needed to determine whether both extrinsic and intrinsic motivation of the learners can be influenced by gamification.", "title": "" }, { "docid": "29f8f508808b9c602abc776eefeac77c", "text": "Phase shifters based on double dielectric slab-loaded air-filled substrate-integrated waveguide (SIW) are proposed for high-performance applications at millimeter-wave frequencies. The three-layered air-filled SIW, made of a low-cost multilayer printed circuit board process, allows for substantial loss reduction and power handling capability enhancement compared with the conventional dielectric-filled counterpart. It is of particular interest for millimeter-wave applications that generally require low-loss transmission and high-density power handling. Its top and bottom layers may make use of a low-cost standard substrate, such as FR-4, on which baseband analog or digital circuits can be implemented so to obtain very compact, low cost, and self-packaged millimeter-wave integrated systems compared with the systems based on rectangular waveguide while achieving higher performance than the systems based on the conventional SIW. In this paper, it is demonstrated that transmission loss can be further improved at millimeter-wave frequencies with an additional polishing of the top and bottom conductor surfaces. Over Ka-band, an improvement of average 1.56 dB/m is experimentally demonstrated. Using the air-filled SIW fabrication process, dielectric slabs can be implemented along conductive via rows without any additional process. Based on the propagation properties of the obtained double dielectric slab-loaded air-filled SIW, phase shifters are proposed. To obtain a broadband response, an equal-length compensated phase shifter made of two air-filled SIW structures, offering a reverse varying propagation constant difference against frequency, is proposed and demonstrated at Ka-band. Finally, a single dielectric slab phase shifter is investigated for comparison and its bandwidth limitation is highlighted.", "title": "" }, { "docid": "81e0b85a142a81f9e2012f050c43fb43", "text": "The activation of under frequency load shedding (UFLS) is the last automated action against the severe frequency drops in order to rebalance the system. In this paper, the setting parameters of a multistage load shedding plan are obtained and optimized using a discretized model of dynamic system frequency response. The uncertainties of system parameters including inertia time constant, load damping, and generation deficiency are taken into account. 
The proposed UFLS model is formulated as a mixed-integer linear programming optimization problem to minimize the expected amount of load shedding. The activation of rate-of-change-of-frequency relays as the anti-islanding protection of distributed generators is considered. The Monte Carlo simulation method is utilized for modeling the uncertainties of system parameters. The results of probabilistic UFLS are then utilized to design four different UFLS strategies. The proposed dynamic UFLS plans are simulated over the IEEE 39-bus and the large-scale practical Iranian national grid.", "title": "" }, { "docid": "7d0dfce24bd539cb790c0c25348d075d", "text": "When learning from positive and unlabelled data, it is a strong assumption that the positive observations are randomly sampled from the distribution of X conditional on Y = 1, where X stands for the feature and Y the label. Most existing algorithms are optimally designed under the assumption. However, for many real-world applications, the observed positive examples are dependent on the conditional probability P (Y = 1|X) and should be sampled biasedly. In this paper, we assume that a positive example with a higher P (Y = 1|X) is more likely to be labelled and propose a probabilistic-gap based PU learning algorithm. Specifically, by treating the unlabelled data as noisy negative examples, we could automatically label a group of positive and negative examples whose labels are identical to the ones assigned by a Bayesian optimal classifier with a consistency guarantee. The relabelled examples have a biased domain, which is remedied by the kernel mean matching technique. The proposed algorithm is model-free and thus does not have any parameters to tune. Experimental results demonstrate that our method works well on both generated and real-world datasets.", "title": "" }, { "docid": "c3ad915ac57bf56c4adc47acee816b54", "text": "How does the brain “produce” conscious subjective experience, an awareness of something? This question has been regarded as perhaps the most challenging one facing science. Penfield et al. [9] had produced maps of where responses to electrical stimulation of cerebral cortex could be obtained in human neurosurgical patients. Mapping of cerebral activations in various subjective paradigms has been greatly extended more recently by utilizing PET scan and fMRI techniques. But there were virtually no studies of what the appropriate neurons do in order to elicit a conscious experience. The opportunity for me to attempt such studies arose when my friend and neurosurgeon colleague, Bertram Feinstein, invited me to utilize the opportunity presented by access to stimulating and recording electrodes placed for therapeutic purposes intracranially in awake and responsive patients. With the availability of an excellent facility and team of co-workers, I decided to study neuronal activity requirements for eliciting a simple conscious somatosensory experience, and compare that to activity requirements for unconscious detection of sensory signals. 
We discovered that a surprising duration of appropriate neuronal activations, up to about 500 msec, was required in order to elicit a conscious sensory experience [5]. This was true not only when the initiating stimulus was in any of the cerebral somatosensory pathways; several lines of evidence indicated that even a single stimulus pulse to the skin required similar durations of activities at the cortical level. That discovery led to further studies of such a delay factor for awareness generally, and to profound inferences for the nature of conscious subjective experience. It formed the basis of that highlight in my work [1,3]. For example, a neuronal requirement of about 500 msec to produce awareness meant that we do not experience our sensory world immediately, in real time. But that would contradict our intuitive feeling of the experience in real time. We solved this paradox with a hypothesis for “backward referral” of subjective experience to the time of the first cortical response, the primary evoked potential. This was tested and confirmed experimentally [8], a thrilling result. We could now add subjective referral in time to the already known subjective referral in space. Subjective referrals have no known neural basis and appear to be purely mental phenomena! Another experimental study supported my “time-on” theory for eliciting conscious sensations as opposed to unconscious detection [7]. The time-factor appeared also in an endogenous experience, the conscious intention or will to produce a purely voluntary act [4,6]. In this, we found that cerebral activity initiates this volitional process at least 350 msec before the conscious wish (W) to act appears. However, W appears about 200 msec before the muscles are activated. That retained the possibility that the conscious will could control the outcome of the volitional process; it could veto it and block the performance of the act. These discoveries have profound implications for the nature of free will, for individual responsibility and guilt. Discovery of these time factors led to unexpected ways of viewing conscious experience and unconscious mental functions. Experience of the sensory world is delayed. It raised the possibility that all conscious mental functions are initiated unconsciously and become conscious only if neuronal activities persist for a sufficiently long time. Conscious experiences must be discontinuous if there is a delay for each; the “stream of consciousness” must be modified. Quick actions or responses, whether in reaction times, sports activities, etc., would all be initially unconscious. Unconscious mental operations, as in creative thinking, artistic impulses, production of speech, performing in music, etc., can all proceed rapidly, since only brief neural actions are sufficient. Rapid unconscious events would allow faster processing in thinking, etc. The delay for awareness provides a physiological opportunity for modulatory influences to affect the content of an experience that finally appears, as in Freudian repression of certain sensory images or thoughts [2,3]. The discovery of the neural time factor (except in conscious will) could not have been made without intracranial access to the neural pathways. They provided an experimentally based entry into how new hypotheses, of how the brain deals with conscious experience, could be directly tested. That was in contrast to the many philosophical approaches which were speculative and mostly untestable. 
Evidence based views could now be accepted with some confidence.", "title": "" }, { "docid": "7a076d150ecc4382c20a6ce08f3a0699", "text": "Cyber-physical system (CPS) is a new trend in the Internet-of-Things related research works, where physical systems act as the sensors to collect real-world information and communicate them to the computation modules (i.e. cyber layer), which further analyze and notify the findings to the corresponding physical systems through a feedback loop. Contemporary researchers recommend integrating cloud technologies in the CPS cyber layer to ensure the scalability of storage, computation, and cross domain communication capabilities. Though there exist a few descriptive models of the cloud-based CPS architecture, it is important to analytically describe the key CPS properties: computation, control, and communication. In this paper, we present a digital twin architecture reference model for the cloud-based CPS, C2PS, where we analytically describe the key properties of the C2PS. The model helps in identifying various degrees of basic and hybrid computation-interaction modes in this paradigm. We have designed C2PS smart interaction controller using a Bayesian belief network, so that the system dynamically considers current contexts. The composition of fuzzy rule base with the Bayes network further enables the system with reconfiguration capability. We also describe analytically, how C2PS subsystem communications can generate even more complex system-of-systems. Later, we present a telematics-based prototype driving assistance application for the vehicular domain of C2PS, VCPS, to demonstrate the efficacy of the architecture reference model.", "title": "" }, { "docid": "1040e96ab179d5705eeb2983bdef31d3", "text": "Neural network-based systems can now learn to locate the referents of words and phrases in images, answer questions about visual scenes, and even execute symbolic instructions as first-person actors in partially-observable worlds. To achieve this so-called grounded language learning, models must overcome certain well-studied learning challenges that are also fundamental to infants learning their first words. While it is notable that models with no meaningful prior knowledge overcome these learning obstacles, AI researchers and practitioners currently lack a clear understanding of exactly how they do so. Here we address this question as a way of achieving a clearer general understanding of grounded language learning, both to inform future research and to improve confidence in model predictions. For maximum control and generality, we focus on a simple neural network-based language learning agent trained via policy-gradient methods to interpret synthetic linguistic instructions in a simulated 3D world. We apply experimental paradigms from developmental psychology to this agent, exploring the conditions under which established human biases and learning effects emerge. We further propose a novel way to visualise and analyse semantic representation in grounded language learning agents that yields a plausible computational account of the observed effects.", "title": "" }, { "docid": "bb3ba0a17727d2ea4e2aba74f7144da6", "text": "A roof automobile antenna module for Long Term Evolution (LTE) application is proposed. The module consists of two LTE antennas for the multiple-input multiple-output (MIMO) method which requests low mutual coupling between the antennas for larger capacity. 
On the other hand, the installation location for a roof-top module is limited from safety or appearance viewpoint and this makes the multiple LTE antennas located there cannot be separated with enough space. In order to retain high isolation between the two antennas in such compact space, the two antennas are designed to have different shapes, different heights and different polarizations, and their ground planes are placed separately. In the proposed module, one antenna is a monopole type and has its element printed on a shark-fin-shaped substrate which is perpendicular to the car-roof. Another one is a planar inverted-F antenna (PIFA) and has its element on a lower plane parallel to the roof. In this manner, the two antennas cover the LTE-bands with omni-directional radiation in the horizontal directions and high radiation gain. The two antennas have reasonably good isolation between them even the module is compact with a dimension of 62×65×73 mm3.", "title": "" }, { "docid": "c9065814777e0815da0ceb6a1a1b624a", "text": "Axial and radial power peaking factors (Fq, Fah) were estimated in Chashma Nuclear Power Plant Unit-1 (C-1) core using artificial Neural Network Technique (ANNT). Position of T4 control bank, axial offsets in four quadrants and quadrant power tilt ratios were taken as input variables in neural network designing. Power Peaking Factors (PPF) were calculated using computer codes FCXS, TWODFD and 3D-NB-2P for 52 core critical conditions made during C-1 fuel cycle-7. A multilayered Perceptron (MLP) neural network was trained by applying a set of measured input parameters and calculated output data for each core state. Training average relative errors between targets and ANNT estimated peaking factors were ranged from 0.018% to 0.054%, implies that ANNT introduces negligible error during training and exactly map the values. For validation process, PPF were estimated using ANNT for 36 cases devised at the time when power distribution measurement test and in-core/ex-core detectors calibration test were performed during fuel cycle. ANNT Results were compared with C-1 peaking factors measured with in-core flux mapping system and INCOPW computer code. Results showed that ANNT estimated PPF deviated from C-1 measured values within ±3%. The results of this study indicate that ANNT is an alternate technique for PPF measurement using only ex-core detectors signals data and independent of in-core flux mapping system. It might increase the time interval between in-core flux maps to 180 Effective Full Power Days (EFPDs) and reduce usage frequency of in-core flux mapping system during fuel cycle as present in Advanced Countries Nuclear Power Plants.", "title": "" }, { "docid": "8d6ebefca528255bc14561e1106522af", "text": "Constant power loads may yield instability due to the well-known negative impedance characteristic. This paper analyzes the factors that cause instability of a dc microgrid with multiple dc–dc converters. Two stabilization methods are presented for two operation modes: 1) constant voltage source mode; and 2) droop mode, and sufficient conditions for the stability of the dc microgrid are obtained by identifying the eigenvalues of the Jacobian matrix. The key is to transform the eigenvalue problem to a quadratic eigenvalue problem. 
When applying the methods in practical engineering, the salient feature is that the stability parameter domains can be estimated by the available constraints, such as the values of capacities, inductances, maximum load power, and distances of the cables. Compared with some classical methods, the proposed methods have wider stability region. The simulation results based on MATLAB/simulink platform verify the feasibility of the methods.", "title": "" }, { "docid": "08ccf9eacd74773f035dfdce4c9ca250", "text": "The postmodern organization has a design paradox in which leaders are concerned with efficiency and control as well as complex functioning. Traditional leadership theory has limited applicability to postmodern organizations as it is mainly focused on efficiency and control. As a result, a new theory of leadership that recognizes the design paradox has been proposed: complexity leadership theory. This theory conceptualizes the integration of formal leadership roles with complex functioning. Our particular focus is on leadership style and its effect as an enabler of complex functioning. We introduce dynamic network analysis, a new methodology for modeling and analyzing organizations as complex adaptive networks. Dynamic network analysis is a methodology that quantifies complexity leadership theory. Data was collected from a real-world network organization and dynamic network analysis used to explore the effects of leadership style as an enabler of complex functioning. Results and implications are discussed in relation to leadership theory and practice.", "title": "" }, { "docid": "dbbd98ed1a7ee32ab9626a923925c45d", "text": "In this paper, we present the gated selfmatching networks for reading comprehension style question answering, which aims to answer questions from a given passage. We first match the question and passage with gated attention-based recurrent networks to obtain the question-aware passage representation. Then we propose a self-matching attention mechanism to refine the representation by matching the passage against itself, which effectively encodes information from the whole passage. We finally employ the pointer networks to locate the positions of answers from the passages. We conduct extensive experiments on the SQuAD dataset. The single model achieves 71.3% on the evaluation metrics of exact match on the hidden test set, while the ensemble model further boosts the results to 75.9%. At the time of submission of the paper, our model holds the first place on the SQuAD leaderboard for both single and ensemble model.", "title": "" }, { "docid": "b50b912cb79368db51825e7cbea2df5d", "text": "Effectively solving the problem of sketch generation, which aims to produce human-drawing-like sketches from real photographs, opens the door for many vision applications such as sketch-based image retrieval and nonphotorealistic rendering. In this paper, we approach automatic sketch generation from a human visual perception perspective. Instead of gathering insights from photographs, for the first time, we extract information from a large pool of human sketches. In particular, we study how multiple Gestalt rules can be encapsulated into a unified perceptual grouping framework for sketch generation. We further show that by solving the problem of Gestalt confliction, i.e., encoding the relative importance of each rule, more similar to human-made sketches can be generated. For that, we release a manually labeled sketch dataset of 96 object categories and 7,680 sketches. 
A novel evaluation framework is proposed to quantify human likeness of machine-generated sketches by examining how well they can be classified using models trained from human data. Finally, we demonstrate the superiority of our sketches under the practical application of sketch-based image retrieval.", "title": "" }, { "docid": "f6f1462e8edd8200948168423c87c1bf", "text": "Users' behaviors are driven by their preferences across various aspects of items they are potentially interested in purchasing, viewing, etc. Latent space approaches model these aspects in the form of latent factors. Although such approaches have been shown to lead to good results, the aspects that are important to different users can vary. In many domains, there may be a set of aspects that all users care about and a set of aspects that are specific to different subsets of users. To explicitly capture this, we consider models in which there are some latent factors that capture the shared aspects and some user subset specific latent factors that capture the set of aspects that the different subsets of users care about.\n In particular, we propose two latent space models: rGLSVD and sGLSVD, that combine such global and user subset specific sets of latent factors. The rGLSVD model assigns the users into different subsets based on their rating patterns and then estimates a global and a set of user subset specific local models whose number of latent dimensions can vary.\n The sGLSVD model estimates both global and user subset specific local models by keeping the number of latent dimensions the same among these models but optimizes the grouping of the users in order to achieve the best approximation. Our experiments on various real-world datasets show that the proposed approaches significantly outperform state-of-the-art latent space top-N recommendation approaches.", "title": "" }, { "docid": "5b2fbfe1e9ceb9cb9e969df992ea1271", "text": "Distributed denial of service (DDoS) attacks continue to grow as a threat to organizations worldwide. From the first known attack in 1999 to the highly publicized Operation Ababil, DDoS attacks have a history of flooding the victim network with an enormous number of packets, hence exhausting the resources and preventing legitimate users from accessing them. Even with standard DDoS defense mechanisms in place, attackers are still able to launch attacks. These inadequate defense mechanisms need to be improved and integrated with other solutions. The purpose of this paper is to study the characteristics of DDoS attacks and the various models involved in attacks, and to provide a timeline of defense mechanisms and their improvements to combat DDoS attacks. In addition to this, a novel scheme is proposed to detect DDoS attacks efficiently by using the MapReduce programming model.", "title": "" }, { "docid": "f27547cfee95505fe8a2f44f845ddaed", "text": "High-performance, two-dimensional arrays of parallel-addressed InGaN blue micro-light-emitting diodes (LEDs) with individual element diameters of 8, 12, and 20 μm, respectively, and overall dimensions 490 × 490 μm, have been fabricated. In order to overcome the difficulty of interconnecting multiple device elements with sufficient step-height coverage for contact metallization, a novel scheme involving the etching of sloped-sidewalls has been developed. 
The devices have current-voltage (I-V) characteristics approaching those of broad-area reference LEDs fabricated from the same wafer, and give comparable (3-mW) light output in the forward direction to the reference LEDs, despite much lower active area. The external efficiencies of the micro-LED arrays improve as the dimensions of the individual elements are scaled down. This is attributed to scattering at the etched sidewalls of in-plane propagating photons into the forward direction.", "title": "" }, { "docid": "c14da39ea48b06bfb01c6193658df163", "text": "We present FingerPad, a nail-mounted device that turns the tip of the index finger into a touchpad, allowing private and subtle interaction while on the move. FingerPad enables touch input using magnetic tracking, by adding a Hall sensor grid on the index fingernail, and a magnet on the thumbnail. Since it permits input through the pinch gesture, FingerPad is suitable for private use because the movements of the fingers in a pinch are subtle and are naturally hidden by the hand. Functionally, FingerPad resembles a touchpad, and also allows for eyes-free use. Additionally, since the necessary devices are attached to the nails, FingerPad preserves natural haptic feedback without affecting the native function of the fingertips. Through user study, we analyze the three design factors, namely posture, commitment method and target size, to assess the design of the FingerPad. Though the results show some trade-off among the factors, generally participants achieve 93% accuracy for very small targets (1.2mm-width) in the seated condition, and 92% accuracy for 2.5mm-width targets in the walking condition.", "title": "" }, { "docid": "e63a5af56d8b20c9e3eac658940413ce", "text": "OBJECTIVE\nThis study examined the effects of various backpack loads on elementary schoolchildren's posture and postural compensations as demonstrated by a change in forward head position.\n\n\nSUBJECTS\nA convenience sample of 11 schoolchildren, aged 8-11 years participated.\n\n\nMETHODS\nSagittal digital photographs were taken of each subject standing without a backpack, and then with the loaded backpack before and after walking 6 minutes (6MWT) at free walking speed. This was repeated over three consecutive weeks using backpacks containing randomly assigned weights of 10%, 15%, or 20% body weight of each respective subject. The craniovertebral angle (CVA) was measured using digitizing software, recorded and analyzed.\n\n\nRESULTS\nSubjects demonstrated immediate and statistically significant changes in CVA, indicating increased forward head positions upon donning the backpacks containing 15% and 20% body weight. Following the 6MWT, the CVA demonstrated further statistically significant changes for all backpack loads indicating increased forward head postures. For the 15 & 20%BW conditions, more than 50% of the subjects reported discomfort after walking, with the neck as the primary location of reported pain.\n\n\nCONCLUSIONS\nBackpack loads carried by schoolchildren should be limited to 10% body weight due to increased forward head positions and subjective complaints at 15% and 20% body weight loads.", "title": "" }, { "docid": "4663b254bc9c93d19ca1accb2c34ac5c", "text": "Fog computing is an emerging paradigm that extends computation, communication, and storage facilities toward the edge of a network. 
Compared to traditional cloud computing, fog computing can support delay-sensitive service requests from end-users (EUs) with reduced energy consumption and low traffic congestion. Basically, fog networks are viewed as offloading computation and storage from the core network. Fog nodes in fog computing decide either to process the services using their available resources or to send them to the cloud server. Thus, fog computing helps to achieve efficient resource utilization and higher performance regarding delay, bandwidth, and energy consumption. This survey starts by providing an overview and the fundamentals of the fog computing architecture. Furthermore, service and resource allocation approaches are summarized to address several critical issues such as latency, bandwidth, and energy consumption in fog computing. Afterward, compared to other surveys, this paper provides an extensive overview of state-of-the-art network applications and major research aspects to design these networks. In addition, this paper highlights ongoing research efforts, open challenges, and research trends in fog computing.", "title": "" }
scidocsrr
e40826b05fcfa1dcefdd4b62c5fe6e8f
Security and privacy challenges in industrial Internet of Things
[ { "docid": "11ed7e0742ddb579efe6e1da258b0d3c", "text": "Supervisory Control and Data Acquisition(SCADA) systems are deeply ingrained in the fabric of critical infrastructure sectors. These computerized real-time process control systems, over geographically dispersed continuous distribution operations, are increasingly subject to serious damage and disruption by cyber means due to their standardization and connectivity to other networks. However, SCADA systems generally have little protection from the escalating cyber threats. In order to understand the potential danger and to protect SCADA systems, in this paper, we highlight their difference from standard IT systems and present a set of security property goals. Furthermore, we focus on systematically identifying and classifying likely cyber attacks including cyber-induced cyber-physical attack son SCADA systems. Determined by the impact on control performance of SCADA systems, the attack categorization criteria highlights commonalities and important features of such attacks that define unique challenges posed to securing SCADA systems versus traditional Information Technology(IT) systems.", "title": "" }, { "docid": "223a7496c24dcf121408ac3bba3ad4e5", "text": "Process control and SCADA systems, with their reliance on proprietary networks and hardware, have long been considered immune to the network attacks that have wreaked so much havoc on corporate information systems. Unfortunately, new research indicates this complacency is misplaced – the move to open standards such as Ethernet, TCP/IP and web technologies is letting hackers take advantage of the control industry’s ignorance. This paper summarizes the incident information collected in the BCIT Industrial Security Incident Database (ISID), describes a number of events that directly impacted process control systems and identifies the lessons that can be learned from these security events.", "title": "" }, { "docid": "2e8333674a0b9c782aa3796b6475bdf7", "text": "As embedded systems are more than ever present in our society, their security is becoming an increasingly important issue. However, based on the results of many recent analyses of individual firmware images, embedded systems acquired a reputation of being insecure. Despite these facts, we still lack a global understanding of embedded systems’ security as well as the tools and techniques needed to support such general claims. In this paper we present the first public, large-scale analysis of firmware images. In particular, we unpacked 32 thousand firmware images into 1.7 million individual files, which we then statically analyzed. We leverage this large-scale analysis to bring new insights on the security of embedded devices and to underline and detail several important challenges that need to be addressed in future research. We also show the main benefits of looking at many different devices at the same time and of linking our results with other large-scale datasets such as the ZMap’s HTTPS survey. In summary, without performing sophisticated static analysis, we discovered a total of 38 previously unknown vulnerabilities in over 693 firmware images. Moreover, by correlating similar files inside apparently unrelated firmware images, we were able to extend some of those vulnerabilities to over 123 different products. We also confirmed that some of these vulnerabilities altogether are affecting at least 140K devices accessible over the Internet. 
It would not have been possible to achieve these results without an analysis at such a wide scale. We believe that this project, which we plan to provide as a firmware unpacking and analysis web service, will help shed some light on the security of embedded devices. http://firmware.re", "title": "" } ]
[ { "docid": "71164831cb7376d92461f1cfd95c9244", "text": "Blood coagulation and complement pathways are two important natural defense systems. The high affinity interaction between the anticoagulant vitamin K-dependent protein S and the complement regulator C4b-binding protein (C4BP) is a direct physical link between the two systems. In human plasma, ~70% of total protein S circulates in complex with C4BP; the remaining is free. The anticoagulant activity of protein S is mainly expressed by the free form, although the protein S-C4BP complex has recently been shown to have some anticoagulant activity. The high affinity binding of protein S to C4BP provides C4BP with the ability to bind to negatively charged phospholipid membranes, which serves the purpose of localizing complement regulatory activity close to the membrane. Even though C4BP does not directly affect the coagulation system, it still influences the regulation of blood coagulation through its interaction with protein S. This is particularly important in states of inherited deficiency of protein S where the tight binding of protein S to C4BP results in a pronounced and selective drop in concentration of free protein S, whereas the concentration of protein S in complex with C4BP remains relatively unchanged. This review summarizes the current knowledge on C4BP with respect to its association with thrombosis and hemostasis.", "title": "" }, { "docid": "9ec7b122117acf691f3bee6105deeb81", "text": "We describe a new physics engine tailored to model-based control. Multi-joint dynamics are represented in generalized coordinates and computed via recursive algorithms. Contact responses are computed via efficient new algorithms we have developed, based on the modern velocity-stepping approach which avoids the difficulties with spring-dampers. Models are specified using either a high-level C++ API or an intuitive XML file format. A built-in compiler transforms the user model into an optimized data structure used for runtime computation. The engine can compute both forward and inverse dynamics. The latter are well-defined even in the presence of contacts and equality constraints. The model can include tendon wrapping as well as actuator activation states (e.g. pneumatic cylinders or muscles). To facilitate optimal control applications and in particular sampling and finite differencing, the dynamics can be evaluated for different states and controls in parallel. Around 400,000 dynamics evaluations per second are possible on a 12-core machine, for a 3D homanoid with 18 dofs and 6 active contacts. We have already used the engine in a number of control applications. It will soon be made publicly available.", "title": "" }, { "docid": "82f029ebcca0216bccfdb21ab13ac593", "text": "Presently, middleware technologies abound for the Internet-of-Things (IoT), directed at hiding the complexity of underlying technologies and easing the use and management of IoT resources. The middleware solutions of today are capable technologies, which provide much advanced services and that are built using superior architectural models, they however fail short in some important aspects: existing middleware do not properly activate the link between diverse applications with much different monitoring purposes and many disparate sensing networks that are of heterogeneous nature and geographically dispersed. 
Then, current middleware are unfit to provide a system-wide global arrangement (intelligence, routing, data delivery) that emerges from the behaviors of the constituent nodes rather than from the coordination of single elements, i.e., self-organization. This paper presents the SIMPLE self-organized and intelligent middleware platform. The SIMPLE middleware innovates over the current state of research precisely by exhibiting self-organization properties, a focus on data dissemination using multi-level subscription processing, and a tiered networking approach able to cope with many disparate, widespread and heterogeneous sensing networks (e.g. WSN). In this way, the SIMPLE middleware is provided as a robust zero-configuration technology, with no central system to depend on, immune to failures, and able to efficiently deliver the right data at the right time to the applications that need it.", "title": "" }, { "docid": "3ce03df4e5faa4132b2e791833549525", "text": "Cardiac left ventricle (LV) quantification is among the most clinically important tasks for identification and diagnosis of cardiac diseases, yet it is still a challenge due to the high variability of cardiac structure and the complexity of temporal dynamics. Full quantification, i.e., to simultaneously quantify all LV indices including two areas (cavity and myocardium), six regional wall thicknesses (RWT), three LV dimensions, and one cardiac phase, is even more challenging since the uncertain relatedness within and between the different types of indices may hinder the learning procedure from better convergence and generalization. In this paper, we propose a newly-designed multitask learning network (FullLVNet), which is constituted by a deep convolutional neural network (CNN) for expressive feature embedding of cardiac structure; two subsequent parallel recurrent neural network (RNN) modules for temporal dynamic modeling; and four linear models for the final estimation. During the final estimation, both intra- and inter-task relatedness are modeled to enforce improvement of generalization: (1) respecting intra-task relatedness, group lasso is applied to each of the regression tasks for sparse and common feature selection and consistent prediction; (2) respecting inter-task relatedness, three phase-guided constraints are proposed to penalize violation of the temporal behavior of the obtained LV indices. Experiments on MR sequences of 145 subjects show that FullLVNet achieves highly accurate prediction with our intra- and inter-task relatedness, leading to MAEs of 190 mm², 1.41 mm, and 2.68 mm for average areas, RWT, and dimensions, and an error rate of 10.4% for the phase classification. This endows our method with great potential in comprehensive clinical assessment of global, regional and dynamic cardiac function.", "title": "" }, { "docid": "9042faed1193b7bc4c31f2bc239c5d89", "text": "Hand gesture recognition for human computer interaction is an area of active research in computer vision and machine learning. The primary goal of gesture recognition research is to create a system which can identify specific human gestures and use them to convey information or for device control. This paper presents a comparative study of four classification algorithms for static hand gesture classification using two different hand feature data sets. The approach used consists in identifying hand pixels in each frame, extracting features, and using those features to recognize a specific hand pose. 
The results obtained proved that the ANN had a very good performance and that feature selection and data preparation are an important phase in the whole process, when using low-resolution images like the ones obtained with the camera in the current work.", "title": "" }, { "docid": "032589c39e258890e29196ca013a3e22", "text": "We describe Charm++, an object-oriented portable parallel programming language based on C++. Its design philosophy, implementation, sample applications and their performance on various parallel machines are described. Charm++ is an explicitly parallel language consisting of C++ with a few extensions. It provides a clear separation between sequential and parallel objects. The execution model of Charm++ is message driven, thus helping one write programs that are latency-tolerant. The language supports multiple inheritance, dynamic binding, overloading, strong typing, and reuse for parallel objects. Charm++ provides specific modes for sharing information between parallel objects. Extensive dynamic load balancing strategies are provided. It is based on the Charm parallel programming system, and its runtime system implementation reuses most of the runtime system for Charm.", "title": "" }, { "docid": "d979fdf75f2e555fa591a2e49d985d0e", "text": "Motion Coordination for VTOL Unmanned Aerial Vehicles develops new control design techniques for the distributed coordination of a team of autonomous unmanned aerial vehicles. In particular, it provides new control design approaches for the attitude synchronization of a formation of rigid body systems. In addition, by integrating new control design techniques with some concepts from nonlinear control theory and multi-agent systems, it presents a new theoretical framework for the formation control of a class of under-actuated aerial vehicles capable of vertical take-off and landing.", "title": "" }, { "docid": "e143eb298fff97f8f58cc52caa945640", "text": "Supervised domain adaptation—where a large generic corpus and a smaller in-domain corpus are both available for training—is a challenge for neural machine translation (NMT). Standard practice is to train a generic model and use it to initialize a second model, then continue training the second model on in-domain data to produce an in-domain model. We add an auxiliary term to the training objective during continued training that minimizes the cross entropy between the in-domain model's output word distribution and that of the out-of-domain model to prevent the model's output from differing too much from the original out-of-domain model. We perform experiments on EMEA (descriptions of medicines) and TED (rehearsed presentations), initialized from a general domain (WMT) model. Our method shows improvements over standard continued training by up to 1.5 BLEU.", "title": "" }, { "docid": "321049dbe0d9bae5545de3d8d7048e01", "text": "ShopTalk, a proof-of-concept system designed to assist individuals with visual impairments with finding shelved products in grocery stores, is built on the assumption that simple verbal route directions and layout descriptions can be used to leverage the O&M skills of independent visually impaired travelers to enable them to navigate the store and retrieve shelved products. This paper introduces ShopTalk and summarizes experiments performed in a real-world supermarket.", "title": "" }, { "docid": "48b6f2cb0c9fd50619f08c433ea40068", "text": "The medicinal value of cannabis (marijuana) is well documented in the medical literature. 
Cannabinoids, the active ingredients in cannabis, have many distinct pharmacological properties. These include analgesic, anti-emetic, anti-oxidative, neuroprotective and anti-inflammatory activity, as well as modulation of glial cells and tumor growth regulation. Concurrent with all these advances in the understanding of the physiological and pharmacological mechanisms of cannabis, there is a strong need for developing rational guidelines for dosing. This paper will review the known chemistry and pharmacology of cannabis and, on that basis, discuss rational guidelines for dosing.", "title": "" }, { "docid": "3072c5458a075e6643a7679ccceb1417", "text": "A novel interleaved flyback converter with leakage energy recycled is proposed. The proposed converter is combined with a dual-switch dual-transformer flyback topology. Two clamping diodes are used to reduce the voltage stress on the power switches to the input voltage level and also to recycle leakage inductance energy to the input voltage and capacitor. Besides, interleaved control is implemented to reduce the output current ripple. In addition, the voltage on the primary windings is reduced to half of the input voltage, thus reducing the turns ratio of the transformers to improve efficiency. The operating principle and the steady-state analysis of the proposed converter are discussed in detail. Finally, an experimental prototype is implemented with a 400 V input voltage and a 24 V/300 W output to verify the feasibility of the proposed converter. The experimental results reveal that the highest efficiency of the proposed converter is 94.42%, the full-load efficiency is 92.7%, and the 10% load efficiency is 92.61%.", "title": "" }, { "docid": "aee5eb38d6cbcb67de709a30dd37c29a", "text": "Correct disassembly of the HIV-1 capsid shell, called uncoating, is increasingly recognised as central for multiple steps during retroviral replication. However, the timing, localisation and mechanism of uncoating are poorly understood and progress in this area is hampered by difficulties in measuring the process. Previous work suggested that uncoating occurs soon after entry of the viral core into the cell, but recent studies report later uncoating, at or in the nucleus. Furthermore, inhibiting reverse transcription delays uncoating, linking these processes. Here, we have used a combined approach of experimental interrogation of viral mutants and mathematical modelling to investigate the timing of uncoating with respect to reverse transcription. By developing a minimal, testable, model and employing multiple uncoating assays to overcome the disadvantages of each single assay, we find that uncoating is not concomitant with the initiation of reverse transcription. Instead, uncoating appears to be triggered once reverse transcription reaches a certain stage, namely shortly after first strand transfer. Using multiple approaches, we have identified a point during reverse transcription that induces uncoating of the HIV-1 CA shell. We propose that uncoating initiates after the first strand transfer of reverse transcription.", "title": "" }, { "docid": "5596f6d7ebe828f4d6f5ab4d94131b1d", "text": "A successful quality model is indispensable in a rich variety of multimedia applications, e.g., image classification and video summarization. Conventional approaches have developed many features to assess media quality at both low and high levels. However, they cannot reflect the processing of the human visual cortex in media perception. 
It is generally accepted that an ideal quality model should be biologically plausible, i.e., capable of mimicking human gaze shifting as well as the complicated visual cognition. In this paper, we propose a biologically inspired quality model, focusing on interpreting how humans perceive visually and semantically important regions in an image (or a video clip). Particularly, we first extract local descriptors (graphlets in this work) from an image/frame. They are projected onto the perceptual space, which is built upon a set of low-level and high-level visual features. Then, an active learning algorithm is utilized to select graphlets that are both visually and semantically salient. The algorithm is based on the observation that each graphlet can be linearly reconstructed by its surrounding ones, and spatially nearer ones make a greater contribution. In this way, both the local and global geometric properties of an image/frame can be encoded in the selection process. These selected graphlets are linked into a so-called biological viewing path (BVP) to simulate human visual perception. Finally, the quality of an image or a video clip is predicted by a probabilistic model. Experiments show that 1) the predicted BVPs are over 90% consistent with real human gaze shifting paths on average; and 2) our quality model outperforms many of its competitors remarkably.", "title": "" }, { "docid": "5dee244ee673909c3ba3d3d174a7bf83", "text": "The fingerprint has remained a vital index for human recognition. In the field of security, a series of Automatic Fingerprint Identification Systems (AFIS) has been developed. One of the indices for evaluating the contributions of these systems to the enforcement of security is the degree to which they appropriately verify or identify input fingerprints. This degree is generally determined by the quality of the fingerprint images and the efficiency of the algorithm. In this paper, some of the sub-models of an existing mathematical algorithm for fingerprint image enhancement were modified to obtain new and improved versions. The new versions consist of different mathematical models for fingerprint image segmentation, normalization, ridge orientation estimation, ridge frequency estimation, Gabor filtering, binarization and thinning. The implementation was carried out in an environment characterized by the Windows Vista Home Basic operating system as the platform and Matrix Laboratory (MATLAB) as the front-end engine. Synthetic images as well as real fingerprints obtained from the FVC2004 fingerprint database DB3 set A were used to test the adequacy of the modified sub-models and the resulting algorithm. The results show that the modified sub-models perform well with significant improvement over the original versions. The results also show the necessity of each level of the enhancement. Keywords: AFIS; pattern recognition; pattern matching; fingerprint; minutiae; image enhancement.", "title": "" }, { "docid": "f68161697aed6d12598b0b9e34aeae68", "text": "Automation in agriculture comes into play to increase the productivity, quality and economic growth of the country. Fruit grading is an important process for producers which affects fruit quality evaluation and the export market. Although grading and sorting can be done by humans, it is slow, labor-intensive, error-prone and tedious. Hence, there is a need for an intelligent fruit grading system. In recent years, researchers have developed numerous algorithms for fruit sorting using computer vision. 
Color, textural and morphological features are the most commonly used to identify the diseases, maturity and class of fruits. Subsequently, these features are used to train a soft-computing-based network. In this paper, the use of image processing in agriculture is reviewed so as to provide an insight into the use of vision-based systems, highlighting their advantages and disadvantages.", "title": "" }, { "docid": "53272bf6e5a466a361987feaad09a9e2", "text": "Biomechanical energy harvesting is a feasible solution for powering wearable sensors by directly driving electronics or acting as wearable self-powered sensors. A wearable insole that not only can harvest energy from foot pressure during walking but also can serve as a self-powered human motion recognition sensor is reported. The insole is designed as a sandwich structure consisting of two wavy silica gel films separated by a flexible piezoelectric foil stave, which has higher performance compared with conventional piezoelectric harvesters with a cantilever structure. The energy harvesting insole is capable of driving some common electronics by scavenging energy from human walking. Moreover, it can be used to recognize human motion as the waveforms it generates change when people are in different locomotion modes. It is demonstrated that different types of human motion such as walking and running are clearly classified by the insole without any external power source. This work not only expands the applications of piezoelectric energy harvesters for wearable power supplies and self-powered sensors, but also provides possible approaches for wearable self-powered human motion monitoring that is of great importance in many fields such as rehabilitation and sports science.", "title": "" }, { "docid": "19ea9b23f8757804c23c21293834ff3f", "text": "We try to address the problem of document layout understanding using a simple algorithm which generalizes across multiple domains while training on just a few examples per domain. We approach this problem via a supervised object detection method and propose a methodology to overcome the requirement of large datasets. We use the concept of transfer learning by pre-training our object detector on a simple artificial (source) dataset and fine-tuning it on a tiny domain-specific (target) dataset. We show that this methodology works for multiple domains with as few as 10 training documents. We demonstrate the effect of each component of the methodology on the end result and show the superiority of this methodology over simple object detectors.", "title": "" }, { "docid": "35f8b54ee1fbf153cb483fc4639102a5", "text": "This research studies the risk prediction of hospital readmissions using metaheuristic and data mining approaches. This is a critical issue in the U.S. healthcare system because a large percentage of preventable hospital readmissions derive from a low quality of care during patients’ stays in the hospital as well as poor arrangement of the discharge process. To reduce the number of hospital readmissions, the Centers for Medicare and Medicaid Services has launched a readmission penalty program in which hospitals receive reduced reimbursement for high readmission rates for Medicare beneficiaries. In the current practice, patient readmission risk is widely assessed by evaluating a LACE score including length of stay (L), acuity level of admission (A), comorbidity condition (C), and use of emergency rooms (E). 
However, the LACE threshold classifying high- and low-risk readmitted patients is set up by clinical practitioners based on specific circumstances and experiences. This research proposes various data mining approaches to identify the risk group of a particular patient, including a neural network model, a random forest (RF) algorithm, and a hybrid model of a swarm intelligence heuristic and a support vector machine (SVM). The proposed neural network algorithm, the RF and the SVM classifiers are used to model patients’ characteristics, such as their ages, insurance payers, medication risks, etc. Experiments are conducted to compare the performance of the proposed models with previous research. Experimental results indicate that the proposed SVM prediction model with particle swarm parameter tuning outperforms other algorithms and achieves 78.4% overall prediction accuracy and 97.3% sensitivity. The high sensitivity shows its strength in correctly identifying readmitted patients. The outcome of this research will help reduce overall hospital readmission rates and allow hospitals to utilize their resources more efficiently to enhance interventions for high-risk patients.", "title": "" }, { "docid": "97fa48d92c4a1b9d2bab250d5383173c", "text": "This paper presents a new type of axial flux motor, the yokeless and segmented armature (YASA) topology. The YASA motor has no stator yoke, a high fill factor and short end windings, which all increase the torque density and efficiency of the machine. Thus, the topology is highly suited for high performance applications. The LIFEcar project is aimed at producing the world's first hydrogen sports car, and the first YASA motors have been developed specifically for the vehicle. The stator segments have been made using powdered iron material which enables the machine to be run up to 300 Hz. The iron in the stator of the YASA motor is dramatically reduced when compared to other axial flux motors, typically by 50%, causing an overall increase in torque density of around 20%. A detailed finite element analysis (FEA) of the YASA machine is presented and it is shown that the motor has a peak efficiency of over 95%.", "title": "" }, { "docid": "3b1d73691176ada154bab7716c6e776c", "text": "Purpose – The purpose of this paper is to investigate the factors that affect the adoption of cloud computing by firms belonging to the high-tech industry. The eight factors examined in this study are relative advantage, complexity, compatibility, top management support, firm size, technology readiness, competitive pressure, and trading partner pressure. Design/methodology/approach – A questionnaire-based survey was used to collect data from 111 firms belonging to the high-tech industry in Taiwan. Relevant hypotheses were derived and tested by logistic regression analysis. Findings – The findings revealed that relative advantage, top management support, firm size, competitive pressure, and trading partner pressure characteristics have a significant effect on the adoption of cloud computing. Research limitations/implications – The research was conducted in the high-tech industry, which may limit the generalisability of the findings. Practical implications – The findings provide cloud computing service providers with a better understanding of what affects cloud computing adoption, with relevant insight into current promotions. 
Originality/value – The research contributes to the understanding of the adoption of a new technology, cloud computing, in the high-tech industry through the use of a wide range of variables. The findings also help firms consider their information technology investments when implementing cloud computing.", "title": "" } ]
scidocsrr
cac482b007ef3913880bafa4feb73f84
Visual Semantic Planning Using Deep Successor Representations
[ { "docid": "fb8455a00e4af693a3926746fd1fcf01", "text": "Video games are a compelling source of annotated data as they can readily provide fine-grained groundtruth for diverse tasks. However, it is not clear whether the synthetically generated data has enough resemblance to the real-world images to improve the performance of computer vision models in practice. We present experiments assessing the effectiveness on real-world data of systems trained on synthetic RGB images that are extracted from a video game. We collected over 60,000 synthetic samples from a modern video game with similar conditions to the real-world CamVid and Cityscapes datasets. We provide several experiments to demonstrate that the synthetically generated RGB images can be used to improve the performance of deep neural networks on both image segmentation and depth estimation. These results show that a convolutional network trained on synthetic data achieves a similar test error to a network that is trained on real-world data for dense image classification. Furthermore, the synthetically generated RGB images can provide similar or better results compared to the real-world datasets if a simple domain adaptation technique is applied. Our results suggest that collaboration with game developers for an accessible interface to gather data is potentially a fruitful direction for future work in computer vision.", "title": "" }, { "docid": "3ac18d1126ce613325d14d282164042c", "text": "Transfer in reinforcement learning refers to the notion that generalization should occur not only within a task but also across tasks. We propose a transfer framework for the scenario where the reward function changes between tasks but the environment’s dynamics remain the same. Our approach rests on two key ideas: successor features, a value function representation that decouples the dynamics of the environment from the rewards, and generalized policy improvement, a generalization of dynamic programming’s policy improvement operation that considers a set of policies rather than a single one. Put together, the two ideas lead to an approach that integrates seamlessly within the reinforcement learning framework and allows the free exchange of information across tasks. The proposed method also provides performance guarantees for the transferred policy even before any learning has taken place. We derive two theorems that set our approach in firm theoretical ground and present experiments that show that it successfully promotes transfer in practice, significantly outperforming alternative methods in a sequence of navigation tasks and in the control of a simulated robotic arm.", "title": "" }, { "docid": "9dd245f75092adc8d8bb2b151275789b", "text": "Current model free learning-based robot grasping approaches exploit human-labeled datasets for training the models. However, there are two problems with such a methodology: (a) since each object can be grasped in multiple ways, manually labeling grasp locations is not a trivial task; (b) human labeling is biased by semantics. While there have been attempts to train robots using trial-and-error experiments, the amount of data used in such experiments remains substantially low and hence makes the learner prone to over-fitting. In this paper, we take the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts. 
This allows us to train a Convolutional Neural Network (CNN) for the task of predicting grasp locations without severe overfitting. In our formulation, we recast the regression problem to an 18-way binary classification over image patches. We also present a multi-stage learning approach where a CNN trained in one stage is used to collect hard negatives in subsequent stages. Our experiments clearly show the benefit of using large-scale datasets (and multi-stage training) for the task of grasping. We also compare to several baselines and show state-of-the-art performance on generalization to unseen objects for grasping.", "title": "" }, { "docid": "6286480f676c75e1cac4af9329227258", "text": "Understanding physical phenomena is a key competence that enables humans and animals to act and interact under uncertain perception in previously unseen environments containing novel object and their configurations. Developmental psychology has shown that such skills are acquired by infants from observations at a very early stage. In this paper, we contrast a more traditional approach of taking a modelbased route with explicit 3D representations and physical simulation by an end-to-end approach that directly predicts stability and related quantities from appearance. We ask the question if and to what extent and quality such a skill can directly be acquired in a data-driven way— bypassing the need for an explicit simulation. We present a learning-based approach based on simulated data that predicts stability of towers comprised of wooden blocks under different conditions and quantities related to the potential fall of the towers. The evaluation is carried out on synthetic data and compared to human judgments on the same stimuli.", "title": "" } ]
[ { "docid": "6c11b5d9ec8a89f843b08fe998de194c", "text": "As large-scale multimodal data are ubiquitous in many real-world applications, learning multimodal representations for efficient retrieval is a fundamental problem. Most existing methods adopt shallow structures to perform multimodal representation learning. Due to a limitation of learning ability of shallow structures, they fail to capture the correlation of multiple modalities. Recently, multimodal deep learning was proposed and had proven its superiority in representing multimodal data due to its high nonlinearity. However, in order to learn compact and accurate representations, how to reduce the redundant information lying in the multimodal representations and incorporate different complexities of different modalities in the deep models is still an open problem. In order to address the aforementioned problem, in this paper we propose a hashing-based orthogonal deep model to learn accurate and compact multimodal representations. The method can better capture the intra-modality and inter-modality correlations to learn accurate representations. Meanwhile, in order to make the representations compact, the hashing-based model can generate compact hash codes and the proposed orthogonal structure can reduce the redundant information lying in the codes by imposing orthogonal regularizer on the weighting matrices. We also theoretically prove that, in this case, the learned codes are guaranteed to be approximately orthogonal. Moreover, considering the different characteristics of different modalities, effective representations can be attained with different number of layers for different modalities. Comprehensive experiments on three real-world datasets demonstrate a substantial gain of our method on retrieval tasks compared with existing algorithms.", "title": "" }, { "docid": "7029d1f66732c45816ce9b7b5554f884", "text": "The most critical problem in the world is to meet the energy demand, because of steadily increasing energy consumption. Refrigeration systems` electricity consumption has big portion in overall consumption. Therefore, considerable attention has been given to refrigeration capacity modulation system in order to decrease electricity consumption of these systems. Capacity modulation is used to meet exact amount of load at partial load and lowered electricity consumption by avoiding over capacity using. Variable speed refrigeration systems are the most common capacity modulation method for commercially and household purposes. Although the vapor compression refrigeration designed to satisfy the maximum load, they work at partial load conditions most of their life cycle and they are generally regulated as on/off controlled. The experimental chiller system contains four main components: compressor, condenser, expansion device, and evaporator in Fig.1 where this study deals with effects of different control methods on variable speed compressor (VSC) and electronic expansion valve (EEV). This chiller system has a scroll type VSC and a stepper motor controlled EEV.", "title": "" }, { "docid": "9ff6c86b2920c10d33e1b3d52fbc92d8", "text": "In recent years, analyzing task-based fMRI (tfMRI) data has become an essential tool for understanding brain function and networks. However, due to the sheer size of tfMRI data, its intrinsic complex structure, and lack of ground truth of underlying neural activities, modeling tfMRI data is hard and challenging. 
Previously proposed data-modeling methods including Independent Component Analysis (ICA) and Sparse Dictionary Learning only provided a weakly established model based on blind source separation under the strong assumption that original fMRI signals could be linearly decomposed into time series components with corresponding spatial maps. Meanwhile, analyzing and learning a large amount of tfMRI data from a variety of subjects has been shown to be very demanding and challenging even with technological advances in computational hardware. Given the Convolutional Neural Network (CNN), a robust method for learning high-level abstractions from low-level data such as tfMRI time series, in this work we propose a fast and scalable novel framework for a distributed deep convolutional autoencoder model. This model aims both to learn the complex hierarchical structure of the tfMRI data and to leverage the processing power of multiple GPUs in a distributed fashion. To implement such a model, we have created an enhanced processing pipeline on top of Apache Spark and the TensorFlow library, leveraging a very large cluster of GPU machines. Experimental data from applying the model on the Human Connectome Project (HCP) show that the proposed model is efficient and scalable toward tfMRI big data analytics, thus enabling data-driven extraction of hierarchical neuroscientific information from massive fMRI big data in the future.", "title": "" }, { "docid": "8fcc03933f2287eb6e6a6d2730d2c0cd", "text": "While virtualization helps to enable multi-tenancy in data centers, it introduces new challenges to resource management in traditional OSes. We find that one important design in an OS, prioritizing interactive and I/O-bound workloads, can become ineffective in a virtualized OS. Resource multiplexing between multiple tenants breaks the assumption of continuous CPU availability in physical systems and causes two types of priority inversions in virtualized OSes. In this paper, we present xBalloon, a lightweight approach to preserving I/O prioritization. It uses a balloon process in the virtualized OS to avoid priority inversion in both short-term and long-term scheduling. Experiments in a local Xen environment and Amazon EC2 show that xBalloon improves I/O performance in a recent Linux kernel by as much as 136% on network throughput, 95% on disk throughput, and 125x on network tail latency.", "title": "" }, { "docid": "dec89c3035ce2456c23e547252c5824a", "text": "This is a survey of some of the nice properties of the associahedron (also called Stasheff polytope) from several points of view: topological, geometrical, combinatorial and algebraic.", "title": "" }, { "docid": "4cb942fd2549525412b1a49590d4dfbd", "text": "This paper proposes a new adaptive patient-cooperative control strategy for improving the effectiveness and safety of robot-assisted ankle rehabilitation. This control strategy has been developed and implemented on a compliant ankle rehabilitation robot (CARR). The CARR is actuated by four Festo fluidic muscles arranged in parallel along the calf and has three rotational degrees of freedom. The control scheme consists of a position controller implemented in joint space and a high-level admittance controller in task space. The admittance controller adaptively modifies the predefined trajectory based on real-time ankle measurement, which enhances the training safety of the robot. Experiments were carried out using different modes to validate the proposed control strategy on the CARR. 
Three training modes include: 1) a passive mode using a joint-space position controller, 2) a patient–robot cooperative mode using a fixed-parameter admittance controller, and 3) a cooperative mode using a variable-parameter admittance controller. Results demonstrate satisfactory trajectory tracking accuracy, even when externally disturbed, with a maximum normalized root mean square deviation less than 5.4%. These experimental findings suggest the potential of this new patient-cooperative control strategy as a safe and engaging control solution for rehabilitation robots.", "title": "" }, { "docid": "066d3a381ffdb2492230bee14be56710", "text": "The third generation partnership project released its first 5G security specifications in March 2018. This paper reviews the proposed security architecture and its main requirements and procedures and evaluates them in the context of known and new protocol exploits. Although security has been improved from previous generations, our analysis identifies potentially unrealistic 5G system assumptions and protocol edge cases that can render 5G communication systems vulnerable to adversarial attacks. For example, null encryption and null authentication are still supported and can be used in valid system configurations. With no clear proposal to tackle pre-authentication message-based exploits, mobile devices continue to implicitly trust any serving network, which may or may not enforce a number of optional security features, or which may not be legitimate. Moreover, several critical security and key management functions are considered beyond the scope of the specifications. The comparison with known 4G long-term evolution protocol exploits reveals that the 5G security specifications, as of Release 15, Version 1.0.0, do not fully address the user privacy and network availability challenges.", "title": "" }, { "docid": "e86f1f37eac7c2182c5f77c527d8fac6", "text": "Eating members of one's own species is one of the few remaining taboos in modern human societies. In humans, aggression cannibalism has been associated with mental illness. The objective of this report is to examine the unique set of circumstances and characteristics revealing the underlying etiology leading to such an act and the type of psychological effect it has for the perpetrator. A case report of a patient with paranoid schizophrenia who committed patricide and cannibalism is presented. The psychosocial implications of anthropophagy on the particular patient management are outlined.", "title": "" }, { "docid": "1e1721aa0e496c4fef0defdae7050a33", "text": "Performing delicate Minimally Invasive Surgeries (MIS) forces surgeons to accurately assess the position and orientation (pose) of surgical instruments. In current practice, this pose information is provided by conventional tracking systems (optical and electro-magnetic). Two challenges render these systems inadequate for minimally invasive bone surgery: the need for instrument positioning with high precision and occluding tissue blocking the line of sight. Fluoroscopic tracking is limited by the radiation exposure to patient and surgeon. A possible solution is constraining the acquisition of x-ray images. The distinct acquisitions at irregular intervals require a pose estimation solution instead of a tracking technique. We develop i3PosNet (Iterative Image Instrument Pose estimation Network), a patchbased modular Deep Learning method enhanced by geometric considerations, which estimates the pose of surgical instruments from single x-rays. 
For the evaluation of i3PosNet, we consider the scenario of drilling in the otobasis. i3PosNet generalizes well to different instruments, which we show by applying it to a screw, a drill and a robot. i3PosNet consistently estimates the pose of surgical instruments better than conventional image registration techniques by a factor of 5 or more, achieving in-plane position errors of 0.031 mm ± 0.025 mm and angle errors of 0.031° ± 1.126°. Additional factors, such as depth, are evaluated to 0.361 mm ± 8.98 mm from single radiographs.", "title": "" }, { "docid": "c9aec633deebe159fa01c7af626d7ae4", "text": "Many tasks in NLP stand to benefit from robust measures of semantic similarity for units above the level of individual words. Rich semantic resources such as WordNet provide local semantic information at the lexical level. However, effectively combining this information to compute scores for phrases or sentences is an open problem. Our algorithm aggregates local relatedness information via a random walk over a graph constructed from an underlying lexical resource. The stationary distribution of the graph walk forms a “semantic signature” that can be compared to another such distribution to get a relatedness score for texts. On a paraphrase recognition task, the algorithm achieves an 18.5% relative reduction in error rate over a vector-space baseline. We also show that the graph walk similarity between texts has complementary value as a feature for recognizing textual entailment, improving on a competitive baseline system.", "title": "" }, { "docid": "17130d2f31980978e3316b800b450ddd", "text": "Automatic question-answering is a classical problem in natural language processing, which aims at designing systems that can automatically answer a question, in the same way as a human does. In this work, we propose a deep learning based model for automatic question-answering. 
First, the questions and answers are embedded using neural probabilistic modeling. Then a deep similarity neural network is trained to find the similarity score of a pair of question and answer. Then, for each question, the best answer is found as the one with the highest similarity score. We first train this model on a large-scale public question-answering database, and then fine-tune it to transfer to the customer-care chat data. We have also tested our framework on a public question-answering database and achieved very good performance.", "title": "" }, { "docid": "1e4d9d451b3713c9a06a7b0b8cb4e471", "text": "Web 3.0 is approaching fast and Online Social Networks (OSNs) are becoming more and more pervasive in today's daily activities. A subsequent consequence is that criminals are running at the same speed as technology and most of the time highly sophisticated technological machinery is used by them. Images are often involved in illicit or illegal activities, with it now being fundamental to try to ascertain as much information about a given image as possible. Today, most of the images coming from the Internet flow through OSNs. The paper analyzes the characteristics of images published on some OSNs. The analysis mainly focuses on how the OSN processes the uploaded images and what changes are made to some of the characteristics, such as the JPEG quantization table, pixel resolution and related metadata. The experimental analysis was carried out in June-July 2011 on Facebook, Badoo and Google+. It also has a forensic value: it can be used to establish whether an image has been downloaded from an OSN or not.", "title": "" }, { "docid": "bcefecf766a2af447fb904083d587ffc", "text": "We operate a change of paradigm and hypothesize that keywords are more likely to be found among the influential nodes of a graph-of-words rather than among its nodes high on eigenvector-related centrality measures. To test this hypothesis, we introduce unsupervised techniques that capitalize on graph degeneracy. Our methods strongly and significantly outperform all baselines on two datasets (short and medium size documents), and reach the best performance on the third one (long documents).", "title": "" }, { "docid": "85f9eb1b79ba0bc11e275c8a48731e8f", "text": "OBJECTIVES\nThe long-term effects of amino acid-based formula (AAF) in the treatment of cow's milk allergy (CMA) are largely unexplored. The present study comparatively evaluates body growth and protein metabolism in CMA children treated with AAF or with extensively hydrolyzed whey formula (eHWF), and healthy controls.\n\n\nMETHODS\nA 12-month multicenter randomized controlled trial was conducted in outpatients with CMA (age 5-12 m) randomized into 2 groups, treated with AAF (group 1) and eHWF (group 2), and compared with healthy controls (group 3) fed with follow-on (if age <12 months) or growing-up formula (if age >12 months). At enrolment (T0), after 3 (T3), 6 (T6), and 12 months (T12), a clinical evaluation was performed. At T0 and T3, serum levels of albumin, urea, total protein, retinol-binding protein, and insulin-like growth factor 1 were measured in subjects with CMA.\n\n\nRESULTS\nTwenty-one subjects in group 1 (61.9% boys, age 6.5 ± 1.5 months), 19 in group 2 (57.9% boys, age 7 ± 1.7 months) and 25 subjects in group 3 (48% boys, age 5.5 ± 0.5 months) completed the study. At T0, the weight z score was similar in groups 1 (-0.74) and 2 (-0.76), with differences compared to group 3 (-0.17, P < 0.05). 
At T12, the weight z score was similar across the 3 groups, without significant differences. There were no significant changes in protein metabolism in children in groups 1 and 2.\n\n\nCONCLUSION\nLong-term treatment with AAF is safe and allows adequate body growth in children with CMA.", "title": "" }, { "docid": "8a1a255a338a06c729f586b8c9b513ac", "text": "In addition to maintaining the GenBank(R) nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides data analysis and retrieval resources for the data in GenBank and other biological data made available through NCBI's website. NCBI resources include Entrez, PubMed, PubMed Central, LocusLink, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Electronic PCR, OrfFinder, Spidey, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, Cancer Chromosome Aberration Project (CCAP), Entrez Genomes and related tools, the Map Viewer, Model Maker, Evidence Viewer, Clusters of Orthologous Groups (COGs) database, Retroviral Genotyping Tools, SARS Coronavirus Resource, SAGEmap, Gene Expression Omnibus (GEO), Online Mendelian Inheritance in Man (OMIM), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD) and the Conserved Domain Architecture Retrieval Tool (CDART). Augmenting many of the web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of the resources can be accessed through the NCBI home page at: http://www.ncbi.nlm.nih.gov.", "title": "" }, { "docid": "7edfde7d7875d88702db2aabc4ac2883", "text": "This paper proposes a novel approach to build integer multiplication circuits based on speculation, a technique which performs a faster, but occasionally wrong, operation, resorting to a multi-cycle error correction circuit only in the rare case of error. The proposed speculative multiplier uses a novel speculative carry-save reduction tree using three steps: partial products recoding, partial products partitioning, and speculative compression. The speculative tree uses speculative (m:2) counters, with m > 3, that are faster than a conventional tree using full-adders and half-adders. A technique to automatically choose the suitable speculative counters, taking into account both error probability and delay, is also presented in the paper. The speculative tree is completed with a fast speculative carry-propagate adder and an error correction circuit. We have synthesized speculative multipliers for several operand lengths using the UMC 65 nm library. Comparisons with conventional multipliers show that speculation is effective when high speed is required. Speculative multipliers allow reaching a higher speed compared with conventional counterparts and are also quite effective in terms of power dissipation, when high speed operation is required.", "title": "" }, { "docid": "95c535a587344fd0efbd5d9d299b1b98", "text": "We propose a method to integrate feature extraction and prediction as a single optimization task by stacking a three-layer model as a deep learning structure. The first layer of the deep structure is a Long Short Term Memory (LSTM) model which deals with the sequential input data from a group of assets. The output of the LSTM model is followed by mean-pooling, and the result is fed to the second layer. The second layer is a neural network layer, which further learns the feature representation. The output of the second layer is connected to a survival model as the third layer for predicting asset health condition. 
The parameters of the three-layer model are optimized together via stochastic gradient descent. The proposed method was tested on a small dataset collected from a fleet of mining haul trucks. The model resulted in an "individualized" failure probability representation for assessing the health condition of each individual asset, which well separates the in-service and failed trucks. The proposed method was also tested on a large open-source hard drive dataset, and it showed promising results.", "title": "" }, { "docid": "e2a678afb38072bb51168aa79d261303", "text": "The rapid evolution of technology has changed the face of education, especially when technology is combined with adequate pedagogical foundations. This combination has created new opportunities for improving the quality of teaching and learning experiences. More recently, Augmented Reality (AR) has become one of the latest technologies that offer a new way to educate. Due to the rising popularity of mobile devices globally, the widespread use of AR on mobile devices such as smartphones and tablets has become a growing phenomenon. Therefore, this paper reviews the literature on mobile augmented reality and exemplifies its potential for education.", "title": "" } ]
scidocsrr
4e8caad36b058ea19d770e83c2b03423
Model-Checking Algorithms for Continuous-Time Markov Chains
[ { "docid": "f8d256bf6fea179847bfb4cc8acd986d", "text": "We present a logic for stating properties such as, “after a request for service there is at least a 98% probability that the service will be carried out within 2 seconds”. The logic extends the temporal logic CTL by Emerson, Clarke and Sistla with time and probabilities. Formulas are interpreted over discrete time Markov chains. We give algorithms for checking that a given Markov chain satisfies a formula in the logic. The algorithms require a polynomial number of arithmetic operations, in size of both the formula and the Markov chain. A simple example is included to illustrate the algorithms.", "title": "" } ]
[ { "docid": "c7502c4fe6d06993c3075043c0e6a3e7", "text": "Wireless communication applications have driven the development of high-resolution A/D converters (ADCs) with high sample rates, good AC performance and IF sampling capability to enable wider cellular coverage, more carriers, and to simplify the system design. We describe a 16b ADC with a sample rate up to 250MS/s that employs background calibration of the residue amplifier (RA) gain errors. The ADC has an integrated input buffer and is fabricated on a 0.18µm BiCMOS process. When the input buffer is bypassed, the SNR is 77.5dB and the SFDR is 90dB at 10MHz input frequency. With the input buffer, the SNR is 76dB and the SFDR is 95dB. The ADC consumes 850mW from a 1.8V supply, and the input buffer consumes 150mW from a 3V supply. The input span is 2.6Vp-p and the jitter is 60fs.", "title": "" }, { "docid": "44368062de68f6faed57d43b8e691e35", "text": "In this paper we explore one of the key aspects in building an emotion recognition system: generating suitable feature representations. We generate feature representations from both acoustic and lexical levels. At the acoustic level, we first extract low-level features such as intensity, F0, jitter, shimmer and spectral contours etc. We then generate different acoustic feature representations based on these low-level features, including statistics over these features, a new representation derived from a set of low-level acoustic codewords, and a new representation from Gaussian Supervectors. At the lexical level, we propose a new feature representation named emotion vector (eVector). We also use the traditional Bag-of-Words (BoW) feature. We apply these feature representations for emotion recognition and compare their performance on the USC-IEMOCAP database. We also combine these different feature representations via early fusion and late fusion. Our experimental results show that late fusion of both acoustic and lexical features achieves four-class emotion recognition accuracy of 69.2%.", "title": "" }, { "docid": "5c2b73276c9f845d7eef5c9dc4cea2a1", "text": "The detection of QR codes, a type of 2D barcode, as described in the literature consists merely in the determination of the boundaries of the symbol region in images obtained with the specific intent of highlighting the symbol. However, many important applications such as those related with accessibility technologies or robotics, depends on first detecting the presence of a barcode in an environment. We employ Viola-Jones rapid object detection framework to address the problem of finding QR codes in arbitrarily acquired images. This framework provides an efficient way to focus the detection process in promising regions of the image and a very fast feature calculation approach for pattern classification. An extensive study of variations in the parameters of the framework for detecting finder patterns, present in three corners of every QR code, was carried out. Detection accuracy superior to 90%, with controlled number of false positives, is achieved. We also propose a post-processing algorithm that aggregates the results of the first step and decides if the detected finder patterns are part of QR code symbols. This two-step processing is done in real time.", "title": "" }, { "docid": "27d1fe251154f094962a919ed48d41f7", "text": "Partially Observable Markov Decision Processes for Spoken Dialogue Management Jason D. Williams The design of robust spoken dialog systems is a significant research challenge. 
Speech recognition errors are common and hence the state of the conversation can never be known with certainty, and users can react in a variety of ways making deterministic forward planning impossible. This thesis argues that a partially observable Markov decision process (POMDP) provides a principled formalism for modelling human-machine conversation. Further, this thesis introduces the SDS-POMDP framework which enables statistical models of users’ behavior and the speech recognition process to be combined with handcrafted heuristics into a single framework that supports global optimization. A combination of theoretical and empirical studies confirm that the SDS-POMDP framework unifies and extends existing techniques, such as local use of confidence score, maintaining parallel dialog hypotheses, and automated planning. Despite its potential, the SDS-POMDP model faces important scalability challenges, and this thesis next presents two methods for scaling up the SDS-POMDP model to realistically sized spoken dialog systems. First, summary point-based value iteration (SPBVI) enables a single slot (a dialog variable such as a date, time, or location) to take on an arbitrary number of values by restricting the planner to consider only the likelihood of the best hypothesis. Second, composite SPBVI (CSPBVI) enables dialog managers consisting of many slots to be created by planning locally within each slot, and combining these local plans into a global plan using a simple heuristic. Results from dialog simulation show that these techniques enable the SDS-POMDP model to handle real-world dialog problems while continuing to out-perform established techniques and hand-crafted dialog managers. Finally, application to a real spoken dialog system is demonstrated.", "title": "" }, { "docid": "103951fcfead2de24396e7ad81ec0221", "text": "Numerous applications in scientific, medical, and military areas demand robust, compact, sensitive, and fast ultraviolet (UV) detection. Our (Al)GaN photodiodes pose high avalanche gain and single-photon detection efficiency that can measure up to these requirements. Inherit advantage of back-illumination in our devices offers an easier integration and layout packaging via flip-chip hybridization for UV focal plane arrays that may find uses from space applications to hostile-agent detection. Thanks to the recent (Al)GaN material optimization, III-Nitrides, known to have fast carrier dynamics and short relaxation times, are employed in (Al)GaN based superlattices that absorb in near-infrared regime. In this work, we explain the origins of our high performance UV APDs, and employ our (Al)GaN material knowledge for intersubband applications. We also discuss the extension of this material engineering into the far infrared, and even the terahertz (THz) region.", "title": "" }, { "docid": "5bd9b0de217f2a537a5fadf99931d149", "text": "A linear programming (LP) method for security dispatch and emergency control calculations on large power systems is presented. The method is reliable, fast, flexible, easy to program, and requires little computer storage. It works directly with the normal power-system variables and limits, and incorporates the usual sparse matrix techniques. 
An important feature of the method is that it handles multi-segment generator cost curves neatly and efficiently.", "title": "" }, { "docid": "901fa78a4d06c365d13169859caeae69", "text": "Although the number of cloud projects has dramatically increased over the last few years, ensuring the availability and security of project data, services, and resources is still a crucial and challenging research issue. Distributed denial of service (DDoS) attacks are the second most prevalent cybercrime attacks after information theft. DDoS TCP flood attacks can exhaust the cloud’s resources, consume most of its bandwidth, and damage an entire cloud project within a short period of time. The timely detection and prevention of such attacks in cloud projects are therefore vital, especially for eHealth clouds. In this paper, we present a new classifier system for detecting and preventing DDoS TCP flood attacks (CS_DDoS) in public clouds. The proposed CS_DDoS system offers a solution to securing stored records by classifying the incoming packets and making a decision based on the classification results. During the detection phase, the CS_DDOS identifies and determines whether a packet is normal or originates from an attacker. During the prevention phase, packets, which are classified as malicious, will be denied to access the cloud service and the source IP will be blacklisted. The performance of the CS_DDoS system is compared using the different classifiers of the least squares support vector machine (LS-SVM), naïve Bayes, K-nearest, and multilayer perceptron. The results show that CS_DDoS yields the best performance when the LS-SVM classifier is adopted. It can detect DDoS TCP flood attacks with about 97% accuracy and with a Kappa coefficient of 0.89 when under attack from a single source, and 94% accuracy with a Kappa coefficient of 0.9 when under attack from multiple attackers. Finally, the results are discussed in terms of accuracy and time complexity, and validated using a K-fold cross-validation model.", "title": "" }, { "docid": "5ef37c0620e087d3552499e2b9b4fc84", "text": "A parallel concatenated coding scheme consists of two simple constituent systematic encoders linked by an interleaver. The input bits to the first encoder are scrambled by the interleaver before entering the second encoder. The codeword of the parallel concatenated code consists of the input bits to the first encoder followed by the parity check bits of both encoders. This construction can be generalized to any number of constituent codes. Parallel concatenated schemes employing two convolutional codes as constituent codes, in connection with an iterative decoding algorithm of complexity comparable to that of the constituent codes, have been recently shown to yield remarkable coding gains close to theoretical limits. They have been named, and are known as, “turbo codes.” We propose a method to evaluate an upper bound to the bit error probability of a parallel concatenated coding scheme averaged over all interleavers of a given length. The analytical bounding technique is then used to shed some light on some crucial questions which have been floating around in the communications community since the proposal of turbo codes.", "title": "" }, { "docid": "5c85ce9e0330a2f55ecdaf65b00b517b", "text": "Rectal cancer management benefits from a multidisciplinary approach involving medical and radiation oncology as well as surgery. 
Presented are the current dominant issues in rectal cancer management with an emphasis on our treatment algorithm at the Lankenau Medical Center. By basing surgical decisions on the downstaged rectal cancer we explore how sphincter preservation can be extended even for cancers of the distal 3 cm of the rectum. TATA and TEM techniques can be used to effectively treat cancer from an oncologic standpoint while maintaining a high quality of life through sphincter preservation and avoidance of a permanent colostomy. We review the results of our efforts, including the use of advanced laparoscopy in the surgical management of low rectal cancers.", "title": "" }, { "docid": "555e3bbc504c7309981559a66c584097", "text": "The hippocampus has been implicated in the regulation of anxiety and memory processes. Nevertheless, the precise contribution of its ventral (VH) and dorsal (DH) division in these issues still remains a matter of debate. The Trial 1/2 protocol in the elevated plus-maze (EPM) is a suitable approach to assess features associated with anxiety and memory. Information about the spatial environment on initial (Trial 1) exploration leads to a subsequent increase in open-arm avoidance during retesting (Trial 2). The objective of the present study was to investigate whether transient VH or DH deactivation by lidocaine microinfusion would differently interfere with the performance of EPM-naive and EPM-experienced rats. Male Wistar rats were bilaterally-implanted with guide cannulas aimed at the VH or the DH. One-week after surgery, they received vehicle or lidocaine 2.0% in 1.0 microL (0.5 microL per side) at pre-Trial 1, post-Trial 1 or pre-Trial 2. There was an increase in open-arm exploration after the intra-VH lidocaine injection on Trial 1. Intra-DH pre-Trial 2 administration of lidocaine also reduced the open-arm avoidance. No significant changes were observed in enclosed-arm entries, an EPM index of general exploratory activity. The cautious exploration of potentially dangerous environment requires VH functional integrity, suggesting a specific role for this region in modulating anxiety-related behaviors. With regard to the DH, it may be preferentially involved in learning and memory since the acquired response of inhibitory avoidance was no longer observed when lidocaine was injected pre-Trial 2.", "title": "" }, { "docid": "3f98e2683b83a7312dc4dd6bf1f717aa", "text": "How do comments on student writing from peers compare to those from subject-matter experts? This study examined the types of comments that reviewers produce as well as their perceived helpfulness. Comments on classmates’ papers were collected from two undergraduate and one graduate-level psychology course. The undergraduate papers in one of the courses were also commented on by an independent psychology instructor experienced in providing feedback to students on similar writing tasks. The comments produced by students at both levels were shorter than the instructor’s. The instructor’s comments were predominantly directive and rarely summative. The undergraduate peers’ comments were more mixed in type; directive and praise comments were the most frequent. Consistently, undergraduate peers found directive and praise comments helpful. 
The helpfulness of the directive comments was also endorsed by a writing expert.", "title": "" }, { "docid": "c3131be444d316b57baa685067282fef", "text": "In general, executive function can be thought of as the set of abilities required to effortfully guide behavior toward a goal, especially in nonroutine situations. Psychologists are interested in expanding the understanding of executive function because it is thought to be a key process in intelligent behavior, it is compromised in a variety of psychiatric and neurological disorders, it varies across the life span, and it affects performance in complicated environments, such as the cockpits of advanced aircraft. This article provides a brief introduction to the concept of executive function and discusses how it is assessed and the conditions under which it is compromised. A short overview of the diverse theoretical viewpoints regarding its psychological and biological underpinnings is also provided. The article concludes with a consideration of how a multilevel approach may provide a more integrated account of executive function than has been previously available. KEYWORDS—executive function; frontal lobe; prefrontal cortex; inhibition; task switching; working memory; attention; top-down control Like other psychological constructs, such as memory, executive function is multidimensional. As such, there exists a variety of models that provide varying viewpoints as to its basic component processes. Nonetheless, common across most of them is the idea that executive function is a process used to effortfully guide behavior toward a goal, especially in nonroutine situations. Various functions or abilities are thought to fall under the rubric of executive function. These include prioritizing and sequencing behavior, inhibiting familiar or stereotyped behaviors, creating and maintaining an idea of what task or information is most relevant for current purposes (often referred to as an attentional or mental set), providing resistance to information that is distracting or task irrelevant, switching between task goals, utilizing relevant information in support of decision making, categorizing or otherwise abstracting common elements across items, and handling novel information or situations. As can be seen from this list, the functions that fall under the category of executive function are indeed wide ranging. ASSESSING EXECUTIVE FUNCTION The very nature of executive function makes it difficult to measure in the clinic or the laboratory; it involves an individual guiding his or her behavior, especially in novel, unstructured, and nonroutine situations that require some degree of judgment. In contrast, standard testing situations are structured—participants are explicitly told what the task is, given rules for performing the task, and provided with information on task constraints (e.g., time limits). Since executive function covers a wide domain of skills, there is no single agreed-upon ‘‘gold standard’’ test of executive function. Rather, different tasks are typically used to assess its different facets. One classic test often used to assess the compromise of executive function after brain injury is the Wisconsin Card Sorting Test. This task is thought to measure a variety of executive subprocesses, including the ability to infer the categories that should guide behavior, the ability to create an attentional set based on those abstract categories, and the ability to switch one’s attentional set as task demands change. 
Briefly, individuals must deduce from the experimenter’s response the rule by which the cards should be sorted (rather than being told the rule explicitly; see Fig. 1a). After the initial rule is learned successfully, the examiner changes the rule without informing the individual. At this point the old rule must be rejected, the new rule discovered, and a switch made from the old rule to the new. The ability to exhibit such flexible readjustment of behavior is a cardinal characteristic of executive function. Individuals with frontal lobe damage and children younger than 4 years (who are typically tested on a two-dimensional version of the sorting task) tend to persist in sorting items according to the previous and now inappropriate rule. Cognitive psychologists have attempted to disentangle the different executive subprocesses that underlie performance on the Wisconsin Card Sorting Test, as well as to identify other executive subprocesses. For example, the ability to switch mental sets has been studied by presenting individuals with multidimensional stimuli (e.g., a colored numeral) along with a cue that indicates the attribute on which a response should be based (e.g., color, or whether the number is odd or even). Individuals are slower to respond and make more errors on trials requiring a task switch (e.g., categorize by color preceded by categorize by odd/even) than they do on those that do not (e.g., categorize by color preceded by categorize by color), indicating that task switching requires executive control (Monsell, 2003). In other executive tasks, decisions must be based on task-relevant information in the face of distracting information. One such measure of this ability is the Stroop task, in which a word’s color must be identified while ignoring the word itself. Since word reading is more automatic than color naming, executive control is required to override the tendency to read or to respond on the basis of the word rather than the ink color. The need for such control is reflected in slower responses when the word names a competing ink color (e.g., the word ‘red’ printed in blue ink) than when it does not (e.g., the word ‘sum’ in red ink or the word ‘red’ in red ink). Other tasks, such as the Tower of London task, examine the ability to plan and sequence behavior towards a goal. In this task, a start state and a goal state are shown, and the individual must determine the shortest number of moves required to get the balls from the starting state to the goal state (see Fig. 1b). An inability to solve the problems, taking more steps than necessary, and/or impulsively starting to move the balls before planning are all symptoms of executive dysfunction on this task. THE COMPROMISE OF EXECUTIVE FUNCTION Psychologists are interested in executive function because it is critical for self-directed behavior, so much so that the greater the decrement in executive function after brain damage, the poorer the ability to live independently (Hanks, Rapport, Millis, & Deshpande, 1999). Normal children, adolescents, and older adults also show decrements in executive function. Most notable in children is their perseveration when required to switch tasks. 
Although they can correctly answer questions about what they should do, they nonetheless are often unable to produce the correct motor response (Zelazo, Fyre, & Rapus, 1996). Similarly, parents often wonder why teenagers take risks and make imprudent decisions even though they seem to ‘‘know’’ better. This demonstrated knowledge about abstract rules coupled with an inability to implement them, especially in the face of distracting or conflicting information, is reminiscent of that observed in children. The ability to plan ahead in multistep processes, to learn about contingences between reward and punishment in multifaceted decision-making tasks, and to exert inhibitory control and reduce impulsive behavior continues to increase during the teenage years and, in fact, well into the early 20s (Steinberg, 2007). Executive function is also the cognitive ability most affected by aging (e.g., Treitz, Heyder, & Daum, 2007), with even more severe decline associated with mild cognitive impairment and Alzheimer’s disease. Finally, executive function is compromised across a large number of psychiatric illnesses, including schizophrenia, bipolar disorder, ?", "title": "" }, { "docid": "8b054ce1961098ec9c7d66db33c53abd", "text": "This paper addresses the problem of single image depth estimation (SIDE), focusing on improving the accuracy of deep neural network predictions. In a supervised learning scenario, the quality of predictions is intrinsically related to the training labels, which guide the optimization process. For indoor scenes, structured-light-based depth sensors (e.g. Kinect) are able to provide dense, albeit short-range, depth maps. On the other hand, for outdoor scenes, LiDARs are still considered the standard sensor, which comparatively provide much sparser measurements, especially in areas further away. Rather than modifying the neural network architecture to deal with sparse depth maps, this article introduces a novel densification method for depth maps, using the Hilbert Maps framework. A continuous occupancy map is produced based on 3D points from LiDAR scans, and the resulting reconstructed surface is projected into a 2D depth map with arbitrary resolution. Experiments conducted with various subsets of the KITTI dataset show a significant improvement produced by the proposed Sparse-to-Continuous technique, without the introduction of extra information into the training stage.", "title": "" }, { "docid": "65dc64d7ea66d8c1a37c668741967496", "text": "Recently, path norm was proposed as a new capacity measure for neural networks with Rectified Linear Unit (ReLU) activation function, which takes the rescaling-invariant property of ReLU into account. It has been shown that the generalization error bound in terms of the path norm explains the empirical generalization behaviors of the ReLU neural networks better than that of other capacity measures. Moreover, optimization algorithms which take path norm as the regularization term to the loss function, like Path-SGD, have been shown to achieve better generalization performance. However, the path norm counts the values of all paths, and hence the capacity measure based on path norm could be improperly influenced by the dependency among different paths. It is also known that each path of a ReLU network can be represented by a small group of linearly independent basis paths with multiplication and division operation, which indicates that the generalization behavior of the network only depends on only a few basis paths. 
Motivated by this, we propose a new norm Basis-path Norm based on a group of linearly independent paths to measure the capacity of neural networks more accurately. We establish a generalization error bound based on this basis path norm, and show it explains the generalization behaviors of ReLU networks more accurately than previous capacity measures via extensive experiments. In addition, we develop optimization algorithms which minimize the empirical risk regularized by the basis-path norm. Our experiments on benchmark datasets demonstrate that the proposed regularization method achieves clearly better performance on the test set than the previous regularization approaches.", "title": "" }, { "docid": "cfc22740e3042814356b3104a2ece892", "text": "An X-band pulsed solid-state power amplifier (PSSPA) with high output power and high power added efficiency (PAE) is reported in this article. The high power amplifier (HPA) was implemented by a cascade approach, including an MMIC driving amplifier, an internally matched medium-power and a high-power GaAs FET. To achieve optimum electrical performance of the proposed PSSPA, some considerations of the Grounding, DC Blocking Circuit, bias network, microwave absorber, and the isolation blocks, have been taken in our design. Under the pulse condition of 8 KHz pulse repeat frequency (PRF) and 10% of duty cycle, the pulse output power ranges between 45.8 and 46.6 dBm, and the PAE varies between 35.8% and 40.5% from 9.5 to 10.5 GH.", "title": "" }, { "docid": "70294e6680ad7d662596262c4068a352", "text": "As cancer development involves pathological vessel formation, 16 angiogenesis markers were evaluated as potential ovarian cancer (OC) biomarkers. Blood samples collected from 172 patients were divided based on histopathological result: OC (n = 38), borderline ovarian tumours (n = 6), non-malignant ovarian tumours (n = 62), healthy controls (n = 50) and 16 patients were excluded. Sixteen angiogenesis markers were measured using BioPlex Pro Human Cancer Biomarker Panel 1 immunoassay. Additionally, concentrations of cancer antigen 125 (CA125) and human epididymis protein 4 (HE4) were measured in patients with adnexal masses using electrochemiluminescence immunoassay. In the comparison between OC vs. non-OC, osteopontin achieved the highest area under the curve (AUC) of 0.79 (sensitivity 69%, specificity 78%). Multimarker models based on four to six markers (basic fibroblast growth factor-FGF-basic, follistatin, hepatocyte growth factor-HGF, osteopontin, platelet-derived growth factor AB/BB-PDGF-AB/BB, leptin) demonstrated higher discriminatory ability (AUC 0.80-0.81) than a single marker (AUC 0.79). When comparing OC with benign ovarian tumours, six markers had statistically different expression (osteopontin, leptin, follistatin, PDGF-AB/BB, HGF, FGF-basic). Osteopontin was the best single angiogenesis marker (AUC 0.825, sensitivity 72%, specificity 82%). A three-marker panel consisting of osteopontin, CA125 and HE4 better discriminated the groups (AUC 0.958) than HE4 or CA125 alone (AUC 0.941 and 0.932, respectively). Osteopontin should be further investigated as a potential biomarker in OC screening and differential diagnosis of ovarian tumours. Adding osteopontin to a panel of already used biomarkers (CA125 and HE4) significantly improves differential diagnosis between malignant and benign ovarian tumours.", "title": "" }, { "docid": "8d7e778331feccc94a730b6cf21a2063", "text": "Data mining is a process of inferring knowledge from such huge data. 
Data mining has three major components: clustering or classification, association rules, and sequence analysis. Put simply, classification and clustering analyze a set of data and generate a set of grouping rules that can be used to classify future data. Data mining is the process of extracting information from a data set and transforming it into an understandable structure. It is the computational process of discovering patterns in large data sets using methods at the intersection of artificial intelligence, machine learning, statistics, and database systems. The actual data mining task is the automatic or semi-automatic analysis of large quantities of data to extract previously unknown interesting patterns. Data mining involves six common classes of tasks: anomaly detection, association rule learning, clustering, classification, regression, and summarization. Classification is a major technique in data mining and is widely used in various fields; it is a machine learning technique used to predict group membership for data instances. In this paper, we present the basic classification techniques, covering several major kinds of classification methods including decision tree induction, Bayesian networks, and the k-nearest neighbor classifier. The goal of this study is to provide a comprehensive review of different classification techniques in data mining.", "title": "" }, { "docid": "fc26ebb8329c84d96a714065117dda02", "text": "Technological advances in genomics and imaging have led to an explosion of molecular and cellular profiling data from large numbers of samples. This rapid increase in biological data dimension and acquisition rate is challenging conventional analysis strategies. Modern machine learning methods, such as deep learning, promise to leverage very large data sets for finding hidden structure within them, and for making accurate predictions. In this review, we discuss applications of this new breed of analysis approaches in regulatory genomics and cellular imaging. We provide background on what deep learning is, and the settings in which it can be successfully applied to derive biological insights. In addition to presenting specific applications and providing tips for practical use, we also highlight possible pitfalls and limitations to guide computational biologists when and how to make the most use of this new technology.", "title": "" }, { "docid": "d272cf01340c8dcc3c24651eaf876926", "text": "We propose a new method for learning from a single demonstration to solve hard exploration tasks like the Atari game Montezuma’s Revenge. Instead of imitating human demonstrations, as proposed in other recent works, our approach is to maximize rewards directly. Our agent is trained using off-the-shelf reinforcement learning, but starts every episode by resetting to a state from a demonstration. By starting from such demonstration states, the agent requires much less exploration to learn a game compared to when it starts from the beginning of the game at every episode. We analyze reinforcement learning for tasks with sparse rewards in a simple toy environment, where we show that the run-time of standard RL methods scales exponentially in the number of states between rewards. Our method reduces this to quadratic scaling, opening up many tasks that were previously infeasible. 
We then apply our method to Montezuma’s Revenge, for which we present a trained agent achieving a high-score of 74,500, better than any previously published result.", "title": "" }, { "docid": "5fee56839283d60178da7d5e278c4ef4", "text": "BACKGROUND\nMost obstetric complications occur unpredictably during the time of delivery, but they can be prevented with proper medical care in the health facilities. Despite the Ethiopian government's efforts to expand health service facilities and promote health institution-based delivery service in the country, an estimated 85% of births still take place at home.\n\n\nOBJECTIVE\nThe review was conducted with the aim of generating the best evidence on the determinants of institutional delivery service utilization in Ethiopia.\n\n\nMETHODS\nThe reviewed studies were accessed through electronic web-based search strategy from PubMed, HINARI, Mendeley reference manager, Cochrane Library for Systematic Reviews, and Google Scholar. Review Manager V5.3 software was used for meta-analysis. Mantel-Haenszel odds ratios (ORs) and their 95% confidence intervals (CIs) were calculated. Heterogeneity of the study was assessed using I (2) test.\n\n\nRESULTS\nPeople living in urban areas (OR =13.16, CI =1.24, 3.68), with primary and above educational level of the mother and husband (OR =4.95, CI =2.3, 4. 8, and OR =4.43, CI =1.14, 3.36, respectively), who encountered problems during pregnancy (OR =2.83, CI =4.54, 7.39), and living at a distance <5 km from nearby health facility (OR =2.6, CI =3.33, 6.57) showed significant association with institutional delivery service utilization. Women's autonomy was not significantly associated with institutional delivery service utilization.\n\n\nCONCLUSION AND RECOMMENDATION\nDistance to health facility and problems during pregnancy were factors positively and significantly associated with institutional delivery service utilization. Promoting couples education beyond primary education regarding the danger signs of pregnancy and benefits of institutional delivery through available communication networks such as health development army and promotion of antenatal care visits and completion of four standard visits by pregnant women were recommended.", "title": "" } ]
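The single-demonstration passage in the list above hinges on one mechanism: every training episode is reset to a state drawn from a demonstration rather than to the game's start. A rough, hedged sketch of that loop is below; the `env.set_state` snapshot restore, the `agent` interface, and the widening start window are illustrative assumptions, not the paper's actual system.

```python
import random

def train_from_demo(env, agent, demo_states, episodes=1000, start_window=10):
    """Hypothetical sketch: start each episode from a state near the end of a demonstration,
    gradually widening the window toward the start as the agent improves."""
    for ep in range(episodes):
        # Sample a starting point from the last `start_window` demonstration states.
        idx = random.randint(max(0, len(demo_states) - start_window), len(demo_states) - 1)
        env.set_state(demo_states[idx])              # assumed emulator snapshot restore
        obs, done, total_reward = env.observe(), False, 0.0
        while not done:
            action = agent.act(obs)
            obs, reward, done = env.step(action)
            agent.record(obs, action, reward, done)  # feed an off-the-shelf RL learner
            total_reward += reward
        agent.update()
        if total_reward > 0:                         # crude success signal
            start_window += 1                        # move start points further back in the demo
    return agent
```

Starting close to the reward and only then backing up toward the beginning is what turns an exponential exploration problem into a much more tractable one in the sparse-reward analysis quoted above.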
scidocsrr
12eeaab39f4c91492c826383ffa486bb
Prediction of Financial Performance Using Genetic Algorithm and Associative Rule Mining
[ { "docid": "ec58ee349217d316f87ff684dba5ac2b", "text": "This paper is a survey of inductive rule learning algorithms that use a separate-and-conquer strategy. This strategy can be traced back to the AQ learning system and still enjoys popularity as can be seen from its frequent use in inductive logic programming systems. We will put this wide variety of algorithms into a single framework and analyze them along three different dimensions, namely their search, language and overfitting avoidance biases.", "title": "" } ]
[ { "docid": "1114300ff9cab6dc29e80c4d22e45e1e", "text": "Single- and dual-feed, dual-frequency, low-profile antennas with independent tuning using varactor diodes have been demonstrated. The dual-feed planar inverted F-antenna (PIFA) has two operating frequencies which can be independently tuned from 0.7 to 1.1 GHz and from 1.7 to 2.3 GHz with better than -10 dB impedance match. The isolation between the high-band and the low-band ports is >13 dB; hence, one resonant frequency can be tuned without affecting the other. The single-feed antenna has two resonant frequencies, which can be independently tuned from 1.2 to 1.6 GHz and from 1.6 to 2.3 GHz with better than -10 dB impedance match for most of the tuning range. The tuning is done using varactor diodes with a capacitance range from 0.8 to 3.8 pF, which is compatible with RF MEMS devices. The antenna volumes are 63 × 100 × 3.15 mm3 on er = 3.55 substrates and the measured antenna efficiencies vary between 25% and 50% over the tuning range. The application areas are in carrier aggregation systems for fourth generation (4G) wireless systems.", "title": "" }, { "docid": "50ef3775f9d18fe368c166cfd3ff2bca", "text": "In many applications that track and analyze spatiotemporal data, movements obey periodic patterns; the objects follow the same routes (approximately) over regular time intervals. For example, people wake up at the same time and follow more or less the same route to their work everyday. The discovery of hidden periodic patterns in spatiotemporal data, apart from unveiling important information to the data analyst, can facilitate data management substantially. Based on this observation, we propose a framework that analyzes, manages, and queries object movements that follow such patterns. We define the spatiotemporal periodic pattern mining problem and propose an effective and fast mining algorithm for retrieving maximal periodic patterns. We also devise a novel, specialized index structure that can benefit from the discovered patterns to support more efficient execution of spatiotemporal queries. We evaluate our methods experimentally using datasets with object trajectories that exhibit periodicity.", "title": "" }, { "docid": "5c4c265df2d24350340eb956191417ae", "text": "When a remotely sited wind farm is connected to the utility power system through a distribution line, the overcurrent relay at the common coupling point needs a directional feature. This paper presents a method for estimating the direction of fault in such radial distribution systems using phase change in current. The difference in phase angle between the positive-sequence component of the current during fault and prefault conditions has been found to be a good indicator of the fault direction in a three-phase system. A rule base formed for the purpose decides the location of fault with respect to the relay in a distribution system. Such a strategy reduces the cost of the voltage sensor and/or connection for a protection scheme which is of relevance in emerging distributed-generation systems. The algorithm has been tested through simulation for different radial distribution systems.", "title": "" }, { "docid": "2b8aa68835bc61f3d0b5da39441185c9", "text": "This position paper explores the threat to individual privacy due to the widespread use of consumer drones. Present day consumer drones are equipped with sensors such as cameras and microphones, and their types and numbers can be well expected to increase in future. 
Drone operators have absolute control on where the drones fly and what the on-board sensors record with no options for bystanders to protect their privacy. This position paper proposes a policy language that allows homeowners, businesses, governments, and privacy-conscious individuals to specify location access-control for drones, and discusses how these policy-based controls might be realized in practice. This position paper also explores the potential future problem of managing consumer drone traffic that is likely to emerge with increasing use of consumer drones for various tasks. It proposes a privacy preserving traffic management protocol for directing drones towards their respective destinations without requiring drones to reveal their destinations.", "title": "" }, { "docid": "c86e4bf0577f49d6d4384379651c7d9a", "text": "The following paper discusses exploratory factor analysis and gives an overview of the statistical technique and how it is used in various research designs and applications. A basic outline of how the technique works and its criteria, including its main assumptions are discussed as well as when it should be used. Mathematical theories are explored to enlighten students on how exploratory factor analysis works, an example of how to run an exploratory factor analysis on SPSS is given, and finally a section on how to write up the results is provided. This will allow readers to develop a better understanding of when to employ factor analysis and how to interpret the tables and graphs in the output.", "title": "" }, { "docid": "16987d81cd90db3c0abe2631de9e737c", "text": "Docker containers are becoming an attractive implementation choice for next-generation microservices-based applications. When provisioning such an application, container (microservice) instances need to be created from individual container images. Starting a container on a node, where images are locally available, is fast but it may not guarantee the quality of service due to insufficient resources. When a collection of nodes are available, one can select a node with sufficient resources. However, if the selected node does not have the required image, downloading the image from a different registry increases the provisioning time. Motivated by these observations, in this paper, we present CoMICon, a system for co-operative management of Docker images among a set of nodes. The key features of CoMICon are: (1) it enables a co-operative registry among a set of nodes, (2) it can store or delete images partially in the form of layers, (3) it facilitates the transfer of image layers between registries, and (4) it enables distributed pull of an image while starting a container. Using these features, we describe—(i) high availability management of images and (ii) provisioning management of distributed microservices based applications. We extensively evaluate the performance of CoMICon using 142 real, publicly available images from Docker hub. In contrast to state-of-the-art full image based approach, CoMICon can increase the number of highly available images up to 3x while reducing the application provisioning time by 28% on average.", "title": "" }, { "docid": "685a9dfa265a6c2ce5a9c56e1e193800", "text": "It has been postulated that bilingualism may act as a cognitive reserve and recent behavioral evidence shows that bilinguals are diagnosed with dementia about 4-5 years later compared to monolinguals. 
In the present study, we investigated the neural basis of these putative protective effects in a group of aging bilinguals as compared to a matched monolingual control group. For this purpose, participants completed the Erikson Flanker task and their performance was correlated to gray matter (GM) volume in order to investigate if cognitive performance predicts GM volume specifically in areas affected by aging. We performed an ex-Gaussian analysis on the resulting RTs and report that aging bilinguals performed better than aging monolinguals on the Flanker task. Bilingualism was overall associated with increased GM in the ACC. Likewise, aging induced effects upon performance correlated only for monolinguals to decreased gray matter in the DLPFC. Taken together, these neural regions might underlie the benefits of bilingualism and act as a neural reserve that protects against the cognitive decline that occurs during aging.", "title": "" }, { "docid": "154528ab93e89abe965b6abd93af6a13", "text": "We investigate the geometry of that function in the plane or 3-space, which associates to each point the square of the shortest distance to a given curve or surface. Particular emphasis is put on second order Taylor approximants and other local quadratic approximants. Their key role in a variety of geometric optimization algorithms is illustrated at hand of registration in Computer Vision and surface approximation.", "title": "" }, { "docid": "7faed0b112a15a3b53c94df44a1bcb26", "text": "Since the stability of the method of fundamental solutions (MFS) is a severe issue, the estimation on the bounds of condition number Cond is important to real application. In this paper, we propose the new approaches for deriving the asymptotes of Cond, and apply them for the Dirichlet problem of Laplace’s equation, to provide the sharp bound of Cond for disk domains. Then the new bound of Cond is derived for bounded simply connected domains with mixed types of boundary conditions. Numerical results are reported for Motz’s problem by adding singular functions. The values of Cond grow exponentially with respect to the number of fundamental solutions used. Note that there seems to exist no stability analysis for the MFS on non-disk (or non-elliptic) domains. Moreover, the expansion coefficients obtained by the MFS are oscillatingly large, to cause the other kind of instability: subtraction cancelation errors in the final harmonic solutions.", "title": "" }, { "docid": "a9000262e389ba8ab09f8d6cd2b2b60a", "text": "CONTEXT\nCaffeine, often in the form of coffee, is frequently used as a supplement by athletes in an attempt to facilitate improved performance during exercise.\n\n\nPURPOSE\nTo investigate the effectiveness of coffee ingestion as an ergogenic aid prior to a 1-mile (1609 m) race.\n\n\nMETHODS\nIn a double-blind, randomized, cross-over, and placebo-controlled design, 13 trained male runners completed a 1-mile race 60 minutes following the ingestion of 0.09 g·kg-1 coffee (COF), 0.09 g·kg-1 decaffeinated coffee (DEC), or a placebo (PLA). All trials were dissolved in 300 mL of hot water.\n\n\nRESULTS\nThe race completion time was 1.3% faster following the ingestion of COF (04:35.37 [00:10.51] min:s.ms) compared with DEC (04:39.14 [00:11.21] min:s.ms; P = .018; 95% confidence interval [CI], -0.11 to -0.01; d = 0.32) and 1.9% faster compared with PLA (04:41.00 [00:09.57] min:s.ms; P = .006; 95% CI, -0.15 to -0.03; d = 0.51). 
A large trial and time interaction for salivary caffeine concentration was observed (P < .001; [Formula: see text]), with a very large increase (6.40 [1.57] μg·mL-1; 95% CI, 5.5-7.3; d = 3.86) following the ingestion of COF. However, only a trivial difference between DEC and PLA was observed (P = .602; 95% CI, -0.09 to 0.03; d = 0.17). Furthermore, only trivial differences were observed for blood glucose (P = .839; [Formula: see text]) and lactate (P = .096; [Formula: see text]) and maximal heart rate (P = .286; [Formula: see text]) between trials.\n\n\nCONCLUSIONS\nThe results of this study show that 60 minutes after ingesting 0.09 g·kg-1 of caffeinated coffee, 1-mile race performance was enhanced by 1.9% and 1.3% compared with placebo and decaffeinated coffee, respectively, in trained male runners.", "title": "" }, { "docid": "3085d2de614b6816d7a66cb62823824e", "text": "Plastic debris is known to undergo fragmentation at sea, which leads to the formation of microscopic particles of plastic; the so called 'microplastics'. Due to their buoyant and persistent properties, these microplastics have the potential to become widely dispersed in the marine environment through hydrodynamic processes and ocean currents. In this study, the occurrence and distribution of microplastics was investigated in Belgian marine sediments from different locations (coastal harbours, beaches and sublittoral areas). Particles were found in large numbers in all samples, showing the wide distribution of microplastics in Belgian coastal waters. The highest concentrations were found in the harbours where total microplastic concentrations of up to 390 particles kg(-1) dry sediment were observed, which is 15-50 times higher than reported maximum concentrations of other, similar study areas. The depth profile of sediment cores suggested that microplastic concentrations on the beaches reflect the global plastic production increase.", "title": "" }, { "docid": "ecb1373e28c1e68a13727be15484e785", "text": "A wideband, pattern-reconfigurable antenna is reported that is, for example, a good candidate for ceiling-mounted indoor wireless systems. Switchable linearly polarized broadside and conical radiation patterns are achieved by systematically integrating a wideband low-profile monopolar patch antenna with a wideband L-probe fed patch antenna. The monopolar patch acts as the ground for the L-probe fed patch, which is fed with a coaxial cable that replaces one shorting via of the monopolar patch to avoid deterioration of the conical-beam pattern. A simple switching feed network facilitates the pattern reconfigurability. A prototype was fabricated and tested. The measured results confirm the predicted wideband radiation performance. The operational impedance bandwidth, i.e., |S 11| ≤  −10 dB, is obtained as the overlap of the bands associated with both pattern modalities. It is wide, from 2.25 to 2.85 GHz (23.5%). Switchable broadside and conical radiation patterns are observed across this entire operating bandwidth. The peak measured gain was 8.2 dBi for the broadside mode and 6.9 dBi for the conical mode. The overall profile of this antenna is 0.13λ 0 at its lowest operating frequency.", "title": "" }, { "docid": "a9372375af0500609b7721120181c280", "text": "Copyright © 2014 Alicia Garcia-Falgueras. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. 
In accordance of the Creative Commons Attribution License all Copyrights © 2014 are reserved for SCIRP and the owner of the intellectual property Alicia Garcia-Falgueras. All Copyright © 2014 are guarded by law and by SCIRP as a guardian.", "title": "" }, { "docid": "87b7b05c6af2fddb00f7b1d3a60413c1", "text": "Mobile crowdsensing (MCS) is a human-driven Internet of Things service empowering citizens to observe the phenomena of individual, community, or even societal value by sharing sensor data about their environment while on the move. Typical MCS service implementations utilize cloud-based centralized architectures, which consume a lot of computational resources and generate significant network traffic, both in mobile networks and toward cloud-based MCS services. Mobile edge computing (MEC) is a natural choice to distribute MCS solutions by moving computation to network edge, since an MEC-based architecture enables significant performance improvements due to the partitioning of problem space based on location, where real-time data processing and aggregation is performed close to data sources. This in turn reduces the associated traffic in mobile core and will facilitate MCS deployments of massive scale. This paper proposes an edge computing architecture adequate for massive scale MCS services by placing key MCS features within the reference MEC architecture. In addition to improved performance, the proposed architecture decreases privacy threats and permits citizens to control the flow of contributed sensor data. It is adequate for both data analytics and real-time MCS scenarios, in line with the 5G vision to integrate a huge number of devices and enable innovative applications requiring low network latency. Our analysis of service overhead introduced by distributed architecture and service reconfiguration at network edge performed on real user traces shows that this overhead is controllable and small compared with the aforementioned benefits. When enhanced by interoperability concepts, the proposed architecture creates an environment for the establishment of an MCS marketplace for bartering and trading of both raw sensor data and aggregated/processed information.", "title": "" }, { "docid": "1efeab8c3036ad5ec1b4dc63a857b392", "text": "In this paper, we present a motion planning framework for a fully deployed autonomous unmanned aerial vehicle which integrates two sample-based motion planning techniques, Probabilistic Roadmaps and Rapidly Exploring Random Trees. Additionally, we incorporate dynamic reconfigurability into the framework by integrating the motion planners with the control kernel of the UAV in a novel manner with little modification to the original algorithms. The framework has been verified through simulation and in actual flight. Empirical results show that these techniques used with such a framework offer a surprisingly efficient method for dynamically reconfiguring a motion plan based on unforeseen contingencies which may arise during the execution of a plan. The framework is generic and can be used for additional platforms.", "title": "" }, { "docid": "120befb9cfd02d522ef807269ffc4c66", "text": "Reading text in natural images has focused again the attention of many researchers during the last few years due to the increasingly availability of cheap image-capturing devices in low-cost products like mobile phones. Therefore, as text can be found on any environment, the applicability of text-reading systems is really extensive. 
For this purpose, we present in this paper a robust method to read text in natural images. It is composed of two main separated stages. Firstly, text is located in the image using a set of simple and fast-tocompute features highly discriminative between character and non-character objects. They are based on geometric and gradient properties. The second part of the system carries out the recognition of the previously detected text. It uses gradient features to recognize single characters and Dynamic Programming (DP) to correct misspelled words. Experimental results obtained with different challenging datasets show that the proposed system exceeds state-of-the-art performance, both in terms of localization and recognition.", "title": "" }, { "docid": "1fa03fade33e24a7553e761f2412b688", "text": "Trace chemical detection is important for a wide range of practical applications. Recently emerged two-dimensional (2D) crystals offer unique advantages as potential sensing materials with high sensitivity, owing to their very high surface-to-bulk atom ratios and semiconducting properties. Here, we report the first use of Schottky-contacted chemical vapor deposition grown monolayer MoS2 as high-performance room temperature chemical sensors. The Schottky-contacted MoS2 transistors show current changes by 2-3 orders of magnitude upon exposure to very low concentrations of NO2 and NH3. Specifically, the MoS2 sensors show clear detection of NO2 and NH3 down to 20 ppb and 1 ppm, respectively. We attribute the observed high sensitivity to both well-known charger transfer mechanism and, more importantly, the Schottky barrier modulation upon analyte molecule adsorption, the latter of which is made possible by the Schottky contacts in the transistors and is not reported previously for MoS2 sensors. This study shows the potential of 2D semiconductors as high-performance sensors and also benefits the fundamental studies of interfacial phenomena and interactions between chemical species and monolayer 2D semiconductors.", "title": "" }, { "docid": "3a3f3e1c0eac36d53a40d7639c3d65cc", "text": "The aim of this paper is to present a hybrid approach to accurate quantification of vascular structures from magnetic resonance angiography (MRA) images using level set methods and deformable geometric models constructed with 3-D Delaunay triangulation. Multiple scale filtering based on the analysis of local intensity structure using the Hessian matrix is used to effectively enhance vessel structures with various diameters. The level set method is then applied to automatically segment vessels enhanced by the filtering with a speed function derived from enhanced MRA images. Since the goal of this paper is to obtain highly accurate vessel borders, suitable for use in fluid flow simulations, in a subsequent step, the vessel surface determined by the level set method is triangulated using 3-D Delaunay triangulation and the resulting surface is used as a parametric deformable model. Energy minimization is then performed within a variational setting with a first-order internal energy; the external energy is derived from 3-D image gradients. Using the proposed method, vessels are accurately segmented from MRA data.", "title": "" }, { "docid": "94c475dea38adf1f2e3af8b9c7a9bc40", "text": "The Mining Software Repositories (MSR) research community has grown significantly since the first MSR workshop was held in 2004. 
As the community continues to broaden its scope and deepens its expertise, it is worthwhile to reflect on the best practices that our community has developed over the past decade of research. We identify these best practices by surveying past MSR conferences and workshops. To that end, we review all 117 full papers published in the MSR proceedings between 2004 and 2012. We extract 268 comments from these papers, and categorize them using a grounded theory methodology. From this evaluation, four high-level themes were identified: data acquisition and preparation, synthesis, analysis, and sharing/replication. Within each theme we identify several common recommendations, and also examine how these recommendations have evolved over the past decade. In an effort to make this survey a living artifact, we also provide a public forum that contains the extracted recommendations in the hopes that the MSR community can engage in a continuing discussion on our evolving best practices.", "title": "" } ]
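The natural-image text-reading passage a little earlier in this list uses dynamic programming to correct misspelled words after character recognition. One common way to realize that step is a DP edit distance against a lexicon, sketched below; the cost scheme and the tiny example dictionary are assumptions, not the cited system's actual configuration.

```python
def edit_distance(a, b):
    """Classic DP (Levenshtein) distance between two strings, one row at a time."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, start=1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution or match
    return dp[len(b)]

def correct_word(recognized, lexicon):
    """Pick the lexicon word closest to the raw character-recognizer output."""
    return min(lexicon, key=lambda w: edit_distance(recognized, w))

lexicon = ["exit", "open", "closed", "parking"]   # made-up example dictionary
print(correct_word("ex1t", lexicon))              # -> "exit"
```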
scidocsrr
a4a74d88724099d063e0767f32505a01
Vision System for AGI: Problems and Directions
[ { "docid": "94bb7d2329cbea921c6f879090ec872d", "text": "We explore building generative neural network models of popular reinforcement learning environments. Our world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model, and transfer this policy back into the actual environment. An interactive version of this paper is available at https://worldmodels.github.io", "title": "" }, { "docid": "5cb704f9980a9d28da4cabd903bf1699", "text": "The ability for an agent to localize itself within an environment is crucial for many real-world applications. For unknown environments, Simultaneous Localization and Mapping (SLAM) enables incremental and concurrent building of and localizing within a map. We present a new, differentiable architecture, Neural Graph Optimizer, progressing towards a complete neural network solution for SLAM by designing a system composed of a local pose estimation model, a novel pose selection module, and a novel graph optimization process. The entire architecture is trained in an end-to-end fashion, enabling the network to automatically learn domain-specific features relevant to the visual odometry and avoid the involved process of feature engineering. We demonstrate the effectiveness of our system on a simulated 2D maze and the 3D ViZ-Doom environment.", "title": "" } ]
[ { "docid": "d055902aa91efacb35a204132c51a68e", "text": "This paper provides a method for improving tensor-based compositional distributional models of meaning by the addition of an explicit disambiguation step prior to composition. In contrast with previous research where this hypothesis has been successfully tested against relatively simple compositional models, in our work we use a robust model trained with linear regression. The results we get in two experiments show the superiority of the prior disambiguation method and suggest that the effectiveness of this approach is modelindependent.", "title": "" }, { "docid": "cc9c9720b223ff1d433758bce11a373a", "text": "or to skim the text of the article quickly, while academics are more likely to download and print the paper. Further research investigating the ratio between HTML views and PDF downloads could uncover interesting findings about how the public interacts with the open access (OA) research literature. Scholars In addition to tracking scholarly impacts on traditionally invisible audiences, altmetrics hold potential for tracking previously hidden scholarly impacts. Faculty of 1000 Faculty of 1000 (F1000) is a service publishing reviews of important articles, as adjudged by a core “faculty” of selected scholars. Wets, Weedon, and Velterop (2003) argue that F1000 is valuable because it assesses impact at the article level, and adds a human level assessment that statistical indicators lack. Others disagree (Nature Neuroscience, 2005), pointing to a very strong correlation (r = 0.93) between F1000 score and Journal Impact Factor. This said, the service has clearly demonstrated some value, as over two thirds of the world’s top research institutions pay the annual subscription fee to use F1000 (Wets et al., 2003). Moreover, F1000 has been to shown to spot valuable articles which “sole reliance on bibliometric indicators would have led [researchers] to miss” (Allen, Jones, Dolby, Lynn, & Walport, 2009, p. 1). In the PLoS dataset, F1000 recommendations were not closely associated with citation or other altmetrics counts, and formed their own factor in factor analysis, suggesting they track a relatively distinct sort of impact. Conversation (scholarly blogging) In this context, “scholarly blogging” is distinguished from its popular counterpart by the expertise and qualifications of the blogger. While a useful distinction, this is inevitably an imprecise one. One approach has been to limit the investigation to science-only aggregators like ResearchBlogging (Groth & Gurney, 2010; Shema & Bar-Ilan, 2011). Academic blogging has grown steadily in visibility; academics have blogged their dissertations (Efimova, 2009), and the ranks of academic bloggers contain several Fields Medalists, Nobel laureates, and other eminent scholars (Nielsen, 2009). Economist and Nobel laureate Paul Krugman (Krugman, 2012), himself a blogger, argues that blogs are replacing the working-paper culture that has in turn already replaced economics journals as distribution tools. Given its importance, there have been surprisingly few altmetrics studies of scholarly blogging. Extant research, however, has shown that blogging shares many of the characteristics of more formal communication, including a long-tail distribution of cited articles (Groth & Gurney, 2010; Shema & Bar-Ilan, 2011). Although science bloggers can write anonymously, most blog under their real names (Shema & Bar-Ilan, 2011). 
Conversation (Twitter) Scholars on Twitter use the service to support different activities, including teaching (Dunlap & Lowenthal, 2009; Junco, Heiberger, & Loken, 2011), participating in conferences (Junco et al., 2011; Letierce et al., 2010; Ross et al., 2011), citing scholarly articles (Priem & Costello, 2010; Weller, Dröge, & Puschmann, 2011), and engaging in informal communication (Ross et al., 2011; Zhao & Rosson, 2009). Citations from Twitter are a particularly interesting data source, since they capture the sort of informal discussion that accompanies early important work. There is, encouragingly, evidence that Tweeting scholars take citations from Twitter seriously, both in creating and reading them (Priem & Costello, 2010). The number of scholars on Twitter is growing steadily, as shown in Figure 1. The same study found that, in a sample of around 10,000 Ph.D. students and faculty members at five representative universities, one 1 in 40 scholars had an active Twitter account. Although some have suggested that Twitter is only used by younger scholars, rank was not found to significantly associate with Twitter use, and in fact faculty members’ tweets were twice as likely to discuss their and others’ scholarly work. Conversation (article commenting) Following the lead of blogs and other social media platforms, many journals have added article-level commenting to their online platforms in the middle of the last decade. In theory, the discussion taking place in these threads is another valuable lens into the early impacts of scientific ideas. In practice, however, many commenting systems are virtual ghost towns. In a sample of top medical journals, fully half had commenting systems laying idle, completely unused by anyone (Schriger, Chehrazi, Merchant, & Altman, 2011). However, commenting was far from universally unsuccessful; several journals had comments on 50-76% of their articles. In a sample from the British Medical Journal, articles had, on average, nearly five comments each (Gotzsche, Delamothe, Godlee, & Lundh, 2010). Additionally, many articles may accumulate comments in other environments; the growing number of external comment sites allows users to post comments on journal articles published elsewhere. These have tended to appear and disappear quickly over the last few years. Neylon (2010) argues that online article commenting is thriving, particularly for controversial papers, but that \"...people are much more comfortable commenting in their own spaces” (para. 5), like their blogs and on Twitter. Reference managers Reference managers like Mendeley and CiteULike are very useful sources of altmetrics data and are currently among the most studied. Although scholars have used electronic reference managers for some time, this latest generation offers scientometricians the chance to query their datasets, offering a compelling glimpse into scholars’ libraries. It is worth summarizing three main points, though. First, the most important social reference managers are CiteULike and Mendeley. Another popular reference manager, Zotero, has received less study (but see Lucas, 2008). Papers and ReadCube are newer, smaller reference managers; Connotea and 2Collab both dealt poorly with spam; the latter has closed, and the former may follow. Second, the usage base of social reference managers—particularly Mendeley—is large and growing rapidly. 
Mendeley’s coverage, in particular, rivals that of commercial databases like Scopus and Web of Science (WoS) (Bar-Ilan et al., 2012; Haustein & Siebenlist, 2011; Li et al., 2011; Priem et al., 2012). Finally, inclusion in reference managers correlates to citation more strongly than most other altmetrics. Working with various datasets, researchers have reported correlations of .46 (Bar-Ilan, 2012), .56 (Li et al., 2011), and .5 (Priem et al., 2012) between inclusion in users’ Mendeley libraries, and WoS citations. This closer relationship is likely because of the importance of reference managers in the citation workflow. However, the lack of perfect or even strong correlation suggests that this altmetric, too, captures influence not reflected in the citation record. There has been particular interest in using social bookmarking for recommendations (Bogers & van den Bosch, 2008; Jiang, He, & Ni, 2011). pdf downloads As discussed earlier, most research on downloads today does not distinguish between HTML views in PDF downloads. However there is a substantial and growing body of research investigating article downloads, and their relation to later citation. Several researchers have found that downloads predict or correlate with later citation (Perneger, 2004; Brody et al., 2006). The MESUR project is the largest of these studies to date, and used linked usage events to create a novel map of the connections between disciplines, as well as analyses of potential metrics using download and citation data in novel ways (Bollen, et al., 2009). Shuai, Pepe, and Bollen (2012) show that downloads and Twitter citations interact, with Twitter likely driving traffic to new papers, and also reflecting reader interest. Uses, limitations and future research Uses Several uses of altmetrics have been proposed, which aim to capitalize on their speed, breadth, and diversity, including use in evaluation, analysis, and prediction. Evaluation The breadth of altmetrics could support more holistic evaluation efforts; a range of altmetrics may help solve the reliability problems of individual measures by triangulating scores from easily-accessible “converging partial indicators” (Martin & Irvine, 1983, p. 1). Altmetrics could also support the evaluation of increasingly important, non-traditional scholarly products like datasets and software, which are currently underrepresented in the citation record (Howison & Herbsleb, 2011; Sieber & Trumbo, 1995). Research that impacts wider audiences could also be better rewarded; Neylon (2012) relates a compelling example of how tweets reveal clinical use of a research paper—use that would otherwise go undiscovered and unrewarded. The speed of altmetrics could also be useful in evaluation, particularly for younger scholars whose research has not yet accumulated many citations. Most importantly, altmetrics could help open a window on scholars’ “scientific ‘street cred’” (Cronin, 2001, p. 6), helping reward researchers whose subtle influences—in conversations, teaching, methods expertise, and so on— influence their colleagues without perturbing the citation record. Of course, potential evaluators must be strongly cautioned that while uncritical application of any metric is dangerous, this is doubly so with altmetrics, whose research base is not yet adequate to support high-stakes decisions.", "title": "" }, { "docid": "4e791e4367b5ef9ff4259a87b919cff7", "text": "Considerable attention has been paid to dating the earliest appearance of hominins outside Africa. 
The earliest skeletal and artefactual evidence for the genus Homo in Asia currently comes from Dmanisi, Georgia, and is dated to approximately 1.77–1.85 million years ago (Ma)1. Two incisors that may belong to Homo erectus come from Yuanmou, south China, and are dated to 1.7 Ma2; the next-oldest evidence is an H. erectus cranium from Lantian (Gongwangling)—which has recently been dated to 1.63 Ma3—and the earliest hominin fossils from the Sangiran dome in Java, which are dated to about 1.5–1.6 Ma4. Artefacts from Majuangou III5 and Shangshazui6 in the Nihewan basin, north China, have also been dated to 1.6–1.7 Ma. Here we report an Early Pleistocene and largely continuous artefact sequence from Shangchen, which is a newly discovered Palaeolithic locality of the southern Chinese Loess Plateau, near Gongwangling in Lantian county. The site contains 17 artefact layers that extend from palaeosol S15—dated to approximately 1.26 Ma—to loess L28, which we date to about 2.12 Ma. This discovery implies that hominins left Africa earlier than indicated by the evidence from Dmanisi. An Early Pleistocene artefact assemblage from the Chinese Loess Plateau indicates that hominins had left Africa by at least 2.1 million years ago, and occupied the Loess Plateau repeatedly for a long time.", "title": "" }, { "docid": "2fc1afae973ddd832afa92d27222ef09", "text": "In our 1990 paper, we showed that managers concerned with their reputations might choose to mimic the behavior of other managers and ignore their own information. We presented a model in which “smart” managers receive correlated, informative signals, whereas “dumb” managers receive independent, uninformative signals. Managers have an incentive to follow the herd to indicate to the labor market that they have received the same signal as others, and hence are likely to be smart. This model of reputational herding has subsequently found empirical support in a number of recent papers, including Judith A. Chevalier and Glenn D. Ellison’s (1999) study of mutual fund managers and Harrison G. Hong et al.’s (2000) study of equity analysts. We argued in our 1990 paper that reputational herding “requires smart managers’ prediction errors to be at least partially correlated with each other” (page 468). In their Comment, Marco Ottaviani and Peter Sørensen (hereafter, OS) take issue with this claim. They write: “correlation is not necessary for herding, other than in degenerate cases.” It turns out that the apparent disagreement hinges on how strict a definition of herding one adopts. In particular, we had defined a herding equilibrium as one in which agentB alwaysignores his own information and follows agent A. (See, e.g., our Propositions 1 and 2.) In contrast, OS say that there is herding when agent B sometimesignores his own information and follows agent A. The OS conclusion is clearly correct given their weaker definition of herding. At the same time, however, it also seems that for the stricter definition that we adopted in our original paper, correlated errors on the part of smart managers are indeed necessary for a herding outcome—even when one considers the expanded parameter space that OS do. We will try to give some intuition for why the different definitions of herding lead to different conclusions about the necessity of correlated prediction errors. Along the way, we hope to convince the reader that our stricter definition is more appropriate for isolating the economic effects at work in the reputational herding model. 
An example is helpful in illustrating what is going on. Consider a simple case where the parameter values are as follows: p = 3/4; q = 1/4; z = 1/2, and u = 1/2. In our 1990 paper, we also imposed the constraint that z = ap + (1 − a)q, which further implies that a = 1/2. The heart of the OS Comment is the idea that this constraint should be disposed of—i.e., we should look at other values of a. Without loss of generality, we will consider values of a above 1/2, and distinguish two cases.", "title": "" }, { "docid": "454c390fcd7d9a3d43842aee19c77708", "text": "Altmetrics have gained momentum and are meant to overcome the shortcomings of citation-based metrics. In this regard some light is shed on the dangers associated with the new “all-in-one” indicator altmetric score.", "title": "" }, { "docid": "6e80065ade40ada9efde1f58859498bc", "text": "Neural networks, as powerful tools for data mining and knowledge engineering, can learn from data to build feature-based classifiers and nonlinear predictive models. Training neural networks involves the optimization of nonconvex objective functions, and usually, the learning process is costly and infeasible for applications associated with data streams. A possible, albeit counterintuitive, alternative is to randomly assign a subset of the networks’ weights so that the resulting optimization task can be formulated as a linear least-squares problem. This methodology can be applied to both feedforward and recurrent networks, and similar techniques can be used to approximate kernel functions. Many experimental results indicate that such randomized models can reach sound performance compared to fully adaptable ones, with a number of favorable benefits, including (1) simplicity of implementation, (2) faster learning with less intervention from human beings, and (3) possibility of leveraging overall linear regression and classification algorithms (e.g., l1 norm minimization for obtaining sparse formulations). This class of neural networks is attractive and valuable to the data mining community, particularly for handling large scale data mining in real-time. However, the literature in the field is extremely vast and fragmented, with many results being reintroduced multiple times under different names. This overview aims to provide a self-contained, uniform introduction to the different ways in which randomization can be applied to the design of neural networks and kernel functions. A clear exposition of the basic framework underlying all these approaches helps to clarify innovative lines of research, open problems, and most importantly, foster the exchanges of well-known results throughout different communities. © 2017 John Wiley & Sons, Ltd", "title": "" }, { "docid": "89238dd77c0bf0994b53190078eb1921", "text": "Several methods exist for a computer to generate music based on data including Markov chains, recurrent neural networks, recombinancy, and grammars. We explore the use of unit selection and concatenation as a means of generating music using a procedure based on ranking, where we consider a unit to be a variable-length number of measures of music. We first examine whether a unit selection method, that is restricted to a finite size unit library, can be sufficient for encompassing a wide spectrum of music. This is done by developing a deep autoencoder that encodes a musical input and reconstructs the input by selecting from the library. 
We then describe a generative model that combines a deep structured semantic model (DSSM) with an LSTM to predict the next unit, where units consist of four, two, and one measures of music. We evaluate the generative model using objective metrics including mean rank and accuracy and with a subjective listening test in which expert musicians are asked to complete a forced-choice ranking task. Our system is compared to a note-level generative baseline model that consists of a stacked LSTM trained to predict forward by one note.", "title": "" }, { "docid": "b306a3b20b73f537d8d9634957f0688c", "text": "In this paper, we report real-time measurement results of various contact forces exerted on a new flexible capacitive three-axis tactile sensor array based on polydimethylsiloxane (PDMS). A unit sensor consists of two thick PDMS layers with embedded copper electrodes, a spacer layer, an insulation layer and a bump layer. There are four capacitors in a unit sensor to decompose a contact force into its normal and shear components. They are separated by a wall-type spacer to improve the mechanical response time. Four capacitors are arranged in a square form. The whole sensor is an 8 × 8 array of unit sensors and each unit sensor responds to forces in all three axes. Measurement results show that the full-scale range of detectable force is around 0–20 mN (250 kPa) for all three axes. The estimated sensitivities of a unit sensor with the current setup are 1.3, 1.2 and 1.2%/mN for the x-, y- and z-axes, respectively. A simple mechanical model has been established to calculate each axial force component from the measured capacitance value. Normal and shear force distribution images are captured from the fabricated sensor using a real-time measurement system. The mechanical response time of a unit sensor has been estimated to be less than 160 ms. The flexibility of the sensor has also been demonstrated by operating the sensor on a curved surface of 4 mm radius of curvature. (Some figures in this article are in colour only in the electronic version)", "title": "" }, { "docid": "41cf1b873d69f15cbc5fa25e849daa61", "text": "Methods for controlling the bias/variance tradeoff typically assume that overfitting or overtraining is a global phenomenon. For multi-layer perceptron (MLP) neural networks, global parameters such as the training time (e.g. based on validation tests), network size, or the amount of weight decay are commonly used to control the bias/variance tradeoff. However, the degree of overfitting can vary significantly throughout the input space of the model. We show that overselection of the degrees of freedom for an MLP trained with backpropagation can improve the approximation in regions of underfitting, while not significantly overfitting in other regions. This can be a significant advantage over other models. Furthermore, we show that “better” learning algorithms such as conjugate gradient can in fact lead to worse generalization, because they can be more prone to creating varying degrees of overfitting in different regions of the input space. While experimental results cannot cover all practical situations, our results do help to explain common behavior that does not agree with theoretical expectations. 
Our results suggest one important reason for the relative success of MLPs, bring into question common beliefs about neural network training regarding training algorithms, overfitting, and optimal network size, suggest alternate guidelines for practical use (in terms of the training algorithm and network size selection), and help to direct future work (e.g. regarding the importance of the MLP/BP training bias, the possibility of worse performance for “better” training algorithms, local “smoothness” criteria, and further investigation of localized overfitting).", "title": "" }, { "docid": "ed282d88b5f329490f390372c502f238", "text": "Extracting opinion expressions from text is an essential task of sentiment analysis, which is usually treated as one of the word-level sequence labeling problems. In such problems, compositional models with multiplicative gating operations provide efficient ways to encode the contexts, as well as to choose critical information. Thus, in this paper, we adopt Long Short-Term Memory (LSTM) recurrent neural networks to address the task of opinion expression extraction and explore the internal mechanisms of the model. The proposed approach is evaluated on the Multi-Perspective Question Answering (MPQA) opinion corpus. The experimental results demonstrate improvement over previous approaches, including the state-of-the-art method based on simple recurrent neural networks. We also provide a novel micro perspective to analyze the run-time processes and gain new insights into the advantages of LSTM selecting the source of information with its flexible connections and multiplicative gating operations.", "title": "" }, { "docid": "dc813db85741a56d0f47044b9c2276d0", "text": "We study the complexity required for the implementation of multi-agent contracts under a variety of solution concepts. A contract is a mapping from strategy profiles to outcomes. Practical implementation of a contract requires it to be “simple”, an elusive concept that needs to be formalized. A major source of complexity is the burden involved in verifying contract fulfillment (for example in a court of law). Contracts which specify a small number of outcomes are easier to verify and are less prone to disputes. We therefore measure the complexity of a contract by the number of outcomes it specifies. Our approach is general in the sense that all strategic interactions represented by a normal form game are allowed. The class of solution concepts we consider is rather exhaustive and includes Nash equilibrium with both pure and mixed strategies, dominant strategy implementation, iterative elimination of dominated strategies and strong equilibria.\n Some interesting insights can be gained from our analysis: Firstly, our results indicate that the complexity of implementation is independent of the size of the strategy spaces of the players but for some solution concepts grows with the number of players. Second, the complexity of unique implementation is sometimes slightly larger, but not much larger than non-unique implementation. Finally and maybe surprisingly, for most solution concepts implementation with optimal cost usually does not require higher complexity than the complexity necessary for implementation at all.", "title": "" }, { "docid": "b0741999659724f8fa5dc1117ec86f0d", "text": "With the rapidly growing scales of statistical problems, subset-based communication-free parallel MCMC methods are a promising future for large scale Bayesian analysis. 
In this article, we propose a new Weierstrass sampler for parallel MCMC based on independent subsets. The new sampler approximates the full data posterior samples via combining the posterior draws from independent subset MCMC chains, and thus enjoys a higher computational efficiency. We show that the approximation error for the Weierstrass sampler is bounded by some tuning parameters and provide suggestions for choice of the values. Simulation study shows the Weierstrass sampler is very competitive compared to other methods for combining MCMC chains generated for subsets, including averaging and kernel smoothing.", "title": "" }, { "docid": "7fd9da6cb91385238335807348d7879e", "text": "Modeling the popularity dynamics of an online item is an important open problem in computational social science. This paper presents an in-depth study of popularity dynamics under external promotions, especially in predicting popularity jumps of online videos, and determining effective and efficient schedules to promote online content. The recently proposed Hawkes Intensity Process (HIP) models popularity as a non-linear interplay between exogenous stimuli and the endogenous reactions. Here, we propose two novel metrics based on HIP: to describe popularity gain per unit of promotion, and to quantify the time it takes for such effects to unfold. We make increasingly accurate forecasts of future popularity by including information about the intrinsic properties of the video, promotions it receives, and the non-linear effects of popularity ranking. We illustrate by simulation the interplay between the unfolding of popularity over time, and the time-sensitive value of resources. Lastly, our model lends a novel explanation of the commonly adopted periodic and constant promotion strategy in advertising, as increasing the perceived viral potential. This study provides quantitative guidelines about setting promotion schedules considering content virality, timing, and economics.", "title": "" }, { "docid": "d603e92c3f3c8ab6a235631ee3a55d52", "text": "This work focuses on algorithms which learn from examples to perform multiclass text and speech categorization tasks. We rst show how to extend the standard notion of classiication by allowing each instance to be associated with multiple labels. We then discuss our approach for multiclass multi-label text categorization which is based on a new and improved family of boosting algorithms. We describe in detail an implementation, called BoosTexter, of the new boosting algorithms for text categorization tasks. We present results comparing the performance of BoosTexter and a number of other text-categorization algorithms on a variety of tasks. We conclude by describing the application of our system to automatic call-type identiication from unconstrained spoken customer responses.", "title": "" }, { "docid": "7bd421d61df521c300740f4ed6789fa5", "text": "Breast cancer has become a common disease around the world. Expert systems are valuable tools that have been successful for the disease diagnosis. In this research, we accordingly develop a new knowledge-based system for classification of breast cancer disease using clustering, noise removal, and classification techniques. Expectation Maximization (EM) is used as a clustering method to cluster the data in similar groups. We then use Classification and Regression Trees (CART) to generate the fuzzy rules to be used for the classification of breast cancer disease in the knowledge-based system of fuzzy rule-based reasoning method. 
To overcome the multi-collinearity issue, we incorporate Principal Component Analysis (PCA) in the proposed knowledge-based system. Experimental results on Wisconsin Diagnostic Breast Cancer and Mammographic mass datasets show that proposed methods remarkably improves the prediction accuracy of breast cancer. The proposed knowledge-based system can be used as a clinical decision support system to assist medical practitioners in the healthcare practice.", "title": "" }, { "docid": "a0d4089e55a0a392a2784ae50b6fa779", "text": "Organizations place a great deal of emphasis on hiring individuals who are a good fit for the organization and the job. Among the many ways that individuals are screened for a job, the employment interview is particularly prevalent and nearly universally used (Macan, 2009; Huffcutt and Culbertson, 2011). This Research Topic is devoted to a construct that plays a critical role in our understanding of job interviews: impression management (IM). In the interview context, IM describes behaviors an individual uses to influence the impression that others have of them (Bozeman and Kacmar, 1997). For instance, a job applicant can flatter an interviewer to be seen as likable (i.e., ingratiation), play up their qualifications and abilities to be seen as competent (i.e., self-promotion), or utilize excuses or justifications to make up for a negative event or error (i.e., defensive IM; Ellis et al., 2002). IM has emerged as a central theme in the interview literature over the last several decades (for reviews, see Posthuma et al., 2002; Levashina et al., 2014). Despite some pioneering early work (e.g., Schlenker, 1980; Leary and Kowalski, 1990; Stevens and Kristof, 1995), there has been a resurgence of interest in the area over the last decade. While the literature to date has set up a solid foundational knowledge about interview IM, there are a number of emerging trends and directions. In the following, we lay out some critical areas of inquiry in interview IM, and highlight how the innovative set of papers in this Research Topic is illustrative of these new directions.", "title": "" }, { "docid": "6020b70701164e0a14b435153db1743e", "text": "Supply chain Management has assumed a significant role in firm's performance and has attracted serious research attention over the last few years. In this paper attempt has been made to review the literature on Supply Chain Management. A literature review reveals a considerable spurt in research in theory and practice of SCM. We have presented a literature review for 29 research papers for the period between 2005 and 2011. The aim of this study was to provide an up-to-date and brief review of the SCM literature that was focused on broad areas of the SCM concept.", "title": "" }, { "docid": "71f7ce3b6e4a20a112f6a1ae9c22e8e1", "text": "The neural correlates of many emotional states have been studied, most recently through the technique of fMRI. However, nothing is known about the neural substrates involved in evoking one of the most overwhelming of all affective states, that of romantic love, about which we report here. The activity in the brains of 17 subjects who were deeply in love was scanned using fMRI, while they viewed pictures of their partners, and compared with the activity produced by viewing pictures of three friends of similar age, sex and duration of friendship as their partners. 
The activity was restricted to foci in the medial insula and the anterior cingulate cortex and, subcortically, in the caudate nucleus and the putamen, all bilaterally. Deactivations were observed in the posterior cingulate gyrus and in the amygdala and were right-lateralized in the prefrontal, parietal and middle temporal cortices. The combination of these sites differs from those in previous studies of emotion, suggesting that a unique network of areas is responsible for evoking this affective state. This leads us to postulate that the principle of functional specialization in the cortex applies to affective states as well.", "title": "" } ]
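One of the passages in the list above describes randomly assigning a subset of a network's weights so that training reduces to a linear least-squares problem. The snippet below is a toy sketch of that idea with synthetic data; the layer sizes, activation, and data are arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))            # synthetic inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)      # noisy targets

n_hidden = 50
W = rng.normal(size=(1, n_hidden))                     # random, then frozen
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                                 # random feature expansion

# Only the output layer is trained, via ordinary least squares (convex, closed form).
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
rmse = np.sqrt(np.mean((H @ beta - y) ** 2))
print(f"training RMSE: {rmse:.3f}")
```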
scidocsrr
ccd6547fc30bdba03e4a6baf790f74fb
Cross-lingual Transfer of Semantic Role Labeling Models
[ { "docid": "b0991cd60b3e94c0ed3afede89e13f36", "text": "It has been established that incorporating word cluster features derived from large unlabeled corpora can significantly improve prediction of linguistic structure. While previous work has focused primarily on English, we extend these results to other languages along two dimensions. First, we show that these results hold true for a number of languages across families. Second, and more interestingly, we provide an algorithm for inducing cross-lingual clusters and we show that features derived from these clusters significantly improve the accuracy of cross-lingual structure prediction. Specifically, we show that by augmenting direct-transfer systems with cross-lingual cluster features, the relative error of delexicalized dependency parsers, trained on English treebanks and transferred to foreign languages, can be reduced by up to 13%. When applying the same method to direct transfer of named-entity recognizers, we observe relative improvements of up to 26%.", "title": "" } ]
[ { "docid": "fed01e8d23759cbb007a018a3784bf9a", "text": "If you were to just glance at the spectra figures provided in the main text, using the log-determinant might seem like a reasonable thing to do. However, we note that (at least for the MNIST experiments) the largest singular values for the ‘well behaved’ runs are distinctly lower than those for the ‘poorly behaved’ ones. This suggests that the conditioning might be more pertinent than the determinant.", "title": "" }, { "docid": "14d340d91fcfcbd0bc6540891a131c0c", "text": "OBJECTIVE\nTo investigate the relationship between Facebook addiction, narcissism and self-esteem and to see if gender played any role in this equation.\n\n\nMETHODS\nThe correlational study was conducted from February to March 2013 at the Department of Psychology, University of Sargodha, Punjab, Pakistan. Using convenient sampling, two equal groups of male and female students were enrolled from different departments of the university. Bergen Facebook Addiction Scale, Hypersensitive Narcissism Scale and Rosenberg's Self-esteem Scale were used for evaluation. SPSS 17 was used for statistical analysis.\n\n\nRESULTS\nOf the 200 subjects in the study, 100(50%) each were males and females. Facebook addiction was positively correlated with narcissism(r=0.20; p<0.05) and negatively with self-esteem(r=-0.18; p<0.05). Relationship between narcissism and self-esteem was non-significant(r=0.05; p>0.05). Facebook addiction was a significant predictor of narcissistic behaviour (b=0.202; p<0.001) and low self-esteem (b=-0.18; p<0.001). There were no significant gender differences in the three variables (p>0.05 each).\n\n\nCONCLUSIONS\nFacebook addiction was a significant predictor of narcissistic behaviour and low levels of self-esteem among students.", "title": "" }, { "docid": "aeb9a3b1de003f87f6260f1cbe1e16d9", "text": "As learning environments are gaining in features and in complexity, the e-learning industry is more and more interested in features easing teachers’ work. Learning design being a critical and time consuming task could be facilitated by intelligent components helping teachers build their learning activities. The Intelligent Learning Design Recommendation System (ILD-RS) is such a software component, designed to recommend learning paths during the learning design phase in a Learning Management System (LMS). Although ILD-RS exploits several parameters which are sometimes subject to controversy, such as learning styles and teaching styles, the main interest of the component lies on its algorithm based on Markov decision processes that takes into account the teacher’s use to refine its accuracy.", "title": "" }, { "docid": "40714e8b4c58666e4044789ffe344493", "text": "The paper presents a novel calibration method for fisheye lens. Five parameters, which fully reflect characters of fisheye lens, are proposed. Linear displacement platform is used to acquire precise sliding displacement between the target image and fisheye lens. Laser calibration method is designed to obtain the precise value of optical center. A convenient method, which is used to calculate the virtual focus of the fisheye lens, is proposed. To verify the result, indoor environment is built up to measure the localization error of omni-directional robot. Image including landmarks is acquired by fisheye lens and delivered to DSP (Digital Signal Processor) to futher process. 
Error analysis to localization of omni-directional robot is showed in the conclusion.", "title": "" }, { "docid": "a3642ac7aff09f038df823bc2bab3b95", "text": "We assess the risk of phishing on mobile platforms. Mobile operating systems and browsers lack secure application identity indicators, so the user cannot always identify whether a link has taken her to the expected application. We conduct a systematic analysis of ways in which mobile applications and web sites link to each other. To evaluate the risk, we study 85 web sites and 100 mobile applications and discover that web sites and applications regularly ask users to type their passwords into contexts that are vulnerable to spoofing. Our implementation of sample phishing attacks on the Android and iOS platforms demonstrates that attackers can spoof legitimate applications with high accuracy, suggesting that the risk of phishing attacks on mobile platforms is greater than has previously been appreciated.", "title": "" }, { "docid": "a8ae6f14a7e308b70804e7f898c34876", "text": "Find the secret to improve the quality of life by reading this architecting dependable systems. This is a kind of book that you need now. Besides, it can be your favorite book to read after having this book. Do you ask why? Well, this is a book that has different characteristic with others. You may not need to know who the author is, how wellknown the work is. As wise word, never judge the words from who speaks, but make the words as your good value to your life.", "title": "" }, { "docid": "5cd5cc82b973ede163528a5755c5cc75", "text": "The wave of digital health is continuously growing and promises to transform healthcare and optimize the patients' experience. Asthma is in the center of these digital developments, as it is a chronic disease that requires the continuous attention of both health care professionals and patients themselves. The accurate and timely assessment of the state of asthma is the fundamental basis of digital health approaches and is also the most significant factor toward the preventive and efficient management of the disease. Furthermore, the necessity of inhaled medication offers a basic platform upon which modern technologies can be integrated, namely the inhaler device itself. Inhaler-based monitoring devices were introduced in the beginning of the 1980s and have been evolving but mainly for the assessment of medication adherence. As technology progresses and novel sensing components are becoming available, the enhancement of inhalers with a wider range of monitoring capabilities holds the promise to further support and optimize asthma self-management. The current article aims to take a step for the mapping of this territory and start the discussion among healthcare professionals and engineers for the identification and the development of technologies that can offer personalized asthma self-management with clinical significance. In this direction, a technical review of inhaler based monitoring devices is presented, together with an overview of their use in clinical research. The aggregated results are then summarized and discussed for the identification of key drivers that can lead the future of inhalers.", "title": "" }, { "docid": "15d2651aa06ac8276a8cc48d3399a504", "text": "Recently, the NLP community has shown a renewed interest in lexical semantics in the extent of automatic recognition of semantic relationships between pairs of words in text. 
Lexical semantics has become increasingly important in many natural language applications, this approach to semantics is concerned with psychological facts associated with meaning of words and how these words can be connected in semantic relations to build ontologies that provide a shared vocabulary to model a specified domain. And represent a structural framework for organizing information across fields of Artificial Intelligence (AI), Semantic Web, systems engineering and information architecture. But current systems mainly concentrate on classification of semantic relations rather than to give solutions for how these relations can be created [14]. At the same time, systems that do provide methods for creating the relations tend to ignore the context in which the conceptual relationships occur. Furthermore, methods that address semantic (non-taxonomic) relations are yet to come up with widely accepted ways of enhancing the process of classifying and extracting semantic relations. In this research we will focus on the learning of semantic relations patterns between word meanings by taking into consideration the surrounding context in the general domain. We will first generate semantic patterns in domain independent environment depending on previous specific semantic information, and a set of input examples. Our case of study will be causation relations. Then these patterns will classify causation in general domain texts taking into consideration the context of the relations, and then the classified relations will be used to learn new causation semantic patterns.", "title": "" }, { "docid": "58b5c0628b2b964aa75d65a241f028d7", "text": "This paper reports on the development and formal certification (proof of semantic preservation) of a compiler from Cminor (a C-like imperative language) to PowerPC assembly code, using the Coq proof assistant both for programming the compiler and for proving its correctness. Such a certified compiler is useful in the context of formal methods applied to the certification of critical software: the certification of the compiler guarantees that the safety properties proved on the source code hold for the executable compiled code as well.", "title": "" }, { "docid": "f07d9d733aee86d67aeb8a21070f7b04", "text": "Trading communication with redundant computation can increase the silicon efficiency of FPGAs and GPUs in accelerating communication-bound sparse iterative solvers. While k iterations of the iterative solver can be unrolled to provide O(k) reduction in communication cost, the extent of this unrolling depends on the underlying architecture, its memory model, and the growth in redundant computation. This paper presents a systematic procedure to select this algorithmic parameter k, which provides communication-computation tradeoff on hardware accelerators like FPGA and GPU. We provide predictive models to understand this tradeoff and show how careful selection of k can lead to performance improvement that otherwise demands significant increase in memory bandwidth. On an Nvidia C2050 GPU, we demonstrate a 1.9×-42.6× speedup over standard iterative solvers for a range of benchmarks and that this speedup is limited by the growth in redundant computation. In contrast, for FPGAs, we present an architecture-aware algorithm that limits off-chip communication but allows communication between the processing cores. This reduces redundant computation and allows large k and hence higher speedups. 
Our approach for FPGA provides a 0.3×-4.4× speedup over same-generation GPU devices where k is picked carefully for both architectures for a range of benchmarks.", "title": "" }, { "docid": "0508c5927df12694c665cc8c7b72d6cb", "text": "Fingerprint analysts, firearms and toolmark examiners, and forensic odontologists often rely on the uniqueness proposition in order to support their theory of identification. However, much of the literature claiming to have proven uniqueness in the forensic identification sciences is methodologically weak, and suffers flaws that negate any such conclusion being drawn. The finding of uniqueness in any study appears to be an overstatement of the significance of its results, and in several instances, this claim is made despite contrary data being presented. The mathematical and philosophical viewpoint regarding this topic is that obtaining definitive proof of uniqueness is considered impossible by modern scientific methods. More importantly, there appears to be no logical reason to pursue such research, as commentators have established that uniqueness is not the essential requirement for forming forensic conclusions. The courts have also accepted this in several recent cases in the United States, and have dismissed the concept of uniqueness as irrelevant to the more fundamental question of the reliability of the forensic analysis.", "title": "" }, { "docid": "e93c5395f350d44b59f549a29e65d75c", "text": "Software Defined Networking (SDN) is an exciting technology that enables innovation in how we design and manage networks. Although this technology seems to have appeared suddenly, SDN is part of a long history of efforts to make computer networks more programmable. In this paper, we trace the intellectual history of programmable networks, including active networks, early efforts to separate the control and data plane, and more recent work on OpenFlow and network operating systems. We highlight key concepts, as well as the technology pushes and application pulls that spurred each innovation. Along the way, we debunk common myths and misconceptions about the technologies and clarify the relationship between SDN and related technologies such as network virtualization.", "title": "" }, { "docid": "aff504d1c2149d13718595fd3e745eb0", "text": "Figure 1 illustrates a typical example of a prediction problem: given some noisy observations of a dependent variable at certain values of the independent variable , what is our best estimate of the dependent variable at a new value, ? If we expect the underlying function to be linear, and can make some assumptions about the input data, we might use a least-squares method to fit a straight line (linear regression). Moreover, if we suspect may also be quadratic, cubic, or even nonpolynomial, we can use the principles of model selection to choose among the various possibilities. Gaussian process regression (GPR) is an even finer approach than this. Rather than claiming relates to some specific models (e.g. ), a Gaussian process can represent obliquely, but rigorously, by letting the data ‘speak’ more clearly for themselves. GPR is still a form of supervised learning, but the training data are harnessed in a subtler way. As such, GPR is a less ‘parametric’ tool. 
However, it’s not completely free-form, and if we’re unwilling to make even basic assumptions about , then more general techniques should be considered, including those underpinned by the principle of maximum entropy; Chapter 6 of Sivia and Skilling (2006) offers an introduction.", "title": "" }, { "docid": "1dd4a95adcd4f9e7518518148c3605ac", "text": "Kernel modules are an integral part of most operating systems (OS) as they provide flexible ways of adding new functionalities (such as file system or hardware support) to the kernel without the need to recompile or reload the entire kernel. Aside from providing an interface between the user and the hardware, these modules maintain system security and reliability. Malicious kernel level exploits (e.g. code injections) provide a gateway to a system's privileged level where the attacker has access to an entire system. Such attacks may be detected by performing code integrity checks. Several commodity operating systems (such as Linux variants and MS Windows) maintain signatures of different pieces of kernel code in a database for code integrity checking purposes. However, it quickly becomes cumbersome and time consuming to maintain a database of legitimate dynamic changes in the code, such as regular module updates. In this paper we present Mod Checker, which checks in-memory kernel modules' code integrity in real time without maintaining a database of hashes. Our solution applies to virtual environments that have multiple virtual machines (VMs) running the same version of the operating system, an environment commonly found in large cloud servers. Mod Checker compares kernel module among a pool of VMs within a cloud. We thoroughly evaluate the effectiveness and runtime performance of Mod Checker and conclude that Mod Checker is able to detect any change in a kernel module's headers and executable content with minimal or no impact on the guest operating systems' performance.", "title": "" }, { "docid": "277220ec963574c975fa74a864a99a5e", "text": "OBJECTIVE\n(1) To describe lacerations of the vaginal fornices, an injury known to be associated with consensual sexual intercourse, including known complications and treatment course, (2) to contrast these injuries with injuries sustained during sexual assault, and (3) to discuss the assessment of adolescent patients for sexual injuries.\n\n\nMETHODS\nWe present a case series of 4 female adolescent patients seen at a children's hospital over a period of 6 months. Each patient developed significant vaginal bleeding after sexual intercourse, and 3 of the patients presented to the emergency department with vital signs consistent with compensated shock.\n\n\nRESULTS\nEach patient was evaluated by pediatric surgery, and found to have a laceration of the vagina. Three of the patients described consensual intercourse prior to the onset of bleeding, and had lacerations of the vaginal fornices; these patients were determined to have injuries resulting from consensual sexual intercourse. The fourth patient reported sexual assault as the cause of her injuries, and was treated for longitudinal lacerations of the vaginal wall.\n\n\nCONCLUSIONS\nLacerations of the upper vagina are not frequently reported in forced vaginal intercourse, but are occasionally reported as injuries sustained during consensual coitus. In the absence of reported sexual assault, a severe vaginal fornix laceration is consistent with the diagnosis of coital injury from consensual intercourse. 
Diagnosis and treatment of this injury can be delayed due to the sensitive nature of these injuries. Bleeding can be profuse, leading to hemorrhagic shock, and these injuries may require transfusion of blood products and surgical repair in some cases. Complications may include hemoperitoneum, pneumoperitoneum, or retroperitoneal hematoma, even in the absence of complete vaginal perforation.\n\n\nPRACTICE IMPLICATIONS\nKnowledge of the consensual sexual injuries that may occur in adolescent patients can guide diagnosis, treatment, and counseling for the patient and her family, preventing long-term medical complications and legal consequences.", "title": "" }, { "docid": "1d0a84f55e336175fa60d3fa9eec9664", "text": "In this paper, we propose a novel method for image inpainting based on a Deep Convolutional Generative Adversarial Network (DCGAN). We define a loss function consisting of two parts: (1) a contextual loss that preserves similarity between the input corrupted image and the recovered image, and (2) a perceptual loss that ensures a perceptually realistic output image. Given a corrupted image with missing values, we use back-propagation on this loss to map the corrupted image to a smaller latent space. The mapped vector is then passed through the generative model to predict the missing content. The proposed framework is evaluated on the CelebA and SVHN datasets for two challenging inpainting tasks with random 80% corruption and large blocky corruption. Experiments show that our method can successfully predict semantic information in the missing region and achieve pixel-level photorealism, which is impossible by almost all existing methods.", "title": "" }, { "docid": "269ff4ed7920fdadf4c74dd10a1f3533", "text": "Precision agriculture (PA) refers to a series of practices and tools necessary to correctly evaluate farming needs. The accuracy and effectiveness of PA solutions are highly dependent on accurate and timely analysis of the soil conditions. In this paper, a proof-of-concept towards an autonomous precision irrigation system is provided through the integration of a center pivot (CP) irrigation system with wireless underground sensor networks (WUSNs). This Wireless Underground Sensor-Aided Center Pivot (WUSA-CP) system will provide autonomous irrigation management capabilities by monitoring the soil conditions in real time using wireless underground sensors. To this end, field experiments with a hydraulic drive and continuous-move center pivot irrigation system are conducted. The results are used to evaluate empirical channel models for soil-air communications. The experiment results show that the concept of WUSA-CP is feasible. Through the design of an underground antenna, communication ranges can be improved by up to 400% compared to conventional antenna designs. The results also highlight that the wireless communication channel between soil and air is significantly affected by many spatio-temporal aspects, such as the location and burial depth of the sensors, soil texture and physical properties, soil moisture, and the vegetation canopy height. To the best of our knowledge, this is the first work on the development of an autonomous precision irrigation system with WUSNs.", "title": "" },
{ "docid": "086886072f3ac6908bd47822ce7398d1", "text": "This paper presents a methodology to accurately record human finger postures during grasping. The main contribution consists of a kinematic model of the human hand reconstructed via magnetic resonance imaging of one subject that (i) is fully parameterized and can be adapted to different subjects, and (ii) is amenable to in-vivo joint angle recordings via optical tracking of markers attached to the skin. The principal novelty here is the introduction of a soft-tissue artifact compensation mechanism that can be optimally calibrated in a systematic way. The high-quality data gathered are employed to study the properties of hand postural synergies in humans, for the sake of ongoing neuroscience investigations. These data are analyzed and some comparisons with similar studies are reported. After a meaningful mapping strategy has been devised, these data could be employed to define robotic hand postures suitable to attain effective grasps, or could be used as prior knowledge in lower-dimensional, real-time avatar hand animation.", "title": "" }, { "docid": "2b89021776b9c2be56a624ea401be99e", "text": "Massive open online courses (MOOCs) are now being used across the world to provide millions of learners with access to education. Many learners complete these courses successfully, or to their own satisfaction, but the high numbers who do not finish remain a subject of concern for platform providers and educators. In 2013, a team from Stanford University analysed engagement patterns on three MOOCs run on the Coursera platform. They found four distinct patterns of engagement that emerged from MOOCs based on videos and assessments. However, not all platforms take this approach to learning design. Courses on the FutureLearn platform are underpinned by a social-constructivist pedagogy, which includes discussion as an important element. In this paper, we analyse engagement patterns on four FutureLearn MOOCs and find that only two clusters identified previously apply in this case. Instead, we see seven distinct patterns of engagement: Samplers, Strong Starters, Returners, Mid-way Dropouts, Nearly There, Late Completers and Keen Completers. This suggests that patterns of engagement in these massive learning environments are influenced by decisions about pedagogy. We also make some observations about approaches to clustering in this context.", "title": "" }, { "docid": "01034189c9a4aa11bdff074e7470b3f8", "text": "We introduce a method for predicting a control signal from another related signal, and apply it to voice puppetry: generating full facial animation from expressive information in an audio track. The voice puppet learns a facial control model from computer vision of real facial behavior, automatically incorporating vocal and facial dynamics such as coarticulation. Animation is produced by using audio to drive the model, which induces a probability distribution over the manifold of possible facial motions. We present a linear-time closed-form solution for the most probable trajectory over this manifold. The output is a series of facial control parameters, suitable for driving many different kinds of animation ranging from video-realistic image warps to 3D cartoon characters.
", "title": "" } ]
scidocsrr
cf879400b92832fb4a469bc47457143f
Deformed Lattice Detection in Real-World Images Using Mean-Shift Belief Propagation
[ { "docid": "8d0f80611b751565311ef84d5655802c", "text": "We present a computational model for periodic pattern perception based on the mathematical theory of crystallographic groups. In each N-dimensional Euclidean space, a finite number of symmetry groups can characterize the structures of an infinite variety of periodic patterns. In 2D space, there are seven frieze groups describing monochrome patterns that repeat along one direction and 17 wallpaper groups for patterns that repeat along two linearly independent directions to tile the plane. We develop a set of computer algorithms that \"understand\" a given periodic pattern by automatically finding its underlying lattice, identifying its symmetry group, and extracting its representative motifs. We also extend this computational model for near-periodic patterns using geometric AIC. Applications of such a computational model include pattern indexing, texture synthesis, image compression, and gait analysis.", "title": "" }, { "docid": "db8325925cb9fd1ebdcf7480735f5448", "text": "A general nonparametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure, the mean shift. We prove for discrete data the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and thus its utility in detecting the modes of the density. The equivalence of the mean shift procedure to the Nadaraya–Watson estimator from kernel regression and the robust M-estimators of location is also established. Algorithms for two low-level vision tasks, discontinuity preserving smoothing and image segmentation are described as applications. In these algorithms the only user set parameter is the resolution of the analysis, and either gray level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.", "title": "" } ]
[ { "docid": "277cf6fa4b5085287593ee2ca86e67fc", "text": "What can we learn of the human mind by examining its products? Here it is argued that a great deal can be learned, and that the study of human minds through its creations in the real world could be a promising field of study within the cognitive sciences. The city is a case in point. Since the beginning of cities human ideas about them have been dominated by geometric ideas, and the real history of cities has always oscillated between the geometric and the ‘organic’. Set in the context of the suggestion from cognitive neuroscience that we impose more geometric order on the world that it actually possesses, an intriguing question arises: what is the role of geometric intuition in how we understand cities and how we create them? Here we argue that all cities, the organic as well as the geometric, are pervasively ordered by geometric intuition, so that neither the forms of the cities nor their functioning can be understood without insight into their distinctive and pervasive emergent geometrical forms. The city is, as it is often said to be, the creation of economic and social processes, but, it is argued, these processes operate within an envelope of geometric possibility defined by human minds in its interaction with spatial laws that govern the relations between objects and spaces in the ambient world. Note: I have included only selected images here. All the examples will be shown fully in the presentation. Introduction: the Ideal and the Organic The most basic distinction we make about the form of cities is between the ideal and the organic. The ideal are geometric, the organic are not — or seem not to be. The geometric we define in terms of straight lines and 90 or 45 degree angles, the organic in terms of the lack of either (Fig. 1). The ideal seem to be top-down impositions of the human mind, the outcome of reason, often in association with power. We easily grasp their patterns when seen ‘all at once’. The organic we take to be the outcome of unplanned bottom up processes reflecting the", "title": "" }, { "docid": "3c444d8918a31831c2dc73985d511985", "text": "This paper presents methods for collecting and analyzing physiological data during real-world driving tasks to determine a driver's relative stress level. Electrocardiogram, electromyogram, skin conductance, and respiration were recorded continuously while drivers followed a set route through open roads in the greater Boston area. Data from 24 drives of at least 50-min duration were collected for analysis. The data were analyzed in two ways. Analysis I used features from 5-min intervals of data during the rest, highway, and city driving conditions to distinguish three levels of driver stress with an accuracy of over 97% across multiple drivers and driving days. Analysis II compared continuous features, calculated at 1-s intervals throughout the entire drive, with a metric of observable stressors created by independent coders from videotapes. The results show that for most drivers studied, skin conductivity and heart rate metrics are most closely correlated with driver stress level. These findings indicate that physiological signals can provide a metric of driver stress in future cars capable of physiological monitoring. 
Such a metric could be used to help manage noncritical in-vehicle information systems and could also provide a continuous measure of how different road and traffic conditions affect drivers.", "title": "" }, { "docid": "a06c9d681bb8a8b89a8ee64a53e3b344", "text": "This paper introduces CIEL, a universal execution engine for distributed data-flow programs. Like previous execution engines, CIEL masks the complexity of distributed programming. Unlike those systems, a CIEL job can make data-dependent control-flow decisions, which enables it to compute iterative and recursive algorithms. We have also developed Skywriting, a Turingcomplete scripting language that runs directly on CIEL. The execution engine provides transparent fault tolerance and distribution to Skywriting scripts and highperformance code written in other programming languages. We have deployed CIEL on a cloud computing platform, and demonstrate that it achieves scalable performance for both iterative and non-iterative algorithms.", "title": "" }, { "docid": "e3cb1c3dbed312688e75baa4ee047ff8", "text": "Aggregation of amyloid-β (Aβ) by self-assembly into oligomers or amyloids is a central event in Alzheimer's disease. Coordination of transition-metal ions, mainly copper and zinc, to Aβ occurs in vivo and modulates the aggregation process. A survey of the impact of Cu(II) and Zn(II) on the aggregation of Aβ reveals some general trends: (i) Zn(II) and Cu(II) at high micromolar concentrations and/or in a large superstoichiometric ratio compared to Aβ have a tendency to promote amorphous aggregations (precipitation) over the ordered formation of fibrillar amyloids by self-assembly; (ii) metal ions affect the kinetics of Aβ aggregations, with the most significant impact on the nucleation phase; (iii) the impact is metal-specific; (iv) Cu(II) and Zn(II) affect the concentrations and/or the types of aggregation intermediates formed; (v) the binding of metal ions changes both the structure and the charge of Aβ. The decrease in the overall charge at physiological pH increases the overall driving force for aggregation but may favor more precipitation over fibrillation, whereas the induced structural changes seem more relevant for the amyloid formation.", "title": "" }, { "docid": "760edd83045a80dbb2231c0ffbef2ea7", "text": "This paper proposes a method to modify a traditional convolutional neural network (CNN) into an interpretable CNN, in order to clarify knowledge representations in high conv-layers of the CNN. In an interpretable CNN, each filter in a high conv-layer represents a specific object part. Our interpretable CNNs use the same training data as ordinary CNNs without a need for any annotations of object parts or textures for supervision. The interpretable CNN automatically assigns each filter in a high conv-layer with an object part during the learning process. We can apply our method to different types of CNNs with various structures. The explicit knowledge representation in an interpretable CNN can help people understand the logic inside a CNN, i.e. what patterns are memorized by the CNN for prediction. Experiments have shown that filters in an interpretable CNN are more semantically meaningful than those in a traditional CNN. The code is available at https://github.com/zqs1022/interpretableCNN.", "title": "" }, { "docid": "2e9b98fbb1fa15020b374dbd48fb5adc", "text": "Recently, bipolar fuzzy sets have been studied and applied a bit enthusiastically and a bit increasingly. 
In this paper we prove that bipolar fuzzy sets and [0,1]^2-sets (which have been deeply studied) are actually cryptomorphic mathematical notions. Since research or modeling on real-world problems often involves multi-agent, multi-attribute, multi-object, multi-index, multi-polar information, uncertainty, and/or limit processes, we put forward (or highlight) the notion of the m-polar fuzzy set (actually, a [0,1]^m-set, which can be seen as a generalization of a bipolar fuzzy set, where m is an arbitrary ordinal number) and illustrate how many concepts that have been defined based on bipolar fuzzy sets, and many results related to these concepts, can be generalized to the case of m-polar fuzzy sets. We also give examples to show how to apply m-polar fuzzy sets in real world problems.", "title": "" }, { "docid": "dd0562e604e6db2c31132f1ffcd94d4f", "text": "Keywords: Data quality, Utility, Cost–benefit analysis, Data warehouse, CRM. Managing data resources at high quality is usually viewed as axiomatic. However, we suggest that, since the process of improving data quality should attempt to maximize economic benefits as well, high data quality is not necessarily economically-optimal. We demonstrate this argument by evaluating a microeconomic model that links the handling of data quality defects, such as outdated data and missing values, to economic outcomes: utility, cost, and net-benefit. The evaluation is set in the context of Customer Relationship Management (CRM) and uses large samples from a real-world data resource used for managing alumni relations. Within this context, our evaluation shows that all model parameters can be measured, and that all model-related assumptions are, largely, well supported. The evaluation confirms the assumption that the optimal quality level, in terms of maximizing net-benefits, is not necessarily the highest possible. Further, the evaluation process contributes some important insights for revising current data acquisition and maintenance policies. Maintaining data resources at a high quality level is a critical task in managing organizational information systems (IS). Data quality (DQ) significantly affects IS adoption and the success of data utilization [10,26]. Data quality management (DQM) has been examined from a variety of technical, functional, and organizational perspectives [22]. Achieving high quality is the primary objective of DQM efforts, and much research in DQM focuses on methodologies, tools and techniques for improving quality. Recent studies (e.g., [14,19]) have suggested that high DQ, although having clear merits, should not necessarily be the only objective to consider when assessing DQM alternatives, particularly in an IS that manages large datasets. As shown in these studies, maximizing economic benefits, based on the value gained from improving quality, and the costs involved in improving quality, may conflict with the target of achieving a high data quality level. Such findings inspire the need to link DQM decisions to economic outcomes and tradeoffs, with the goal of identifying more cost-effective DQM solutions. The quality of organizational data is rarely perfect as data, when captured and stored, may suffer from such defects as inaccuracies and missing values [22]. Its quality may further deteriorate as the real-world items that the data describes may change over time (e.g., a customer changing address, profession, and/or marital status).
A plethora of studies have underscored the negative effect of low …", "title": "" }, { "docid": "100b4df0a86534cba7078f4afc247206", "text": "Presented in this article is a review of manufacturing techniques and introduction of reconfigurable manufacturing systems; a new paradigm in manufacturing which is designed for rapid adjustment of production capacity and functionality, in response to new market conditions. A definition of reconfigurable manufacturing systems is outlined and an overview of available manufacturing techniques, their key drivers and enablers, and their impacts, achievements and limitations is presented. A historical review of manufacturing from the point-of-view of the major developments in the market, technology and sciences issues affecting manufacturing is provided. The new requirements for manufacturing are discussed and characteristics of reconfigurable manufacturing systems and their key role in future manufacturing are explained. The paper is concluded with a brief review of specific technologies and research issues related to RMSs.", "title": "" }, { "docid": "c6e04f33af1c82dffd2cb1b42dd4ac42", "text": "This paper is devoted to the study of discrete fractional calculus; the particular goal is to define and solve well-defined discrete fractional difference equations. For this purpose we first carefully develop the commutativity properties of the fractional sum and the fractional difference operators. Then a ν-th (0 < ν ≤ 1) order fractional difference equation is defined. A nonlinear problem with an initial condition is solved and the corresponding linear problem with constant coefficients is solved as an example. Further, the half-order linear problem with constant coefficients is solved with a method of undetermined coefficients and with a transform method.", "title": "" }, { "docid": "07992258f0d27693bf62689e85850230", "text": "BACKGROUND\nWiener-Granger causality (\"G-causality\") is a statistical notion of causality applicable to time series data, whereby cause precedes, and helps predict, effect. It is defined in both time and frequency domains, and allows for the conditioning out of common causal influences. Originally developed in the context of econometric theory, it has since achieved broad application in the neurosciences and beyond. Prediction in the G-causality formalism is based on VAR (vector autoregressive) modelling.\n\n\nNEW METHOD\nThe MVGC Matlab© Toolbox approach to G-causal inference is based on multiple equivalent representations of a VAR model by (i) regression parameters, (ii) the autocovariance sequence and (iii) the cross-power spectral density of the underlying process. It features a variety of algorithms for moving between these representations, enabling selection of the most suitable algorithms with regard to computational efficiency and numerical accuracy.\n\n\nRESULTS\nIn this paper we explain the theoretical basis, computational strategy and application to empirical G-causal inference of the MVGC Toolbox. We also show via numerical simulations the advantages of our Toolbox over previous methods in terms of computational accuracy and statistical inference.\n\n\nCOMPARISON WITH EXISTING METHOD(S)\nThe standard method of computing G-causality involves estimation of parameters for both a full and a nested (reduced) VAR model. 
The MVGC approach, by contrast, avoids explicit estimation of the reduced model, thus eliminating a source of estimation error and improving statistical power, and in addition facilitates fast and accurate estimation of the computationally awkward case of conditional G-causality in the frequency domain.\n\n\nCONCLUSIONS\nThe MVGC Toolbox implements a flexible, powerful and efficient approach to G-causal inference.", "title": "" }, { "docid": "9b519ba8a3b32d7b5b8a117b2d4d06ca", "text": "This article reviews the most current practice guidelines in the diagnosis and management of patients born with cleft lip and/or palate. Such patients frequently have multiple medical and social issues that benefit greatly from a team approach. Common challenges include feeding difficulty, nutritional deficiency, speech disorders, hearing problems, ear disease, dental anomalies, and both social and developmental delays, among others. Interdisciplinary evaluation and collaboration throughout a patient's development are essential.", "title": "" }, { "docid": "40c9250b3fb527425138bc41acf8fd4e", "text": "Noise pollution is a major problem in cities around the world. The current methods to assess it neglect to represent the real exposure experienced by the citizens themselves, and therefore could lead to wrong conclusions and a biased representations. In this paper we present a novel approach to monitor noise pollution involving the general public. Using their mobile phones as noise sensors, we provide a low cost solution for the citizens to measure their personal exposure to noise in their everyday environment and participate in the creation of collective noise maps by sharing their geo-localized and annotated measurements with the community. Our prototype, called NoiseTube, can be found online [1].", "title": "" }, { "docid": "62501a588824f70daaf4c2dbc49223da", "text": "ORB-SLAM2 is one of the better-known open source SLAM implementations available. However, the dependence of visual features causes it to fail in featureless environments. With the present work, we propose a new technique to improve visual odometry results given by ORB-SLAM2 using a tightly Sensor Fusion approach to integrate camera and odometer data. In this work, we use odometer readings to improve the tracking results by adding graph constraints between frames and introduce a new method for preventing the tracking loss. We test our method using three different datasets, and show an improvement in the estimated trajectory, allowing a continuous tracking without losses.", "title": "" }, { "docid": "ca58a73d73f4174367cdee6b5269379c", "text": "Data noising is an effective technique for regularizing neural network models. While noising is widely adopted in application domains such as vision and speech, commonly used noising primitives have not been developed for discrete sequencelevel settings such as language modeling. In this paper, we derive a connection between input noising in neural network language models and smoothing in ngram models. Using this connection, we draw upon ideas from smoothing to develop effective noising schemes. We demonstrate performance gains when applying the proposed schemes to language modeling and machine translation. Finally, we provide empirical analysis validating the relationship between noising and smoothing.", "title": "" }, { "docid": "60be5aa3a7984f0e057d92ae74fae916", "text": "Reading requires the interaction between multiple cognitive processes situated in distant brain areas. 
This makes the study of functional brain connectivity highly relevant for understanding developmental dyslexia. We used seed-voxel correlation mapping to analyse connectivity in a left-hemispheric network for task-based and resting-state fMRI data. Our main finding was reduced connectivity in dyslexic readers between left posterior temporal areas (fusiform, inferior temporal, middle temporal, superior temporal) and the left inferior frontal gyrus. Reduced connectivity in these networks was consistently present for 2 reading-related tasks and for the resting state, showing a permanent disruption which is also present in the absence of explicit task demands and potential group differences in performance. Furthermore, we found that connectivity between multiple reading-related areas and areas of the default mode network, in particular the precuneus, was stronger in dyslexic compared with nonimpaired readers.", "title": "" }, { "docid": "17935656561dd7af52b8aa2bf9d0fbf8", "text": "In this paper, we present a new method to build public-key Cryptosystem. The method is based on the state explosion problem occurred in the computing of average number of tokens in the places of Stochastic Petri Net (SPN). The reachable markings in the coverability tree of SPN are used as the encryption keys. Accordingly, multiple encryption keys can be generated, thus we can perform multiple encryption to get as strong security as we expect. The decryption is realized through solving a group of ordinary differential equations from Continuous Petri Net (CPN), which has the same underlying Petri net as that of SPN. The decipherment difficulty for attackers is in exponential order. The contribution of this paper is that we can use continuous mathematics to design cryptosystems besides discrete mathematics.", "title": "" }, { "docid": "f9fbbbebde6feede5cc34afb60f854cd", "text": "Software has bugs, and fixing those bugs pervades the software engineering process. It is folklore that bug fixes are often buggy themselves, resulting in bad fixes, either failing to fix a bug or creating new bugs. To confirm this folklore, we explored bug databases of the Ant, AspectJ, and Rhino projects, and found that bad fixes comprise as much as 9% of all bugs. Thus, detecting and correcting bad fixes is important for improving the quality and reliability of software. However, no prior work has systematically considered this bad fix problem, which this paper introduces and formalizes. In particular, the paper formalizes two criteria to determine whether a fix resolves a bug: coverage and disruption. The coverage of a fix measures the extent to which the fix correctly handles all inputs that may trigger a bug, while disruption measures the deviations from the program's intended behavior after the application of a fix. This paper also introduces a novel notion of distance-bounded weakest precondition as the basis for the developed practical techniques to compute the coverage and disruption of a fix.\n To validate our approach, we implemented Fixation, a prototype that automatically detects bad fixes for Java programs. When it detects a bad fix, Fixation returns an input that still triggers the bug or reports a newly introduced bug. Programmers can then use that bug-triggering input to refine or reformulate their fix. 
We manually extracted fixes drawn from real-world projects and evaluated Fixation against them: Fixation successfully detected the extracted bad fixes.", "title": "" }, { "docid": "753e17c4e44019b110e639d82d576e15", "text": "Recent advances in algorithms and graphics hardware have opened the possibility to render large terrain fields at interactive rates on commodity PCs. Due to these advances it is possible today to interactively synthesize artificial terrains using procedural descriptions. Our paper extends on this work by presenting a new GPU method for real-time editing, synthesis, and rendering of infinite landscapes exhibiting a wide range of geological structures. Our method builds upon the concept of projected grids to achieve near-optimal sampling of the landscape. We describe the integration of procedural shaders for multifractals into this approach, and we propose intuitive options to edit the shape of the resulting terrain. The method is multi-scale and adaptive in nature, and it has been extended towards infinite and spherical domains. In combination with geo-typical textures that automatically adapt to the shape being synthesized, a powerful method for the creation and rendering of realistic landscapes is presented.", "title": "" }, { "docid": "e6b9a05ecc3fd48df50aa769ce05b6a6", "text": "This paper presents an interactive exoskeleton device for hand rehabilitation, iHandRehab, which aims to satisfy the essential requirements for both active and passive rehabilitation motions. iHandRehab is comprised of exoskeletons for the thumb and index finger. These exoskeletons are driven by distant actuation modules through a cable/sheath transmission mechanism. The exoskeleton for each finger has 4 degrees of freedom (DOF), providing independent control for all finger joints. The joint motion is accomplished by a parallelogram mechanism so that the joints of the device and their corresponding finger joints have the same angular displacement when they rotate. Thanks to this design, the joint angles can be measured by sensors real time and high level motion control is therefore made very simple without the need of complicated kinematics. The paper also discusses important issues when the device is used by different patients, including its adjustable joint range of motion (ROM) and adjustable range of phalanx length (ROPL). Experimentally collected data show that the achieved ROM is close to that of a healthy hand and the ROPL covers the size of a typical hand, satisfying the size need of regular hand rehabilitation. In order to evaluate the performance when it works as a haptic device in active mode, the equivalent moment of inertia (MOI) of the device is calculated. The results prove that the device has low inertia which is critical in order to obtain good backdrivability. Experimental analysis shows that the influence of friction accounts for a large portion of the driving torque and warrants future investigation.", "title": "" }, { "docid": "b1c00b7801a51d11a8384e5977d7e041", "text": "In this article, we report the results of 2 studies that were conducted to investigate whether adult attachment theory explains employee behavior at work. In the first study, we examined the structure of a measure of adult attachment and its relations with measures of trait affectivity and the Big Five. 
In the second study, we examined the relations between dimensions of attachment and emotion regulation behaviors, turnover intentions, and supervisory reports of counterproductive work behavior and organizational citizenship behavior. Results showed that anxiety and avoidance represent 2 higher order dimensions of attachment that predicted these criteria (except for counterproductive work behavior) after controlling for individual difference variables and organizational commitment. The implications of these results for the study of attachment at work are discussed.", "title": "" } ]
scidocsrr
0f58724c0c6bc801bf7bcfc0fe5698c4
Automatic projector calibration with embedded light sensors
[ { "docid": "0c5dbac11af955a8261a4f3b8b5fe908", "text": "We describe a calibration and rendering technique for a projector that can render rectangular images under keystoned position. The projector utilizes a rigidly attached camera to form a stereo pair. We describe a very easy to use technique for calibration of the projector-camera pair using only black planar surfaces. We present an efficient rendering method to pre-warp images so that they appear correctly on the screen, and show experimental results.", "title": "" } ]
[ { "docid": "bd1c93dfc02d90ad2a0c7343236342a7", "text": "Osteochondritis dissecans (OCD) of the capitellum is an uncommon disorder seen primarily in the adolescent overhead athlete. Unlike Panner disease, a self-limiting condition of the immature capitellum, OCD is multifactorial and likely results from microtrauma in the setting of cartilage mismatch and vascular susceptibility. The natural history of OCD is poorly understood, and degenerative joint disease may develop over time. Multiple modalities aid in diagnosis, including radiography, MRI, and magnetic resonance arthrography. Lesion size, location, and grade determine management, which should attempt to address subchondral bone loss and articular cartilage damage. Early, stable lesions are managed with rest. Surgery should be considered for unstable lesions. Most investigators advocate arthroscopic débridement with marrow stimulation. Fragment fixation and bone grafting also have provided good short-term results, but concerns persist regarding the healing potential of advanced lesions. Osteochondral autograft transplantation appears to be promising and should be reserved for larger, higher grade lesions. Clinical outcomes and return to sport are variable. Longer-term follow-up studies are necessary to fully assess surgical management, and patients must be counseled appropriately.", "title": "" }, { "docid": "a1118a6310736fc36dbc70bd25bd5f28", "text": "Many studies have documented large and persistent productivity differences across producers, even within narrowly defined industries. This paper both extends and departs from the past literature, which focused on technological explanations for these differences, by proposing that demand-side features also play a role in creating the observed productivity variation. The specific mechanism investigated here is the effect of spatial substitutability in the product market. When producers are densely clustered in a market, it is easier for consumers to switch between suppliers (making the market in a certain sense more competitive). Relatively inefficient producers find it more difficult to operate profitably as a result. Substitutability increases truncate the productivity distribution from below, resulting in higher minimum and average productivity levels as well as less productivity dispersion. The paper presents a model that makes this process explicit and empirically tests it using data from U.S. ready-mixed concrete plants, taking advantage of geographic variation in substitutability created by the industry’s high transport costs. The results support the model’s predictions and appear robust. Markets with high demand density for ready-mixed concrete—and thus high concrete plant densities—have higher lower-bound and average productivity levels and exhibit less productivity dispersion among their producers.", "title": "" }, { "docid": "100c2517fd0d01242ca34a124ef4e694", "text": "Recently, the pervasiveness of street cameras for security and traffic monitoring opens new challenges to the computer vision technology to provide reliable monitoring schemes. These monitoring schemes require the basic processes of detecting and tracking objects, such as vehicles. However, object detection performance often suffers under occlusion. This work proposes a vehicle occlusion handling improvement of an existing traffic video monitoring system, which was later integrated. 
Two scenarios were considered in occlusion: indistinct and distinct - wherein the occluded vehicles have similar and dissimilar colors, respectively. K-means clustering using the HSV color space was used for distinct occlusion while a sliding window algorithm was used for indistinct occlusion. The proposed method also applies deep convolutional neural networks to further improve vehicle recognition and classification. The CNN model obtained a 97.21% training accuracy and a 98.27% testing accuracy. Moreover, it minimizes the effect of occlusion on vehicle detection and classification. It also identifies common vehicle types (bus, truck, van, sedan, SUV, jeepney, and motorcycle) rather than classifying these as small, medium and large vehicles, which were the previous categories. Despite these results, the occlusion handling still needs improvement. The disadvantage of the sliding window algorithm is that it requires a lot of memory and is time-consuming. If this research is deployed for more substantial purposes, the CNN model should be enhanced by training it with more varied vehicle images and the system should be implemented in real time. The results of this work can serve as a contribution to future work on traffic monitoring and air quality surveillance.", "title": "" }, { "docid": "85016bc639027363932f9adf7012d7a7", "text": "The output voltage ripple is one of the most significant system parameters in switch-mode power supplies. This ripple degrades the performance of application specific integrated circuits (ASICs). The most common way to reduce it is to use additional integrated low drop-out regulators (LDO) on the ASIC. However, this technique usually compromises the high system efficiency that is required for portable electronic systems. It also increases the design challenges of on-chip power management circuits and the area required for the LDOs. This work presents a low-power fully integrated 0.97 mm² DC-DC Buck converter with a tuned series LDO with 1 mV voltage ripple in a 0.25 μm BiCMOS process. The converter provides a power supply rejection ratio of more than 60 dB from 1 to 6 MHz and a load current range of 0...400 mA. A peak efficiency of 93.7% has been measured. For high light load efficiency, automatic mode operation is implemented. To decrease the form factor and costs, the external components count has been reduced to a single inductor of 1 μH and two external capacitors of 2 μF each.", "title": "" }, { "docid": "f1deb9134639fb8407d27a350be5b154", "text": "This work introduces a novel Convolutional Network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a ‘stacked hourglass’ network based on the successive steps of pooling and upsampling that are done to produce a final set of estimates. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.", "title": "" }, { "docid": "dee5489accb832615f63623bc445212f", "text": "In this paper a simulation-based scheduling system is discussed which was developed for a semiconductor backend facility. 
Apart from the usual dispatching rules, it uses heuristic search strategies for the optimization of the operating sequences. In practice, multiple objectives have to be considered, e.g., concurrent minimization of mean cycle time, maximization of throughput, and due date compliance. Because the simulation model is very complex and simulation time itself is not negligible, we emphasize increasing the convergence of the heuristic optimization methods, consequently reducing the number of necessary iterations. Several realized strategies are presented.", "title": "" }, { "docid": "dd62fd669d40571cc11d64789314dba1", "text": "It took the author 30 years to develop the Viable System Model, which sets out to explain how systems are viable – that is, capable of independent existence. He wanted to elucidate the laws of viability in order to facilitate the management task, and did so in a stream of papers and three (of his ten) books. Much misunderstanding about the VSM and its use seems to exist; especially its methodological foundations have been largely forgotten, while its major results have hardly been noted. This paper reflects on the history, nature and present status of the VSM, without seeking once again to expound the model in detail or to demonstrate its validity. It does, however, provide a synopsis, present the methodology and confront some highly contentious issues about both the managerial and scientific paradigms.", "title": "" }, { "docid": "5b43ea9e56c81e98c52b4041b0c32fdf", "text": "A novel broadband probe-type waveguide-to-microstrip transition adapted for operation in V band is presented. The transition is realized on a standard high frequency printed circuit board (PCB) fixed between a standard WR-15 waveguide and a simple backshort. The microstrip-fed probe is placed on the same side of the PCB as the backshort and acts as an impedance matching element. The proposed transition additionally includes two through holes implemented on the PCB in the center of the transition area. Thus, a significant part of the lossy PCB dielectric is removed from that area, providing wideband and low-loss performance of the transition. Measurements show that the designed transition has a bandwidth of 50–70 GHz for the −10 dB level of the reflection coefficient with a loss level of only 0.75 dB within the transition bandwidth.", "title": "" }, { "docid": "2b4caf3ecdcd78ac57d8acd5788084d2", "text": "In the age of information explosion and with the popularity of the Internet, users can connect to all kinds of social networking sites anytime and anywhere to interact and hold discussions with others. This phenomenon indicates that social networking sites have become a platform for interactions between companies and customers. Following this trend, the research in this paper mainly aims to analyze the interaction information between people on the social network, such as the fan pages a user has clicked, the user's wall messages, and the fan pages clicked by the user's friends. These three kinds of personal information are used for personal preference analysis, and from this huge amount of personal data the corresponding diverse groups for each preference category are identified. Personal preference information can then be used to diversify personalized advertising, product recommendation, and other services. 
Finally, through verification in an actual business setting, the research is shown to improve website page views by 11% and time on site by 15%, reduce the site bounce rate by 13.8%, and raise the product click-through rate by 43%, which further indicates that the results of this research fit the user's preferences.", "title": "" }, { "docid": "93a03403b2e44cddccfbe4e6b6e9d0ef", "text": "Safety and security are two key properties of Cyber-Physical Systems (CPS). Safety is aimed at protecting the systems from accidental failures in order to avoid hazards, while security is focused on protecting the systems from intentional attacks. They share identical goals – protecting CPS from failing. When aligned within a CPS, safety and security work well together in providing a solid foundation of an invincible CPS, while weak alignment may produce inefficient development and partially-protected systems. The need for such alignment has been recognized by the research community, the industry, as well as the International Society of Automation (ISA), which identified a need for alignment between safety and security standards ISA84 (IEC 61511) and ISA99 (IEC 62443). We propose an approach for aligning CPS safety and security at early development phases by synchronizing safety and security lifecycles based on ISA84 and ISA99 standards. The alignment is achieved by merging safety and security lifecycle phases, and developing a unified model – Failure-Attack-CounTermeasure (FACT) Graph. The FACT graph incorporates safety artefacts (fault trees and safety countermeasures) and security artefacts (attack trees and security countermeasures), and can be used during safety and security alignment analysis, as well as in later CPS development and operation phases, such as verification, validation, monitoring, and periodic safety and security assessment.", "title": "" }, { "docid": "6d594c21ff1632b780b510620484eb62", "text": "The last several years have seen intensive interest in exploring neural-network-based models for machine comprehension (MC) and question answering (QA). In this paper, we approach the problems by closely modelling questions in a neural network framework. We first introduce syntactic information to help encode questions. We then view and model different types of questions and the information shared among them as an adaptation task and propose adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results over a competitive baseline.", "title": "" }, { "docid": "8240df0c9498482522ef86b4b1e924ab", "text": "The advent of the IT-led era and the increased competition have forced companies to react to the new changes in order to remain competitive. Enterprise resource planning (ERP) systems offer distinct advantages in this new business environment as they lower operating costs, reduce cycle times and (arguably) increase customer satisfaction. This study examines, via an exploratory survey of 26 companies, the underlying reasons why companies choose to convert from conventional information systems (IS) to ERP systems and the changes brought in, particularly in the accounting process. The aim is not only to understand the changes and the benefits involved in adopting ERP systems compared with conventional IS, but also to establish the best way forward in future ERP applications. 
The empirical evidence confirms a number of changes in the accounting process introduced with the adoption of ERP systems.", "title": "" }, { "docid": "c95c46d75c2ff3c783437100ba06b366", "text": "Co-references are traditionally used when integrating data from different datasets. This approach has various benefits such as fault tolerance, ease of integration and traceability of provenance; however, it often results in the problem of entity consolidation, i.e., of objectively stating whether all the co-references do really refer to the same entity; and, when this is the case, whether they all convey the same intended meaning. Relying on the sole presence of a single equivalence (owl:sameAs) statement is often problematic and sometimes may even cause serious troubles. It has been observed that to indicate the likelihood of an equivalence one could use a numerically weighted measure, but the real hard questions of where precisely will these values come from arises. To answer this question we propose a methodology based on a graph clustering algorithm.", "title": "" }, { "docid": "c05d94b354b1d3a024a87e64d06245f1", "text": "This paper outlines an innovative game model for learning computational thinking (CT) skills through digital game-play. We have designed a game framework where students can practice and develop their skills in CT with little or no programming knowledge. We analyze how this game supports various CT concepts and how these concepts can be mapped to programming constructs to facilitate learning introductory computer programming. Moreover, we discuss the potential benefits of our approach as a support tool to foster student motivation and abilities in problem solving. As initial evaluation, we provide some analysis of feedback from a survey response group of 25 students who have played our game as a voluntary exercise. Structured empirical evaluation will follow, and the plan for that is briefly described.", "title": "" }, { "docid": "46adb4d23404c7f404ede6656ec8712f", "text": "Over the past decades, the importance of multimedia services such as video streaming has increased considerably. HTTP Adaptive Streaming (HAS) is becoming the de-facto standard for adaptive video streaming services. In HAS, a video is split into multiple segments and encoded at multiple quality levels. State-of-the-art HAS clients employ deterministic heuristics to dynamically adapt the requested quality level based on the perceived network and device conditions. Current HAS client heuristics are however hardwired to fit specific network configurations, making them less flexible to fit a vast range of settings. In this article, an adaptive Q-Learning-based HAS client is proposed. In contrast to existing heuristics, the proposed HAS client dynamically learns the optimal behavior corresponding to the current network environment. Considering multiple aspects of video quality, a tunable reward function has been constructed, giving the opportunity to focus on different aspects of the Quality of Experience, the quality as perceived by the end-user. The proposed HAS client has been thoroughly evaluated using a network-based simulator, investigating multiple reward configurations and Reinforcement Learning specific settings. 
The evaluations show that the proposed client can outperform standard HAS in the evaluated networking environments.", "title": "" }, { "docid": "588f49731321da292235ca0f36f04465", "text": "Hopes that the transformation of schools lies with exceptional leaders have proved both unrealistic and unsustainable. The idea of leadership as distributed across multiple people and situations has proven to be a more useful framework for understanding the realities of schools and how they might be improved. However, empirical work on how leadership is distributed within more and less successful schools is rare. This paper presents key concepts related to distributed leadership and illustrates them with an empirical study in a school-improvement context in which varying success was evident. Grounding the theory in this practice-context led to the identification of some risks and benefits of distributing leadership and to a challenge of some key concepts presented in earlier theorizing about leadership and its distribution.", "title": "" }, { "docid": "0bc40c2f559a8daa37fbf2026db2f411", "text": "A novel algorithm for calculating the QR decomposition (QRD) of a polynomial matrix is proposed. The algorithm operates by applying a series of polynomial Givens rotations to transform a polynomial matrix into an upper-triangular polynomial matrix and, therefore, amounts to a generalisation of the conventional Givens method for formulating the QRD of a scalar matrix. A simple example is given to demonstrate the algorithm, and also illustrates two clear advantages of this algorithm when compared to an existing method for formulating the decomposition. Firstly, it does not demonstrate the same unstable behaviour that is sometimes observed with the existing algorithm and secondly, it typically requires fewer iterations to converge. The potential application of the decomposition is highlighted in terms of broadband multi-input multi-output (MIMO) channel equalisation.", "title": "" }, { "docid": "a709d8ad8d8dd2226a90e0a60a5c36de", "text": "Intermediate online targeted advertising (IOTA) is a new business model for online targeted advertising. Posting the right banner advertisement to the right web user at the right time is what advertisement allocation does in the IOTA business model. This research uses probability theory to build a theoretical model based on a Bayesian network to optimize advertisement allocation. The Bayesian network model allows us to calculate the probability that a Web user will click the banner based on historical data. These can help us make optimal decisions in advertisement allocation. Data availability is also discussed in this paper. An experiment based on practical data is run to verify the feasibility of the Bayesian network model.", "title": "" }, { "docid": "8b3431783f1dc699be1153ad80348d3e", "text": "Quality Function Deployment (QFD) was conceived in Japan in the late 1960's, and introduced to America and Europe in 1983. This paper will provide a general overview of the QFD methodology and approach to product development. Once familiarity with the tool is established, a real-life application of the technique will be provided in a case study. 
The case study will illustrate how QFD was used to develop a new tape product and provide counsel to those that may want to implement the QFD process. Quality function deployment (QFD) is a “method to transform user demands into design quality, to deploy the functions forming quality, and to deploy methods for achieving the design quality into subsystems and component parts, and ultimately to specific elements of the manufacturing process.”", "title": "" } ]
scidocsrr
bb7d7a006b01c38d5d7ef8f463592690
The Language of Fake News: Opening the Black-Box of Deep Learning Based Detectors
[ { "docid": "8010b3fdc1c223202157419c4f61bacf", "text": "Thanks to information explosion, data for the objects of interest can be collected from increasingly more sources. However, for the same object, there usually exist conflicts among the collected multi-source information. To tackle this challenge, truth discovery, which integrates multi-source noisy information by estimating the reliability of each source, has emerged as a hot topic. Several truth discovery methods have been proposed for various scenarios, and they have been successfully applied in diverse application domains. In this survey, we focus on providing a comprehensive overview of truth discovery methods, and summarizing them from different aspects. We also discuss some future directions of truth discovery research. We hope that this survey will promote a better understanding of the current progress on truth discovery, and offer some guidelines on how to apply these approaches in application domains.", "title": "" } ]
[ { "docid": "bbb592c079f1cb2248ded2e249dcc943", "text": "A family of super deep networks, referred to as residual networks or ResNet [14], achieved record-beating performance in various visual tasks such as image recognition, object detection, and semantic segmentation. The ability to train very deep networks naturally pushed the researchers to use enormous resources to achieve the best performance. Consequently, in many applications super deep residual networks were employed for just a marginal improvement in performance. In this paper, we propose ∊-ResNet that allows us to automatically discard redundant layers, which produces responses that are smaller than a threshold ∊, without any loss in performance. The ∊-ResNet architecture can be achieved using a few additional rectified linear units in the original ResNet. Our method does not use any additional variables nor numerous trials like other hyperparameter optimization techniques. The layer selection is achieved using a single training process and the evaluation is performed on CIFAR-10, CIFAR-100, SVHN, and ImageNet datasets. In some instances, we achieve about 80% reduction in the number of parameters.", "title": "" }, { "docid": "e6922a113d619784bd902c06863b5eeb", "text": "Brake Analysis and NVH (Noise, Vibration and Harshness) Optimization have become critically important areas of application in the Automotive Industry. Brake Noise and Vibration costs approximately $1Billion/year in warranty work in Detroit alone. NVH optimization is now increasingly being used to predict the vehicle tactile and acoustic responses in relation to the established targets for design considerations. Structural optimization coupled with frequency response analysis is instrumental in driving the design process so that the design targets are met in a timely fashion. Usual design targets include minimization of vehicle weight, the adjustment of fundamental eigenmodes and the minimization of acoustic pressure or vibration at selected vehicle locations. Both, Brake Analysis and NVH Optimization are computationally expensive analyses involving eigenvalue calculations. From a computational sense and the viewpoint of MSC.Nastran, brake analysis exercises the CEAD (Complex Eigenvalue Analysis Dmap) module, while NVH optimization invokes the DSADJ (Design Sensitivity using ADJoint method DMAP) module. In this paper, two automotive applications are presented to demonstrate the performance improvements of the CEAD and DSADJ modules on NEC vector-parallel supercomputers. Dramatic improvements in the DSADJ module resulting in approx. 8-9 fold performance improvement as compared to MSC.Nastran V70 were observed for NVH optimization. Also, brake simulations and experiences at General Motors will be presented. This analysis method has been successfully applied to 4 different programs at GM and the simulation results were consistent with laboratory experiments on test vehicles.", "title": "" }, { "docid": "4eb205978a12b780dc26909bee0eebaa", "text": "This paper introduces CPE, the CIRCE Plugin for Eclipse. The CPE adds to the open-source development environment Eclipse the ability of writing and analysing software requirements written in natural language. Models of the software described by the requirements can be examined on-line during the requirements writing process. 
Initial UML models and skeleton Java code can be generated from the requirements, and imported into Eclipse for further editing and analysis.", "title": "" }, { "docid": "632fd895e8920cd9b25b79c9d4bd4ef4", "text": "In minimally invasive surgery, instruments are inserted from the exterior of the patient’s body into the surgical field inside the body through the minimum incision, resulting in limited visibility, accessibility, and dexterity. To address this problem, surgical instruments with articulated joints and multiple degrees of freedom have been developed. The articulations in currently available surgical instruments use mainly wire or link mechanisms. These mechanisms are generally robust and reliable, but the miniaturization of the mechanical parts required often results in problems with size, weight, durability, mechanical play, sterilization, and assembly costs. We thus introduced a compliant mechanism to a laparoscopic surgical instrument with multiple degrees of freedom at the tip. To show the feasibility of the concept, we developed a prototype articulated surgical instrument with two degrees of freedom that can perform grasping and bending movements. The developed prototype is roughly the same size as a conventional laparoscopic instrument, within a diameter of 4 mm. The elastic parts were fabricated from Ni-Ti alloy and SK-85M, and the rigid parts were fabricated from stainless steel and covered by 3D-printed ABS resin. The prototype was designed using iterative finite element method analysis, and has a minimal number of mechanical parts. The prototype showed hysteresis in the grasping movement, presumably due to friction; however, the prototype showed promising mechanical characteristics and was fully functional in two degrees of freedom. In addition, the prototype was capable of exerting a grasping force of over 15 N, which is sufficient for general laparoscopic procedures. The evaluation tests thus positively demonstrated the concept of the proposed mechanism. The prototype showed promising characteristics in the given mechanical evaluation experiments. Use of a compliant mechanism such as in our prototype may contribute to the advancement of surgical instruments in terms of simplicity, size, weight, dexterity, and affordability.", "title": "" }, { "docid": "309dee96492cf45ed2887701b27ad3ee", "text": "The objective of a systematic review is to obtain empirical evidence about the topic under review and to allow moving forward the body of knowledge of a discipline. Therefore, systematic reviewing is a tool we can apply in Software Engineering to develop well founded guidelines with the final goal of improving the quality of software systems. However, we still do not have as much experience in performing systematic reviews as in other disciplines like medicine, and therefore we need detailed guidance. This paper presents a proposal of an improved process to perform systematic reviews in software engineering. This process is the result of the tasks carried out in a first review and a subsequent update concerning the effectiveness of elicitation techniques.", "title": "" }, { "docid": "79eb0a39106679e80bd1d1edcd100d4d", "text": "Multi-agent predictive modeling is an essential step for understanding physical, social and team-play systems. Recently, Interaction Networks (INs) were proposed for the task of modeling multi-agent physical systems. One of the drawbacks of INs is scaling with the number of interactions in the system (typically quadratic or higher order in the number of agents). 
In this paper we introduce VAIN, a novel attentional architecture for multi-agent predictive modeling that scales linearly with the number of agents. We show that VAIN is effective for multiagent predictive modeling. Our method is evaluated on tasks from challenging multi-agent prediction domains: chess and soccer, and outperforms competing multi-agent approaches.", "title": "" }, { "docid": "bb93778655c0bfa525d9539f8f720da6", "text": "Small embedded integrated circuits (ICs) such as smart cards are vulnerable to the so-called side-channel attacks (SCAs). The attacker can gain information by monitoring the power consumption, execution time, electromagnetic radiation, and other information leaked by the switching behavior of digital complementary metal-oxide-semiconductor (CMOS) gates. This paper presents a digital very large scale integrated (VLSI) design flow to create secure power-analysis-attack-resistant ICs. The design flow starts from a normal design in a hardware description language such as very-high-speed integrated circuit (VHSIC) hardware description language (VHDL) or Verilog and provides a direct path to an SCA-resistant layout. Instead of a full custom layout or an iterative design process with extensive simulations, a few key modifications are incorporated in a regular synchronous CMOS standard cell design flow. The basis for power analysis attack resistance is discussed. This paper describes how to adjust the library databases such that the regular single-ended static CMOS standard cells implement a dynamic and differential logic style and such that 20 000+ differential nets can be routed in parallel. This paper also explains how to modify the constraints and rules files for the synthesis, place, and differential route procedures. Measurement-based experimental results have demonstrated that the secure digital design flow is a functional technique to thwart side-channel power analysis. It successfully protects a prototype Advanced Encryption Standard (AES) IC fabricated in an 0.18-mum CMOS", "title": "" }, { "docid": "54b094c7747c8ac0b1fbd1f93e78fd8e", "text": "It is essential for the marine navigator conducting maneuvers of his ship at sea to know future positions of himself and target ships in a specific time span to effectively solve collision situations. This article presents an algorithm of ship movement trajectory prediction, which, through data fusion, takes into account measurements of the ship's current position from a number of doubled autonomous devices. This increases the reliability and accuracy of prediction. The algorithm has been implemented in NAVDEC, a navigation decision support system and practically used on board ships.", "title": "" }, { "docid": "76f66971abcce88b670940c8cc237cfc", "text": "A challenging goal in neuroscience is to be able to read out, or decode, mental content from brain activity. Recent functional magnetic resonance imaging (fMRI) studies have decoded orientation, position and object category from activity in visual cortex. However, these studies typically used relatively simple stimuli (for example, gratings) or images drawn from fixed categories (for example, faces, houses), and decoding was based on previous measurements of brain activity evoked by those same stimuli or categories. To overcome these limitations, here we develop a decoding method based on quantitative receptive-field models that characterize the relationship between visual stimuli and fMRI activity in early visual areas. 
These models describe the tuning of individual voxels for space, orientation and spatial frequency, and are estimated directly from responses evoked by natural images. We show that these receptive-field models make it possible to identify, from a large set of completely novel natural images, which specific image was seen by an observer. Identification is not a mere consequence of the retinotopic organization of visual areas; simpler receptive-field models that describe only spatial tuning yield much poorer identification performance. Our results suggest that it may soon be possible to reconstruct a picture of a person’s visual experience from measurements of brain activity alone.", "title": "" }, { "docid": "4e23da50d4f1f0c4ecdbbf5952290c98", "text": "[Context and motivation] User stories are an increasingly popular textual notation to capture requirements in agile software development. [Question/Problem] To date there is no scientific evidence on the effectiveness of user stories. The goal of this paper is to explore how practicioners perceive this artifact in the context of requirements engineering. [Principal ideas/results] We explore perceived effectiveness of user stories by reporting on a survey with 182 responses from practitioners and 21 follow-up semi-structured interviews. The data shows that practitioners agree that using user stories, a user story template and quality guidelines such as the INVEST mnemonic improve their productivity and the quality of their work deliverables. [Contribution] By combining the survey data with 21 semi-structured follow-up interviews, we present 12 findings on the usage and perception of user stories by practitioners that employ user stories in their everyday work environment.", "title": "" }, { "docid": "a39d7490a353f845da616a06eedbb211", "text": "The explosive growth in online information is making it harder for large, globally distributed organizations to foster collaboration and leverage their intellectual assets. Recently, there has been a growing interest in the development of next generation knowledge management systems focussing on the artificial intelligence based technologies. We propose a generic knowledge management system architecture based on ADIPS (agent-based distributed information processing system) framework. This contributes to the stream of research on intelligent KM system to supports the creation, acquisition, management, and sharing of information that is widely distributed over a network system. It will benefit the users through the automatic provision of timely and relevant information with minimal effort to search for that information. Ontologies which stand out as a keystone of new generation of multiagent information systems, are used for the purpose of structuring the resources. This framework provides personalized information delivery, identifies items of interest to user proactively and enables unwavering management of distributed intellectual assets.", "title": "" }, { "docid": "462ab6cc559053625e7447994b9c4f43", "text": "The relationship of cortical structure and specific neuronal circuitry to global brain function, particularly its perturbations related to the development and progression of neuropathology, is an area of great interest in neurobehavioral science. Disruption of these neural networks can be associated with a wide range of neurological and neuropsychiatric disorders. 
Herein we review activity of the Default Mode Network (DMN) in neurological and neuropsychiatric disorders, including Alzheimer's disease, Parkinson's disease, Epilepsy (Temporal Lobe Epilepsy - TLE), attention deficit hyperactivity disorder (ADHD), and mood disorders. We discuss the implications of DMN disruptions and their relationship to the neurocognitive model of each disease entity, the utility of DMN assessment in clinical evaluation, and the changes of the DMN following treatment.", "title": "" }, { "docid": "7321e113293a7198bf88a1744a7ca6c9", "text": "It is widely claimed that research to discover and develop new pharmaceuticals entails high costs and high risks. High research and development (R&D) costs influence many decisions and policy discussions about how to reduce global health disparities, how much companies can afford to discount prices for lowerand middle-income countries, and how to design innovative incentives to advance research on diseases of the poor. High estimated costs also affect strategies for getting new medicines to the world’s poor, such as the advanced market commitment, which built high estimates into its inflated size and prices. This article takes apart the most detailed and authoritative study of R&D costs in order to show how high estimates have been constructed by industry-supported economists, and to show how much lower actual costs may be. Besides serving as an object lesson in the construction of ‘facts’, this analysis provides reason to believe that R&D costs need not be such an insuperable obstacle to the development of better medicines. The deeper problem is that current incentives reward companies to develop mainly new medicines of little advantage and compete for market share at high prices, rather than to develop clinically superior medicines with public funding so that prices could be much lower and risks to companies lower as well. BioSocieties advance online publication, 7 February 2011; doi:10.1057/biosoc.2010.40", "title": "" }, { "docid": "28f61d005f1b53ad532992e30b9b9b71", "text": "We propose a method for nonlinear residual echo suppression that consists of extracting spectral features from the far-end signal, and using an artificial neural network to model the residual echo magnitude spectrum from these features. We compare the modeling accuracy achieved by realizations with different features and network topologies, evaluating the mean squared error of the estimated residual echo magnitude spectrum. We also present a low complexity real-time implementation combining an offline-trained network with online adaptation, and investigate its performance in terms of echo suppression and speech distortion for real mobile phone recordings.", "title": "" }, { "docid": "22c85072db1f5b5a51b69fcabf01eb5e", "text": "Websites’ and mobile apps’ privacy policies, written in natural language, tend to be long and difficult to understand. Information privacy revolves around the fundamental principle of notice and choice, namely the idea that users should be able to make informed decisions about what information about them can be collected and how it can be used. Internet users want control over their privacy, but their choices are often hidden in long and convoluted privacy policy documents. Moreover, little (if any) prior work has been done to detect the provision of choices in text. We address this challenge of enabling user choice by automatically identifying and extracting pertinent choice language in privacy policies. 
In particular, we present a two-stage architecture of classification models to identify opt-out choices in privacy policy text, labelling common varieties of choices with a mean F1 score of 0.735. Our techniques enable the creation of systems to help Internet users to learn about their choices, thereby effectuating notice and choice and improving Internet privacy.", "title": "" }, { "docid": "7fd21ee95850fec1f1e00b766eebbc06", "text": "HP’s StoreAll with Express Query is a scalable commercial file archiving product that offers sophisticated file metadata management and search capabilities [3]. A new REST API enables fast, efficient searching to find all files that meet a given set of metadata criteria and the ability to tag files with custom metadata fields. The product brings together two significant systems: a scale out file system and a metadata database based on LazyBase [10]. In designing and building the combined product, we identified several real-world issues in using a pipelined database system in a distributed environment, and overcame several interesting design challenges that were not contemplated by the original research prototype. This paper highlights our experiences.", "title": "" }, { "docid": "2cd8c6284e802d810084dd85f55b8fca", "text": "Entities are essential elements of natural language. In this paper, we present methods for learning multi-level representations of entities on three complementary levels: character (character patterns in entity names extracted, e.g., by neural networks), word (embeddings of words in entity names) and entity (entity embeddings). We investigate state-of-theart learning methods on each level and find large differences, e.g., for deep learning models, traditional ngram features and the subword model of fasttext (Bojanowski et al., 2016) on the character level; for word2vec (Mikolov et al., 2013) on the word level; and for the order-aware model wang2vec (Ling et al., 2015a) on the entity level. We confirm experimentally that each level of representation contributes complementary information and a joint representation of all three levels improves the existing embedding based baseline for fine-grained entity typing by a large margin. Additionally, we show that adding information from entity descriptions further improves multi-level representations of entities.", "title": "" }, { "docid": "420e6237516e111b7db525ac61d829bc", "text": "The problem of human-computer interaction can be viewed as two powerful information processors (human and computer) attempting to communicate with each other via a narrowbandwidth, highly constrained interface [23]. To address it, we seek faster, more natural, and more convenient means for users and computers to exchange information. The user’s side is constrained by the nature of human communication organs and abilities; the computer’s is constrained only by input/output devices and interaction techniques that we can invent. Current technology has been stronger in the computer-to-user direction than user-to-computer, hence today’s user-computer dialogues are rather one-sided, with the bandwidth from the computer to the user far greater than that from user to computer. Using eye movements as a user-tocomputer communication medium can help redress this imbalance. 
This chapter describes the relevant characteristics of the human eye, eye tracking technology, how to design interaction techniques that incorporate eye movements into the user-computer dialogue in a convenient and natural way, and the relationship between eye movement interfaces and virtual environments.", "title": "" }, { "docid": "854f26f24986e729be06962952f9eaa2", "text": "This paper illustrates the result of land use/cover change in Dhaka Metropolitan of Bangladesh using topographic maps and multi-temporal remotely sensed data from 1960 to 2005. The Maximum likelihood supervised classification technique was used to extract information from satellite data, and post-classification change detection method was employed to detect and monitor land use/cover change. Derived land use/cover maps were further validated by using high resolution images such as SPOT, IRS, IKONOS and field data. The overall accuracy of land cover change maps, generated from Landsat and IRS-1D data, ranged from 85% to 90%. The analysis indicated that the urban expansion of Dhaka Metropolitan resulted in the considerable reduction of wetlands, cultivated land, vegetation and water bodies. The maps showed that between 1960 and 2005 built-up areas increased approximately 15,924 ha, while agricultural land decreased 7,614 ha, vegetation decreased 2,336 ha, wetland/lowland decreased 6,385 ha, and water bodies decreased about 864 ha. The amount of urban land increased from 11% (in 1960) to 344% in 2005. Similarly, the growth of landfill/bare soils category was about 256% in the same period. Much of the city's rapid growth in population has been accommodated in informal settlements with little attempt being made to limit the risk of environmental impairments. The study quantified the patterns of land use/cover change for the last 45 years for Dhaka Metropolitan that forms valuable resources for urban planners and decision makers to devise sustainable land use and environmental planning.", "title": "" }, { "docid": "d9176322068e6ca207ae913b1164b3da", "text": "Topic Detection and Tracking (TDT) is a variant of classification in which the classes are not known or fixed in advance. Consider for example an incoming stream of news articles or email messages that are to be classified by topic; new classes must be created as new topics arise. The problem is a challenging one for machine learning. Instances of new topics must be recognized as not belonging to any of the existing classes (detection), and instances of old topics must be correctly classified (tracking)—often with extremely little training data per class. This paper proposes a new approach to TDT based on probabilistic, generative models. Strong statistical techniques are used to address the many challenges: hierarchical shrinkage for sparse data, statistical \"garbage collection\" for new event detection, clustering in time to separate the different events of a common topic, and deterministic annealing for creating the hierarchy. Preliminary experimental results show promise.", "title": "" } ]
scidocsrr
350c240635f2e66163d25077c15771f0
Adaptive Information Extraction from Text by Rule Induction and Generalisation
[ { "docid": "7f65d625ca8f637a6e2e9cb7006d1778", "text": "Recent work in machine learning for information extraction has focused on two distinct sub-problems: the conventional problem of filling template slots from natural language text, and the problem of wrapper induction, learning simple extraction procedures (“wrappers”) for highly structured text such as Web pages produced by CGI scripts. For suitably regular domains, existing wrapper induction algorithms can efficiently learn wrappers that are simple and highly accurate, but the regularity bias of these algorithms makes them unsuitable for most conventional information extraction tasks. Boosting is a technique for improving the performance of a simple machine learning algorithm by repeatedly applying it to the training set with different example weightings. We describe an algorithm that learns simple, low-coverage wrapper-like extraction patterns, which we then apply to conventional information extraction problems using boosting. The result is BWI, a trainable information extraction system with a strong precision bias and F1 performance better than state-of-the-art techniques in many domains.", "title": "" } ]
[ { "docid": "0574f193736e10b13a22da2d9c0ee39a", "text": "Preliminary communication In food production industry, forecasting the timing of demands is crucial in planning production scheduling to satisfy customer needs on time. In the literature, several statistical models have been used in demand forecasting in Food and Beverage (F&B) industry and the choice of the most suitable forecasting model remains a central concern. In this context, this article aims to compare the performances between Trend Analysis, Decomposition and Holt-Winters (HW) models for the prediction of a time series formed by a group of jam and sherbet product demands. Data comprised the series of monthly sales from January 2013 to December 2014 obtained from a private company. As performance measures, metric analysis of the Mean Absolute Percentage Error (MAPE) is used. In this study, the HW and Decomposition models obtained better results regarding the performance metrics.", "title": "" }, { "docid": "fb83d0d3ea08cc1e21a1a22ba810dca0", "text": "conditions. The effect of the milk run logistics on the reduction of CO2 is also discussed. The promotion of Milk-Run logistics can be highly evaluated from the viewpoint of environmental policy.", "title": "" }, { "docid": "bfd834ddda77706264fa458302549325", "text": "Deep learning has emerged as a new methodology with continuous interests in artificial intelligence, and it can be applied in various business fields for better performance. In fashion business, deep learning, especially Convolutional Neural Network (CNN), is used in classification of apparel image. However, apparel classification can be difficult due to various apparel categories and lack of labeled image data for each category. Therefore, we propose to pre-train the GoogLeNet architecture on ImageNet dataset and fine-tune on our fine-grained fashion dataset based on design attributes. This will complement the small size of dataset and reduce the training time. After 10-fold experiments, the average final test accuracy results 62%.", "title": "" }, { "docid": "919ee3a62e28c1915d0be556a2723688", "text": "Bayesian data analysis includes but is not limited to Bayesian inference (Gelman et al., 2003; Kerman, 2006a). Here, we take Bayesian inference to refer to posterior inference (typically, the simulation of random draws from the posterior distribution) given a fixed model and data. Bayesian data analysis takes Bayesian inference as a starting point but also includes fitting a model to different datasets, altering a model, performing inferential and predictive summaries (including prior or posterior predictive checks), and validation of the software used to fit the model. The most general programs currently available for Bayesian inference are WinBUGS (BUGS Project, 2004) and OpenBugs, which can be accessed from R using the packages R2WinBUGS (Sturtz et al., 2005) and BRugs. In addition, various R packages exist that directly fit particular Bayesian models (e.g. MCMCPack, Martin and Quinn (2005)). In this note, we describe our own entry in the “inference engine” sweepstakes but, perhaps more importantly, describe the ongoing development of some R packages that perform other aspects of Bayesian data analysis.", "title": "" }, { "docid": "d506267b7b3eed0227d7c5e14b095223", "text": "Analytic tools are beginning to be largely employed, given their ability to rank, e.g., the visibility of social media users. 
Visibility that, in turns, can have a monetary value, since social media popular people usually either anticipate or establish trends that could impact the real world (at least, from a consumer point of view). The above rationale has fostered the flourishing of private companies providing statistical results for social media analysis. These results have been accepted, and largely diffused, by media without any apparent scrutiny, while Academia has moderately focused its attention on this phenomenon. In this paper, we provide evidence that analytic results provided by field-flagship companies are questionable (at least). In particular, we focus on Twitter and its \"fake followers\". We survey popular Twitter analytics that count the fake followers of some target account. We perform a series of experiments aimed at verifying the trustworthiness of their results. We compare the results of such tools with a machine-learning classifier whose methodology bases on scientific basis and on a sound sampling scheme. The findings of this work call for a serious re-thinking of the methodology currently used by companies providing analytic results, whose present deliveries seem to lack on any reliability.", "title": "" }, { "docid": "504a5058133f41dba0d6373c04c9b22c", "text": "In this thesis, I criticise traditional epistemic logics based on possible worlds semantics, inspired by Hintikka, as a framework for representing the beliefs and knowledge of agents. The traditional approach suffers from the logical omniscience problem: agents are modelled as knowing all consequences of their knowledge, which is not an admissible assumption when modelling real-world reasoning agents. My thesis proposes a new logical framework for representing the knowledge and beliefs of multiple resource-bounded agents. I begin by arguing that amendments to the possible worlds framework for modelling knowledge and belief cannot successfully overcome the logical omniscience problem in all its guises, and conclude that a so-called sentential account of belief and knowledge is to be preferred. Sentential accounts do receive support from the recent literature, but tend to conflate belief with explicit assent. In response to this problem, I consider Dennett’s intentional stance to belief ascription, holding that beliefs can only be ascribed to an agent from the point of view of a particular predictive strategy. However, Dennett’s account itself suffers from logical omniscience. I offer a sentential account of belief based on Dennett’s that avoids logical omniscience. Briefly, we should only ascribe those sentences to agents as beliefs that the agent could explicitly assent to, given its situation and an appropriate bound on the agent’s cognitive resources. In the latter half of the thesis, I concentrate on developing a logical framework that respects this philosophical account of belief. In order to capture resource-bounded reasoning, a fine-grained approach capable of modelling individual acts of inference is required. An agent’s reasoning process is modelled as a non-deterministic succession of belief states, where each new belief state differs from the previous one by a single act of inference. The logic I develop is a modal logic with many interesting model-theoretic properties and a simple, yet complete and decidable proof theory. I focus on the rule-based agent paradigm from contemporary AI, as well as investigating classical propositional reasoning-by-assumption. 
I investigate the complexity of the resulting logics and conclude by discussing various extensions to the framework, including more expressive languages and the handling of non-monotonic reasoning.", "title": "" }, { "docid": "84f150ecaf9fdb9a778c56a10a21de74", "text": "Peritoneal metastasis is a primary metastatic route for gastric cancers, and the mechanisms underlying this process are still unclear. Peritoneal mesothelial cells (PMCs) undergo mesothelial-to-mesenchymal transition (MMT) to provide a favorable environment for metastatic cancer cells. In this study, we investigated how the exosomal miR-21-5p induces MMT and promotes peritoneal metastasis. Gastric cancer (GC)-derived exosomes were identified by transmission electron microscopy and western blot analysis, then the uptake of exosomes was confirmed by PKH-67 staining. The expression of miR-21-5p and SMAD7 were measured by quantitative real-time polymerase chain reaction (qRT-PCR) and western blot, and the interactions between miR-21-5p and its target genes SMAD7 were confirmed by Luciferase reporter assays. The MMT of PMCs was determined by invasion assays, adhesion assays, immunofluorescent assay, and western blot. Meanwhile, mouse model of tumor peritoneal dissemination model was performed to investigate the role of exosomal miR-21-5p in peritoneal metastasis in vivo. We found that PMCs could internalize GC-derived exosomal miR-21-5p and led to increased levels of miR-21-5p in PMCs. Through various types of in vitro and in vivo assays, we confirmed that exosomal miR-21-5p was able to induce MMT of PMCs and promote tumor peritoneal metastasis. Moreover, our study revealed that this process was promoted by exosomal miR-21-5p through activating TGF-β/Smad pathway via targeting SMAD7. Altogether, our data suggest that exosomal miR-21-5p induces MMT of PMCs and promote cancer peritoneal dissemination by targeting SMAD7. The exosomal miR-21-5p may be a novel therapeutic target for GC peritoneal metastasis.", "title": "" }, { "docid": "43f3908d103ab31ab3a958c0ead9eaf8", "text": "Decision making and risk assessment are becoming a challenging task in oil and gas due to the risk related to the uncertainty and imprecision. This paper proposed a model for the risk assessment based on multi-criteria decision making (MCDM) method by integrating Fuzzy-set theory. In this model, decision makers (experts) provide their preference of risk assessment information in four categories; people, environment, asset, and reputation. A fuzzy set theory is used to evaluate likelihood, consequence and total risk level associated with each category. A case study is presented to demonstrate the proposed model. The results indicate that the proposed Fuzzy MCDM method has the potential to be used by decision makers in evaluating the risk based on multiple inputs and criteria.", "title": "" }, { "docid": "3bba595fa3a3cd42ce9b3ca052930d55", "text": "After about a decade of intense research, spurred by both economic and operational considerations, and by environmental concerns, energy efficiency has now become a key pillar in the design of communication networks. With the advent of the fifth generation of wireless networks, with millions more base stations and billions of connected devices, the need for energy-efficient system design and operation will be even more compelling. 
This survey provides an overview of energy-efficient wireless communications, reviews seminal and recent contribution to the state-of-the-art, including the papers published in this special issue, and discusses the most relevant research challenges to be addressed in the future.", "title": "" }, { "docid": "40c93dacc8318bc440d23fedd2acbd47", "text": "An electrical-balance duplexer uses series connected step-down transformers to enhance linearity and power handling capability by reducing the voltage swing across nonlinear components. Wideband, dual-notch Tx-to-Rx isolation is demonstrated experimentally with a planar inverted-F antenna. The 0.18μm CMOS prototype achieves >50dB isolation for 220MHz aggregated bandwidth or >40dB dual-notch isolation for 160MHz bandwidth, +49dBm Tx-path IIP3 and -48dBc ACLR1 for +27dBm at the antenna.", "title": "" }, { "docid": "d9e7f1461f687a4406f48e043c7a42e1", "text": "This paper addresses the design of reactive real-time embedded systems. Such systems are often heterogeneous in implementation technologies and design styles, for example by combining hardware ASICs with embedded software. The concurrent design process for such embedded systems involves solving the specification, validation, and synthesis problems. We review the variety of approaches to these problems that have been taken.", "title": "" }, { "docid": "4e86e02be77fe4e10c199efa1e9456c4", "text": "This paper presents EsdRank, a new technique for improving ranking using external semi-structured data such as controlled vocabularies and knowledge bases. EsdRank treats vocabularies, terms and entities from external data, as objects connecting query and documents. Evidence used to link query to objects, and to rank documents are incorporated as features between query-object and object-document correspondingly. A latent listwise learning to rank algorithm, Latent-ListMLE, models the objects as latent space between query and documents, and learns how to handle all evidence in a unified procedure from document relevance judgments. EsdRank is tested in two scenarios: Using a knowledge base for web search, and using a controlled vocabulary for medical search. Experiments on TREC Web Track and OHSUMED data show significant improvements over state-of-the-art baselines.", "title": "" }, { "docid": "ddecb743bc098a3e31ca58bc17810cf1", "text": "Maxout network is a powerful alternate to traditional sigmoid neural networks and is showing success in speech recognition. However, maxout network is prone to overfitting thus regularization methods such as dropout are often needed. In this paper, a stochastic pooling regularization method for max-out networks is proposed to control overfitting. In stochastic pooling, a distribution is produced for each pooling region by the softmax normalization of the piece values. The active piece is selected based on the distribution during training, and an effective probability weighting is conducted during testing. We apply the stochastic pooling maxout (SPM) networks within the DNN-HMM framework and evaluate its effectiveness under a low-resource speech recognition condition. On benchmark test sets, the SPM network yields 4.7-8.6% relative improvements over the baseline maxout network. 
Further evaluations show the superiority of stochastic pooling over dropout for low-resource speech recognition.", "title": "" }, { "docid": "8b5b4950177030e7664d57724acd52a3", "text": "With the fast development of industrial Internet of things (IIoT), a large amount of data is being generated continuously by different sources. Storing all the raw data in the IIoT devices locally is unwise considering that the end devices’ energy and storage spaces are strictly limited. In addition, the devices are unreliable and vulnerable to many threats because the networks may be deployed in remote and unattended areas. In this paper, we discuss the emerging challenges in the aspects of data processing, secure data storage, efficient data retrieval and dynamic data collection in IIoT. Then, we design a flexible and economical framework to solve the problems above by integrating the fog computing and cloud computing. Based on the time latency requirements, the collected data are processed and stored by the edge server or the cloud server. Specifically, all the raw data are first preprocessed by the edge server and then the time-sensitive data (e.g., control information) are used and stored locally. The non-time-sensitive data (e.g., monitored data) are transmitted to the cloud server to support data retrieval and mining in the future. A series of experiments and simulation are conducted to evaluate the performance of our scheme. The results illustrate that the proposed framework can greatly improve the efficiency and security of data storage and retrieval in IIoT.", "title": "" }, { "docid": "42029849d1e390fabf183bf10217a609", "text": "Robustness and discrimination are two of the most important objectives in image hashing. We incorporate ring partition and invariant vector distance to image hashing algorithm for enhancing rotation robustness and discriminative capability. As ring partition is unrelated to image rotation, the statistical features that are extracted from image rings in perceptually uniform color space, i.e., CIE L*a*b* color space, are rotation invariant and stable. In particular, the Euclidean distance between vectors of these perceptual features is invariant to commonly used digital operations to images (e.g., JPEG compression, gamma correction, and brightness/contrast adjustment), which helps in making image hash compact and discriminative. We conduct experiments to evaluate the efficiency with 250 color images, and demonstrate that the proposed hashing algorithm is robust at commonly used digital operations to images. In addition, with the receiver operating characteristics curve, we illustrate that our hashing is much better than the existing popular hashing algorithms at robustness and discrimination.", "title": "" }, { "docid": "dc096631d6412e06f305f83b2c8734bc", "text": "Many important search tasks require multiple search sessions to complete. Tasks such as travel planning, large purchases, or job searches can span hours, days, or even weeks. Inevitably, life interferes, requiring the searcher either to recover the \"state\" of the search manually (most common), or plan for interruption in advance (unlikely). The goal of this work is to better understand, characterize, and automatically detect search tasks that will be continued in the near future. To this end, we analyze a query log from the Bing Web search engine to identify the types of intents, topics, and search behavior patterns associated with long-running tasks that are likely to be continued. 
Using our insights, we develop an effective prediction algorithm that significantly outperforms both the previous state-of-the-art method, and even the ability of human judges, to predict future task continuation. Potential applications of our techniques would allow a search engine to pre-emptively \"save state\" for a searcher (e.g., by caching search results), perform more targeted personalization, and otherwise better support the searcher experience for interrupted search tasks.", "title": "" }, { "docid": "4755611b73a75515f03df4f14ab5a323", "text": "What kinds of social media users read junk news? We examine the distribution of the most significant sources of junk news in the three months before President Donald Trump’s first State of the Union Address. Drawing on a list of sources that consistently publish political news and information that is extremist, sensationalist, conspiratorial, masked commentary, fake news and other forms of junk news, we find that the distribution of such content is unevenly spread across the ideological spectrum. We demonstrate that (1) on Twitter, a network of Trump supporters shares the widest range of known junk news sources and circulates more junk news than all the other groups put together; (2) on Facebook, extreme hard right pages—distinct from Republican pages—share the widest range of known junk news sources and circulate more junk news than all the other audiences put together; (3) on average, the audiences for junk news on Twitter share a wider range of known junk news sources than audiences on Facebook’s public pages. POLARIZATION ON SOCIAL MEDIA Social media has become an important source of news and information in the United States. An increasing number of users consider platforms such as Twitter and Facebook a source of news. At important moments of political and military crises, social media users not only share substantial amounts of professional news, but also share extremist, sensationalist, conspiratorial, masked commentary, fake news and other forms of junk news. News on social media also reaches users indirectly, when they browse social media for other purposes. With more than 2 billion monthly active users, Facebook is the most popular social media network. The Reuters Digital News Report 2017 finds that 71% of US respondents are on Facebook, with 48% of US respondents using it for news. Given the central role that social media play in public life, these platforms have become a target for propaganda campaigns and information operations. In its review of the recent US elections, Twitter found that more than 50,000 automated accounts were linked to Russia. Facebook has revealed that content from the Russian Internet Research Agency has reached 126 million US citizens before the 2016 presidential election. Adding to reports about foreign influence campaigns, there is increasing evidence of a rise in polarization in the US news landscape in response to the 2016 election. Trust in news is strikingly divided across ideological lines, and an ecosystem of alternative news is flourishing, fueled by extremist, sensationalist, conspiratorial, masked commentary, fake news and other forms of junk news. At the same time, legacy publishers like the New York Times and the Washington Post have reported an increase in subscriptions. Social media algorithms can be purposefully used to distribute polarizing political content and misinformation. 
Pariser’s claim is that filter bubble effects—highly personalized algorithms that select what information to show in news feeds based on user preferences and behavior—have polarized public life. Vicario et al. find that misinformation on social media spreads among homogeneous and polarized groups. In January 2018, Facebook announced changes to its algorithm to prioritize trustworthy news, responding to ongoing public debate as to whether its algorithms promote junk content. Consequently, social polarization is a driver—just as much as it may be a result—of polarized social media news consumption patterns. In this study, we present a three-month study of junk news and political polarization among groups of US Twitter and Facebook users. In particular, we examine the distribution of posts and comments on public pages that contain links to junk news sources, across the political spectrum in the US. We then map the influence of central sources of junk political news and information that regularly publish content on hot button issues in the US. In particular, we consider patterns of interaction between accounts that have (i) shared junk news, (ii) and that have engaged with users who disseminate large amounts of misinformation about major political issues. SOCIAL NETWORK MAPPING Visualizing social network data is a powerful way of understanding how people share information and associate with one another. By using selected keywords, seed accounts, and known links to particular content, it is possible to construct large network visualizations. The underlying networks of", "title": "" }, { "docid": "66f20bd8c7370382f25c5a1a47065024", "text": "Detecting the road geometry at night time is an essential precondition to provide optimal illumination for the driver and the other traffic participants. In this paper we propose a novel approach to estimate the current road curvature based on three sensors: A far infrared camera, a near infrared camera and an imaging radar sensor. Various Convolutional Neural Networks with different configuration are trained for each input. By fusing the classifier responses of all three sensors, a further performance gain is achieved. To annotate the training and evaluation dataset without costly human interaction a fully automatic curvature annotation algorithm based on inertial navigation system is presented as well.", "title": "" }, { "docid": "ed39af901c58a8289229550084bc9508", "text": "Digital elevation maps are simple yet powerful representations of complex 3-D environments. These maps can be built and updated using various sensors and sensorial data processing algorithms. This paper describes a novel approach for modeling the dynamic 3-D driving environment, the particle-based dynamic elevation map, each cell in this map having, in addition to height, a probability distribution of speed in order to correctly describe moving obstacles. The dynamic elevation map is represented by a population of particles, each particle having a position, a height, and a speed. Particles move from one cell to another based on their speed vectors, and they are created, multiplied, or destroyed using an importance resampling mechanism. The importance resampling mechanism is driven by the measurement data provided by a stereovision sensor. The proposed model is highly descriptive for the driving environment, as it can easily provide an estimation of the height, speed, and occupancy of each cell in the grid. 
The system was proven robust and accurate in real driving scenarios, by comparison with ground truth data.", "title": "" }, { "docid": "fcaeb514732aa0a56dd8cabf8f1f2fd4", "text": "Several different factors contribute to injury severity in traffic accidents, such as driver characteristics, highway characteristics, vehicle characteristics, accident characteristics, and atmospheric factors. This paper shows the possibility of using Bayesian Networks (BNs) to classify traffic accidents according to their injury severity. BNs are capable of making predictions without the need for pre-assumptions and are used to make graphic representations of complex systems with interrelated components. This paper presents an analysis of 1536 accidents on rural highways in Spain, where 18 variables representing the aforementioned contributing factors were used to build 3 different BNs that classified the severity of accidents into slightly injured and killed or severely injured. The variables that best identify the factors that are associated with a killed or seriously injured accident (accident type, driver age, lighting and number of injuries) were identified by inference.", "title": "" } ]
scidocsrr
c206499ee7d419396f6752ce322dd65b
One switching cycle current control strategy for triple active bridge phase-shifted DC-DC converter
[ { "docid": "7f8756d40501ce704a1669598255a791", "text": "The cascaded H-bridge (CHB) topology is ideal for implementing large-scale converters for photovoltaic (PV) applications. The improved quality of output voltage waveforms, high efficiency due to transformer-less connection, and ability to employ multiple instances of a maximum power point tracking (MPPT) algorithm are just some advantages. An important disadvantage is the required over-rating to ensure balanced three-phase currents at times of unequal PV generation. Unequal generation occurs due to shading, temperature inhomogeneity, faulty H-bridges, etc. Capacitor voltage balancing under such conditions requires zero-sequence voltage injection which increases the required number of series connected H-bridges. However, leakage current and safety requirements often dictate a need for isolation between PV arrays and the cascaded converter. Therefore, this paper proposes a converter topology that avoids the cost of extra series connected H-bridges by extending the function of dc-dc converters that provide isolation. Second harmonic power oscillations seen in typical cascaded topologies can also be eliminated or reduced through use of the proposed topology. Simulation and experimental results are presented that confirm correct operation of the proposed approach.", "title": "" }, { "docid": "ef7f9c381e9d801ca97757e7dbadf439", "text": "An isolated three-port bidirectional dc-dc converter composed of three full-bridge cells and a high-frequency transformer is proposed in this paper. Besides the phase shift control managing the power flow between the ports, utilization of the duty cycle control for optimizing the system behavior is discussed and the control laws ensuring the minimum overall system losses are studied. Furthermore, the dynamic analysis and associated control design are presented. A control-oriented converter model is developed and the Bode plots of the control-output transfer functions are given. A control strategy with the decoupled power flow management is implemented to obtain fast dynamic response. Finally, a 1.5 kW prototype has been built to verify all theoretical considerations. The proposed topology and control is particularly relevant to multiple voltage electrical systems in hybrid electric vehicles and renewable energy generation systems.", "title": "" } ]
[ { "docid": "4a779f5e15cc60f131a77c69e09e54bc", "text": "We introduce a new iterative regularization procedure for inverse problems based on the use of Bregman distances, with particular focus on problems arising in image processing. We are motivated by the problem of restoring noisy and blurry images via variational methods by using total variation regularization. We obtain rigorous convergence results and effective stopping criteria for the general procedure. The numerical results for denoising appear to give significant improvement over standard models, and preliminary results for deblurring/denoising are very encouraging.", "title": "" }, { "docid": "adccd039cc54352eefd855567e8eeb62", "text": "In this paper, we propose a novel classification method for the four types of lung nodules, i.e., well-circumscribed, vascularized, juxta-pleural, and pleural-tail, in low dose computed tomography scans. The proposed method is based on contextual analysis by combining the lung nodule and surrounding anatomical structures, and has three main stages: an adaptive patch-based division is used to construct concentric multilevel partition; then, a new feature set is designed to incorporate intensity, texture, and gradient information for image patch feature description, and then a contextual latent semantic analysis-based classifier is designed to calculate the probabilistic estimations for the relevant images. Our proposed method was evaluated on a publicly available dataset and clearly demonstrated promising classification performance.", "title": "" }, { "docid": "cb086fa252f4db172b9c7ac7e1081955", "text": "Drivable free space information is vital for autonomous vehicles that have to plan evasive maneu vers in realtime. In this paper, we present a new efficient met hod for environmental free space detection with laser scann er based on 2D occupancy grid maps (OGM) to be used for Advance d Driving Assistance Systems (ADAS) and Collision Avo idance Systems (CAS). Firstly, we introduce an enhanced in verse sensor model tailored for high-resolution laser scanners f or building OGM. It compensates the unreflected beams and deals with the ray casting to grid cells accuracy and computationa l effort problems. Secondly, we introduce the ‘vehicle on a circle for grid maps’ map alignment algorithm that allows building more accurate local maps by avoiding the computationally expensive inaccurate operations of image sub-pixel shifting a nd rotation. The resulted grid map is more convenient for ADAS f eatures than existing methods, as it allows using less memo ry sizes, and hence, results into a better real-time performance. Thirdly, we present an algorithm to detect what we call the ‘in-sight edges’. These edges guarantee modeling the free space area with a single polygon of a fixed number of vertices regardless th e driving situation and map complexity. The results from real world experiments show the effectiveness of our approach. Keywords— Occupancy Grid Map; Static Free Space Detection; Advanced Driving Assistance Systems; las er canner; autonomous driving", "title": "" }, { "docid": "0eadb0a63cc4c9a5799a8fbb7db28943", "text": "Sentiment analysis seeks to characterize opinionated or evaluative aspects of natural language text. We suggest here that appraisal expression extraction should be viewed as a fundamental task in sentiment analysis. An appraisal expression is a textual unit expressing an evaluative stance towards some target. 
The task is to find and characterize the evaluative attributes of such elements. This paper describes a system for effectively extracting and disambiguating adjectival appraisal expressions in English outputting a generic representation in terms of their evaluative function in the text. Data mining on appraisal expressions gives meaningful and non-obvious insights.", "title": "" }, { "docid": "0e2cb28634a20c058f985065b53d34f6", "text": "Although the construct of comfort has been analysed, diagrammed in a two-dimensional content map, and operationalized as a holistic outcome, it has not been conceptualized within the context of a broader theory for the discipline of nursing. The theoretical work presented here utilizes an intra-actional perspective to develop a theory of comfort as a positive outcome of nursing care. A model of human press is the framework within which comfort is related to (a) interventions that enhance the state of comfort and (b) desirable subsequent outcomes of nursing care. The paper concludes with a discussion about the theory of comfort as a significant one for the discipline of nursing.", "title": "" }, { "docid": "0f4ac688367d3ea43643472b7d75ffc9", "text": "Many non-photorealistic rendering techniques exist to produce artistic effects from given images. Inspired by various artists, interesting effects can be produced by using a minimal rendering, where the minimum refers to the number of tones as well as the number and complexity of the primitives used for rendering. Our method is based on various computer vision techniques, and uses a combination of refined lines and blocks (potentially simplified), as well as a small number of tones, to produce abstracted artistic rendering with sufficient elements from the original image. We also considered a variety of methods to produce different artistic styles, such as colour and two-tone drawings, and use semantic information to improve renderings for faces. By changing some intuitive parameters a wide range of visually pleasing results can be produced. Our method is fully automatic. We demonstrate the effectiveness of our method with extensive experiments and a user study.", "title": "" }, { "docid": "26508379e41da5e3b38dd944fc9e4783", "text": "We describe the Photobook system, which is a set of interactive tools for browsing and searching images and image sequences. These tools differ from those used in standard image databases in that they make direct use of the image content rather than relying on annotations. Direct search on image content is made possible by use of semantics-preserving image compression, which reduces images to a small set of perceptually-significant coefficients. We describe three Photobook tools in particular: one that allows search based on grey-level appearance, one that uses 2-D shape, and a third that allows search based on textural properties.", "title": "" }, { "docid": "76454b3376ec556025201a2f694e1f1c", "text": "Recurrent neural networks (RNNs) provide state-of-the-art accuracy for performing analytics on datasets with sequence (e.g., language model). This paper studied a state-of-the-art RNN variant, Gated Recurrent Unit (GRU). We first proposed memoization optimization to avoid 3 out of the 6 dense matrix vector multiplications (SGEMVs) that are the majority of the computation in GRU. Then, we study the opportunities to accelerate the remaining SGEMVs using FPGAs, in comparison to 14-nm ASIC, GPU, and multi-core CPU. 
Results show that FPGA provides superior performance/Watt over CPU and GPU because FPGA's on-chip BRAMs, hard DSPs, and reconfigurable fabric allow for efficiently extracting fine-grained parallelisms from small/medium size matrices used by GRU. Moreover, newer FPGAs with more DSPs, on-chip BRAMs, and higher frequency have the potential to narrow the FPGA-ASIC efficiency gap.", "title": "" }, { "docid": "9b2291ef3e605d85b6d0dba326aa10ef", "text": "We propose a multi-objective method for avoiding premature convergence in evolutionary algorithms, and demonstrate a three-fold performance improvement over comparable methods. Previous research has shown that partitioning an evolving population into age groups can greatly improve the ability to identify global optima and avoid converging to local optima. Here, we propose that treating age as an explicit optimization criterion can increase performance even further, with fewer algorithm implementation parameters. The proposed method evolves a population on the two-dimensional Pareto front comprising (a) how long the genotype has been in the population (age); and (b) its performance (fitness). We compare this approach with previous approaches on the Symbolic Regression problem, sweeping the problem difficulty over a range of solution complexities and number of variables. Our results indicate that the multi-objective approach identifies the exact target solution more often that the age-layered population and standard population methods. The multi-objective method also performs better on higher complexity problems and higher dimensional datasets -- finding global optima with less computational effort.", "title": "" }, { "docid": "f47c60560e5dbb10eeb88e66e5fd1e52", "text": "Driver fatigue is an important factor in large number of accidents. There has been much work done in driver fatigue detection. This paper presents driver fatigue detection based on tracking the mouth and to study on monitoring and recognizing yawning. The authors proposed a method to locate and track driver’s mouth using cascade of classifiers proposed by Viola-Jones for faces. SVM is used to train the mouth and yawning images. During the fatigue detection mouth is detected from face images using cascade of classifiers. Then, SVM is used to classify the mouth and to detect yawning then alert Fatigue.", "title": "" }, { "docid": "1bd2fb70817734ec1a0e96d67ca5daaf", "text": "This paper proposed a new detection and prevention system against DDoS (Distributed Denial of Service) attack in SDN (software defined network) architecture, FL-GUARD (Floodlight-based guard system). Based on characteristics of SDN and centralized control, etc., FL-GUARD applies dynamic IP address binding to solve the problem of IP spoofing, and uses 3.3.2 C-SVM algorithm to detect attacks, and finally take advantage of the centralized control of software-defined network to issue flow tables to block attacks at the source port. The experiment results show the effectiveness of our system. The modular design of FL-GUARD lays a good foundation for the future improvement.", "title": "" }, { "docid": "a40d912e562c78ea8478b33383d837c9", "text": "Wireless mesh networking based on 802.11 wireless local area network (WLAN) has been actively explored for a few years. To improve the performance of WLAN mesh networks, a few new communication protocols have been developed in recent years. However, these solutions are usually proprietary and prevent WLAN mesh networks from interworking with each other. 
Thus, a standard becomes indispensable for WLAN mesh networks. To meet this need, an IEEE 802.11 task group, i.e., 802.11s, is specifying a standard for WLAN mesh networks. Although several standard drafts have been released by 802.11s, many issues still remain to be resolved. In order to understand what performance can be expected from the existing framework of 802.11s standard and what functionalities shall be added to 802.11s standard to improve performance, a detailed study on the existing 802.11s standard is given in this paper. The existing framework of 802.11s standard is first presented, followed by pointing out the challenging research issues that still exist in the current 802.11 standard. The purpose of this paper is to motivate other researchers to develop new scalable protocols for 802.11 wireless", "title": "" }, { "docid": "6f942f8ead4684f4943d1c82ea140b9a", "text": "This paper considers the problem of approximate nearest neighbor search in the compressed domain. We introduce polysemous codes, which offer both the distance estimation quality of product quantization and the efficient comparison of binary codes with Hamming distance. Their design is inspired by algorithms introduced in the 90’s to construct channel-optimized vector quantizers. At search time, this dual interpretation accelerates the search. Most of the indexed vectors are filtered out with Hamming distance, letting only a fraction of the vectors to be ranked with an asymmetric distance estimator. The method is complementary with a coarse partitioning of the feature space such as the inverted multi-index. This is shown by our experiments performed on several public benchmarks such as the BIGANN dataset comprising one billion vectors, for which we report state-of-the-art results for query times below 0.3 millisecond per core. Last but not least, our approach allows the approximate computation of the k-NN graph associated with the Yahoo Flickr Creative Commons 100M, described by CNN image descriptors, in less than 8 hours on a single machine.", "title": "" }, { "docid": "7754aa9e4978b28c00a739d4918e3b3a", "text": "This paper considers two dimensional valence-arousal model. Pictorial stimuli of International Affective Picture Systems were chosen for emotion elicitation. Physiological signals like, Galvanic Skin Response, Heart Rate, Respiration Rate and Skin Temperature were measured for accessing emotional responses. The experimental procedure uses non-invasive sensors for signal collection. A group of healthy volunteers was shown four types of emotional stimuli categorized as High Valence High Arousal, High Valence Low Arousal, Low Valence High Arousal and Low Valence Low Arousal for around thirty minutes for emotion elicitation. Linear and Quadratic Discriminant Analysis are used and compared to the emotional class classification. Classification of stimuli into one of the four classes has been attempted on the basis of measurements on responses of experimental subjects. If classification is restricted within the responses of a specific individual, the classification results show high accuracy. However, if the problem is extended to entire population, the accuracy drops significantly.", "title": "" }, { "docid": "fc9e653c8958a3d08c7f190e46ee592b", "text": "In this paper, we introduce and provide a short overview of nonnegative matrix factorization (NMF). 
Several aspects of NMF are discussed, namely, the application in hyperspectral imaging, geometry and uniqueness of NMF solutions, complexity, algorithms, and its link with extended formulations of polyhedra. In order to put NMF into perspective, the more general problem class of constrained low-rank matrix approximation problems is first briefly introduced.", "title": "" }, { "docid": "230d3cdc0bd444bfe5c910f32bd1a109", "text": "Programming is taught as foundation module at the beginning of undergraduate studies and/or during foundation year. Learning introductory programming languages such as Pascal, Basic / C (procedural) and C++ / Java (object oriented) requires learners to understand the underlying programming paradigm, syntax, logic and the structure. Learning to program is considered hard for novice learners and it is important to understand what makes learning program so difficult and how students learn.\n The prevailing focus on multimedia learning objects provides promising approach to create better knowledge transfer. This project aims to investigate: (a) students' perception in learning to program and the difficulties. (b) effectiveness of multimedia learning objects in learning introductory programming language in a face-to-face learning environment.", "title": "" }, { "docid": "b4586447ef1536f23793651fcd9d71b8", "text": "State monitoring is widely used for detecting critical events and abnormalities of distributed systems. As the scale of such systems grows and the degree of workload consolidation increases in Cloud data centers, node failures and performance interferences, especially transient ones, become the norm rather than the exception. Hence, distributed state monitoring tasks are often exposed to impaired communication caused by such dynamics on different nodes. Unfortunately, existing distributed state monitoring approaches are often designed under the assumption of always-online distributed monitoring nodes and reliable inter-node communication. As a result, these approaches often produce misleading results which in turn introduce various problems to Cloud users who rely on state monitoring results to perform automatic management tasks such as auto-scaling. This paper introduces a new state monitoring approach that tackles this challenge by exposing and handling communication dynamics such as message delay and loss in Cloud monitoring environments. Our approach delivers two distinct features. First, it quantitatively estimates the accuracy of monitoring results to capture uncertainties introduced by messaging dynamics. This feature helps users to distinguish trustworthy monitoring results from ones heavily deviated from the truth, yet significantly improves monitoring utility compared with simple techniques that invalidate all monitoring results generated with the presence of messaging dynamics. Second, our approach also adapts to non-transient messaging issues by reconfiguring distributed monitoring algorithms to minimize monitoring errors. Our experimental results show that, even under severe message loss and delay, our approach consistently improves monitoring accuracy, and when applied to Cloud application auto-scaling, outperforms existing state monitoring techniques in terms of the ability to correctly trigger dynamic provisioning.", "title": "" }, { "docid": "884d34e85254a37eb12772a8dc663f22", "text": "We present a practical approach to address the problem of unconstrained face alignment for a single image. 
In our unconstrained problem, we need to deal with large shape and appearance variations under extreme head poses and rich shape deformation. To equip cascaded regressors with the capability to handle global shape variation and irregular appearance-shape relation in the unconstrained scenario, we partition the optimisation space into multiple domains of homogeneous descent, and predict a shape as a composition of estimations from multiple domain-specific regressors. With a specially formulated learning objective and a novel tree splitting function, our approach is capable of estimating a robust and meaningful composition. In addition to achieving state-of-the-art accuracy over existing approaches, our framework is also an efficient solution (350 FPS), thanks to the on-the-fly domain exclusion mechanism and the capability of leveraging the fast pixel feature.", "title": "" }, { "docid": "40229eb3a95ec25c1c3247edbcc22540", "text": "The aim of this paper is the identification of a superordinate research framework for describing emerging IT-infrastructures within manufacturing, logistics and Supply Chain Management. This is in line with the thoughts and concepts of the Internet of Things (IoT), as well as with accompanying developments, namely the Internet of Services (IoS), Mobile Computing (MC), Big Data Analytics (BD) and Digital Social Networks (DSN). Furthermore, Cyber-Physical Systems (CPS) and their enabling technologies as a fundamental component of all these research streams receive particular attention. Besides of the development of an eponymous research framework, relevant applications against the background of the technological trends as well as potential areas of interest for future research, both raised from the economic practice's perspective, are identified.", "title": "" }, { "docid": "030b25a7c93ca38dec71b301843c7366", "text": "Simple grippers with one or two degrees of freedom are commercially available prosthetic hands; these pinch type devices cannot grasp small cylinders and spheres because of their small degree of freedom. This paper presents the design and prototyping of underactuated five-finger prosthetic hand for grasping various objects in daily life. Underactuated mechanism enables the prosthetic hand to move fifteen compliant joints only by one ultrasonic motor. The innovative design of this prosthetic hand is the underactuated mechanism optimized to distribute grasping force like those of humans who can grasp various objects robustly. Thanks to human like force distribution, the prototype of prosthetic hand could grasp various objects in daily life and heavy objects with the maximum ejection force of 50 N that is greater than other underactuated prosthetic hands.", "title": "" } ]
scidocsrr
afc589b1deb315bc30bdaa7ae78965e0
Semi-supervised Learning with Induced Word Senses for State of the Art Word Sense Disambiguation
[ { "docid": "b9cf32ef9364f55c5f59b4c6a9626656", "text": "Graph-based methods have gained attention in many areas of Natural Language Processing (NLP) including Word Sense Disambiguation (WSD), text summarization, keyword extraction and others. Most of the work in these areas formulate their problem in a graph-based setting and apply unsupervised graph clustering to obtain a set of clusters. Recent studies suggest that graphs often exhibit a hierarchical structure that goes beyond simple flat clustering. This paper presents an unsupervised method for inferring the hierarchical grouping of the senses of a polysemous word. The inferred hierarchical structures are applied to the problem of word sense disambiguation, where we show that our method performs significantly better than traditional graph-based methods and agglomerative clustering yielding improvements over state-of-the-art WSD systems based on sense induction.", "title": "" } ]
[ { "docid": "02447ce33a1fa5f8b4f156abf5d2f746", "text": "In this paper, we present TeleHuman, a cylindrical 3D display portal for life-size human telepresence. The TeleHuman 3D videoconferencing system supports 360 degree motion parallax as the viewer moves around the cylinder and optionally, stereoscopic 3D display of the remote person. We evaluated the effect of perspective cues on the conveyance of nonverbal cues in two experiments using a one-way telecommunication version of the system. The first experiment focused on how well the system preserves gaze and hand pointing cues. The second experiment evaluated how well the system conveys 3D body postural information. We compared 3 perspective conditions: a conventional 2D view, a 2D view with 360 degree motion parallax, and a stereoscopic view with 360 degree motion parallax. Results suggest the combined presence of motion parallax and stereoscopic cues significantly improved the accuracy with which participants were able to assess gaze and hand pointing cues, and to instruct others on 3D body poses. The inclusion of motion parallax and stereoscopic cues also led to significant increases in the sense of social presence and telepresence reported by participants.", "title": "" }, { "docid": "1420f07e309c114dfc264797ab82ceec", "text": "Introduction: The knowledge of clinical spectrum and epidemiological profile of critically ill children plays a significant role in the planning of health policies that would mitigate various factors related to the evolution of diseases prevalent in these sectors. The data collected enable prospective comparisons to be made with benchmark standards including regional and international units for the continuous pursuit of providing essential health care and improving the quality of patient care. Purpose: To study the clinical spectrum and epidemiological profile of the critically ill children admitted to the pediatric intensive care unit at a tertiary care center in South India. Materials and Methods: Descriptive data were collected retrospectively from the Hospital medical records between 2013 and 2016. Results: A total of 1833 patients were analyzed during the 3-year period, of which 1166 (63.6%) were males and 667 (36.4%) were females. A mean duration of stay in pediatric intensive care unit (PICU) was 2.21 ± 1.90 days. Respiratory system was the most common system affected in our study 738 (40.2 %). Acute poisoning in children constituted 99 patients (5.4%). We observed a mortality rate of 1.96%, with no association with age or sex. The mortality rate was highest in infants below 1-year of age (50%). In our study, the leading systemic cause for both admission and death was the respiratory system. Conclusion: This study analyses the epidemiological pattern of patients admitted to PICU in South India. We would also like to emphasize on public health prevention strategies and community health education which needs to be reinforced, especially in remote places and in rural India. This, in turn, would help in decreasing the cases of unknown bites, scorpion sting, poisoning and arthropod-borne illnesses, which are more prevalent in this part of the country.", "title": "" }, { "docid": "4b03aeb6c56cc25ce57282279756d1ff", "text": "Weighted signed networks (WSNs) are networks in which edges are labeled with positive and negative weights. WSNs can capture like/dislike, trust/distrust, and other social relationships between people. 
In this paper, we consider the problem of predicting the weights of edges in such networks. We propose two novel measures of node behavior: the goodness of a node intuitively captures how much this node is liked/trusted by other nodes, while the fairness of a node captures how fair the node is in rating other nodes' likeability or trust level. We provide axioms that these two notions need to satisfy and show that past work does not meet these requirements for WSNs. We provide a mutually recursive definition of these two concepts and prove that they converge to a unique solution in linear time. We use the two measures to predict the edge weight in WSNs. Furthermore, we show that when compared against several individual algorithms from both the signed and unsigned social network literature, our fairness and goodness metrics almost always have the best predictive power. We then use these as features in different multiple regression models and show that we can predict edge weights on 2 Bitcoin WSNs, an Epinions WSN, 2 WSNs derived from Wikipedia, and a WSN derived from Twitter with more accurate results than past work. Moreover, fairness and goodness metrics form the most significant feature for prediction in most (but not all) cases.", "title": "" }, { "docid": "634b30b81da7139082927109b4c22d5e", "text": "Compressive image recovery is a challenging problem that requires fast and accurate algorithms. Recently, neural networks have been applied to this problem with promising results. By exploiting massively parallel GPU processing architectures and oodles of training data, they can run orders of magnitude faster than existing techniques. However, these methods are largely unprincipled black boxes that are difficult to train and often-times specific to a single measurement matrix. It was recently demonstrated that iterative sparse-signal-recovery algorithms can be “unrolled” to form interpretable deep networks. Taking inspiration from this work, we develop a novel neural network architecture that mimics the behavior of the denoising-based approximate message passing (D-AMP) algorithm. We call this new network Learned D-AMP (LDAMP). The LDAMP network is easy to train, can be applied to a variety of different measurement matrices, and comes with a state-evolution heuristic that accurately predicts its performance. Most importantly, it outperforms the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and run time. At high resolutions, and when used with sensing matrices that have fast implementations, LDAMP runs over 50× faster than BM3D-AMP and hundreds of times faster than NLR-CS.", "title": "" }, { "docid": "23190a7fed3673af72563627245d57cd", "text": "We demonstrate an end-to-end question answering system that integrates BERT with the open-source Anserini information retrieval toolkit. In contrast to most question answering and reading comprehension models today, which operate over small amounts of input text, our system integrates best practices from IR with a BERT-based reader to identify answers from a large corpus of Wikipedia articles in an end-to-end fashion. 
We report large improvements over previous results on a standard benchmark test collection, showing that fine-tuning pretrained BERT with SQuAD is sufficient to achieve high accuracy in identifying answer spans.", "title": "" }, { "docid": "d2a205f2a6c6deff5d9560af8cf8ff7f", "text": "MIDI files, when paired with corresponding audio recordings, can be used as ground truth for many music information retrieval tasks. We present a system which can efficiently match and align MIDI files to entries in a large corpus of audio content based solely on content, i.e., without using any metadata. The core of our approach is a convolutional network-based cross-modality hashing scheme which transforms feature matrices into sequences of vectors in a common Hamming space. Once represented in this way, we can efficiently perform large-scale dynamic time warping searches to match MIDI data to audio recordings. We evaluate our approach on the task of matching a huge corpus of MIDI files to the Million Song Dataset. 1. TRAINING DATA FOR MIR Central to the task of content-based Music Information Retrieval (MIR) is the curation of ground-truth data for tasks of interest (e.g. timestamped chord labels for automatic chord estimation, beat positions for beat tracking, prominent melody time series for melody extraction, etc.). The quantity and quality of this ground-truth is often instrumental in the success of MIR systems which utilize it as training data. Creating appropriate labels for a recording of a given song by hand typically requires person-hours on the order of the duration of the data, and so training data availability is a frequent bottleneck in content-based MIR tasks. MIDI files that are time-aligned to matching audio can provide ground-truth information [8,25] and can be utilized in score-informed source separation systems [9, 10]. A MIDI file can serve as a timed sequence of note annotations (a "piano roll"). It is much easier to estimate information such as beat locations, chord labels, or predominant melody from these representations than from an audio signal. A number of tools have been developed for inferring this kind of information from MIDI files [6, 7, 17, 19]. Halevy et al. [11] argue that some of the biggest successes in machine learning came about because "...a large training set of the input-output behavior that we seek to automate is available to us in the wild."", "title": "" }, { "docid": "3079e9dc5846c73c57f8d7fbf35d94a1", "text": "The use of data mining techniques is rapidly increasing in research on educational domains. Educational data mining aims to discover hidden knowledge and patterns about student performance. This paper proposes a student performance prediction model by applying two classification algorithms, KNN and Naïve Bayes, to an educational data set of secondary schools collected from the ministry of education in Gaza Strip for the year 2015. The main objective of such classification is to help the ministry of education improve performance through early prediction of student performance. Teachers can also take proper measures to improve student learning.
The experimental results show that Naïve Bayes outperforms KNN, achieving the highest accuracy value of 93.6%.", "title": "" }, { "docid": "a91add591aacaa333e109d77576ba463", "text": "It has become essential to scrutinize and evaluate software development methodologies, mainly because of their increasing number and variety. Evaluation is required to gain a better understanding of the features, strengths, and weaknesses of the methodologies. The results of such evaluations can be leveraged to identify the methodology most appropriate for a specific context. Moreover, methodology improvement and evolution can be accelerated using these results. However, despite extensive research, there is still a need for a feature/criterion set that is general enough to allow methodologies to be evaluated regardless of their types. We propose a general evaluation framework which addresses this requirement. In order to improve the applicability of the proposed framework, all the features – general and specific – are arranged in a hierarchy along with their corresponding criteria. Providing different levels of abstraction enables users to choose the suitable criteria based on the context. Major evaluation frameworks for object-oriented, agent-oriented, and aspect-oriented methodologies have been studied and assessed against the proposed framework to demonstrate its reliability and validity.", "title": "" }, { "docid": "63755caaaad89e0ef6a687bb5977f5de", "text": "Keywords: rotor field orientation, stator field orientation, stator model, rotor model, MRAS, observers, Kalman filter, parasitic properties, field angle estimation. Abstract — Controlled induction motor drives without mechanical speed sensors at the motor shaft have the attractions of low cost and high reliability. To replace the sensor, the information on the rotor speed is extracted from measured stator voltages and currents at the motor terminals. Vector controlled drives require estimating the magnitude and spatial orientation of the fundamental magnetic flux waves in the stator or in the rotor. Open loop estimators or closed loop observers are used for this purpose. They differ with respect to accuracy, robustness, and sensitivity against model parameter variations. Dynamic performance and steady-state speed accuracy in the low speed range can be achieved by exploiting parasitic effects of the machine. The overview in this paper uses signal flow graphs of complex space vector quantities to provide an insightful description of the systems used in sensorless control of induction motors.", "title": "" }, { "docid": "a33e8a616955971014ceea9da1e8fcbe", "text": "Highlights: Auditory middle and late latency responses can be recorded reliably from ear-EEG. For sources close to the ear, ear-EEG has the same signal-to-noise-ratio as scalp. Ear-EEG is an excellent match for power spectrum-based analysis. A method for measuring electroencephalograms (EEG) from the outer ear, so-called ear-EEG, has recently been proposed. The method could potentially enable robust recording of EEG in natural environments. The objective of this study was to substantiate the ear-EEG method by using a larger population of subjects and several paradigms. For rigor, we considered simultaneous scalp and ear-EEG recordings with common reference. More precisely, 32 conventional scalp electrodes and 12 ear electrodes allowed a thorough comparison between conventional and ear electrodes, testing several different placements of references.
The paradigms probed auditory onset response, mismatch negativity, auditory steady-state response and alpha power attenuation. By comparing event related potential (ERP) waveforms from the mismatch response paradigm, the signal measured from the ear electrodes was found to reflect the same cortical activity as that from nearby scalp electrodes. It was also found that referencing the ear-EEG electrodes to another within-ear electrode affects the time-domain recorded waveform (relative to scalp recordings), but not the timing of individual components. It was furthermore found that auditory steady-state responses and alpha-band modulation were measured reliably with the ear-EEG modality. Finally, our findings showed that the auditory mismatch response was difficult to monitor with the ear-EEG. We conclude that ear-EEG yields similar performance as conventional EEG for spectrogram-based analysis, similar timing of ERP components, and equal signal strength for sources close to the ear. Ear-EEG can reliably measure activity from regions of the cortex which are located close to the ears, especially in paradigms employing frequency-domain analyses.", "title": "" }, { "docid": "a975ca76af34f5911191efa72d7f583c", "text": "Lattice-based cryptography is the use of conjectured hard problems on point lattices in Rn as the foundation for secure cryptographic systems. Attractive features of lattice cryptography include apparent resistance to quantum attacks (in contrast with most number-theoretic cryptography), high asymptotic efficiency and parallelism, security under worst-case intractability assumptions, and solutions to long-standing open problems in cryptography. This work surveys most of the major developments in lattice cryptography over the past ten years. The main focus is on the foundational short integer solution (SIS) and learning with errors (LWE) problems (and their more efficient ring-based variants), their provable hardness assuming the worst-case intractability of standard lattice problems, and their many cryptographic applications. C. Peikert. A Decade of Lattice Cryptography. Foundations and Trends © in Theoretical Computer Science, vol. 10, no. 4, pp. 283–424, 2014. DOI: 10.1561/0400000074. Full text available at: http://dx.doi.org/10.1561/0400000074", "title": "" }, { "docid": "bafdfa2ecaeb18890ab8207ef1bc4f82", "text": "This content analytic study investigated the approaches of two mainstream newspapers—The New York Times and the Chicago Tribune—to cover the gay marriage issue. The study used the Massachusetts legitimization of gay marriage as a dividing point to look at what kinds of specific political or social topics related to gay marriage were highlighted in the news media. The study examined how news sources were framed in the coverage of gay marriage, based upon the newspapers’ perspectives and ideologies. The results indicated that The New York Times was inclined to emphasize the topic of human equality related to the legitimization of gay marriage. After the legitimization, The New York Times became an activist for gay marriage. Alternatively, the Chicago Tribune highlighted the importance of human morality associated with the gay marriage debate. The perspective of the Chicago Tribune was not dramatically influenced by the legitimization. It reported on gay marriage in terms of defending American traditions and family values both before and after the gay marriage legitimization. Published by Elsevier Inc on behalf of Western Social Science Association. 
Gay marriage has been a controversial issue in the United States, especially since the Massachusetts Supreme Judicial Court officially authorized it. Although the practice has been widely discussed for several years, the acceptance of gay marriage does not seem to be concordant with mainstream American values. This is in part because gay marriage challenges the traditional value of the family institution. In the United States, people’s perspectives of and attitudes toward gay marriage have been mostly polarized. Many people optimistically ∗ Corresponding author. E-mail addresses: ppan@astate.edu, polinpanpp@gmail.com (P.-L. Pan). 0362-3319/$ – see front matter. Published by Elsevier Inc on behalf of Western Social Science Association. doi:10.1016/j.soscij.2010.02.002 P.-L. Pan et al. / The Social Science Journal 47 (2010) 630–645 631 support gay legal rights and attempt to legalize it in as many states as possible, while others believe legalizing homosexuality may endanger American society and moral values. A number of forces and factors may expand this divergence between the two polarized perspectives, including family, religion and social influences. Mass media have a significant influence on socialization that cultivates individual’s belief about the world as well as affects individual’s values on social issues (Comstock & Paik, 1991). Moreover, news media outlets become a strong factor in influencing people’s perceptions of and attitudes toward gay men and lesbians because the news is one of the most powerful media to influence people’s attitudes toward gay marriage (Anderson, Fakhfakh, & Kondylis, 1999). Some mainstream newspapers are considered as media elites (Lichter, Rothman, & Lichter, 1986). Furthermore, numerous studies have demonstrated that mainstream newspapers would produce more powerful influences on people’s perceptions of public policies and political issues than television news (e.g., Brians & Wattenberg, 1996; Druckman, 2005; Eveland, Seo, & Marton, 2002) Gay marriage legitimization, a specific, divisive issue in the political and social dimensions, is concerned with several political and social issues that have raised fundamental questions about Constitutional amendments, equal rights, and American family values. The role of news media becomes relatively important while reporting these public debates over gay marriage, because not only do the news media affect people’s attitudes toward gays and lesbians by positively or negatively reporting the gay and lesbian issue, but also shape people’s perspectives of the same-sex marriage policy by framing the recognition of gay marriage in the news coverage. The purpose of this study is designed to examine how gay marriage news is described in the news coverage of The New York Times and the Chicago Tribune based upon their divisive ideological framings. 1. Literature review 1.1. Homosexual news coverage over time Until the 1940s, news media basically ignored the homosexual issue in the United States (Alwood, 1996; Bennett, 1998). According to Bennett (1998), of the 356 news stories about gays and lesbians that appeared in Time and Newsweek from 1947 to 1997, the Kinsey report on male sexuality published in 1948 was the first to draw reporters to the subject of homosexuality. From the 1940s to 1950s, the homosexual issue was reported as a social problem. Approximately 60% of the articles described homosexuals as a direct threat to the strength of the U.S. military, the security of the U.S. 
government, and the safety of ordinary Americans during this period. By the 1960s, the gay and lesbian issue began to be discussed openly in the news media. However, these portrayals were covered in the context of crime stories and brief items that ridiculed effeminate men or masculine women (Miller, 1991; Streitmatter, 1993). In 1963, a cover story, “Let’s Push Homophile Marriage,” was the first to treat gay marriage as a matter of winning legal recognition (Stewart-Winter, 2006). However, this cover story did not cause people to pay positive attention to gay marriage, but raised national debates between punishment and pity of homosexuals. Specifically speaking, although numerous arti632 P.-L. Pan et al. / The Social Science Journal 47 (2010) 630–645 cles reported before the 1960s provided growing visibility for homosexuals, they were still highly critical of them (Bennett, 1998). In September 1967, the first hard-hitting gay newspaper—the Los Angeles Advocate—began publication. Different from other earlier gay and lesbian publications, its editorial mix consisted entirely of non-fiction materials, including news stories, editorials, and columns (Cruikshank, 1992; Streitmatter, 1993). The Advocate was the first gay publication to operate as an independent business financed entirely by advertising and circulation, rather than by subsidies from a membership organization (Streitmatter, 1995a, 1995b). After the Stonewall Rebellion in June 1969 in New York City ignited the modern phase of the gay and lesbian liberation movement, the number and circulation of the gay and lesbian press exploded (Streitmatter, 1998). Therefore, gay rights were discussed in the news media during the early 1970s. Homosexuals began to organize a series of political actions associated with gay rights, which was widely covered by the news media, while a backlash also appeared against the gay-rights movements, particularly among fundamentalist Christians (Alwood, 1996; Bennett, 1998). Later in the 1970s, the genre entered a less political phrase by exploring the dimensions of the developing culture of gay and lesbian. The news media plumbed the breadth and depth of topics ranging from the gay and lesbian sensibility in art and literature to sex, spirituality, personal appearance, dyke separatism, lesbian mothers, drag queen, leather men, and gay bathhouses (Streitmatter, 1995b). In the 1980s, the gay and lesbian issue confronted a most formidable enemy when AIDS/HIV, one of the most devastating diseases in the history of medicine, began killing gay men at an alarming rate. Accordingly, AIDS/HIV became the biggest gay story reported by the news media. Numerous news media outlets linked the AIDS/HIV epidemic with homosexuals, which implied the notion of the promiscuous gay and lesbian lifestyle. The gays and lesbians, therefore, were described as a dangerous minority in the news media during the 1980s (Altman, 1986; Cassidy, 2000). In the 1990s, issues about the growing visibility of gays and lesbians and their campaign for equal rights were frequently covered in the news media, primarily because of AIDS and the debate over whether the ban on gays in the military should be lifted. The increasing visibility of gay people resulted in the emergence of lifestyle magazines (Bennett, 1998; Streitmatter, 1998). The Out, a lifestyle magazine based in New York City but circulated nationally, led the new phase, since its upscale design and fashion helped attract mainstream advertisers. 
This magazine, which devalued news in favor of stories on entertainment and fashions, became the first gay and lesbian publication sold in mainstream bookstores and featured on the front page of The New York Times (Streitmatter, 1998). From the late 1990s to the first few years of the 2000s, homosexuals were described as a threat to children’s development as well as a danger to family values in the news media. The legitimacy of same-sex marriage began to be discussed, because news coverage dominated the issue of same-sex marriage more frequently than before (Bennett, 1998). According to Gibson (2004), The New York Times first announced in August 2002 that its Sunday Styles section would begin publishing reports of same-sex commitment ceremonies along with the traditional heterosexual wedding announcements. Moreover, many newspapers joined this trend. Gibson (2004) found that not only the national newspapers, such as The New York Times, but also other regional newspapers, such as the Houston Chronicle and the Seattle Times, reported surprisingly large P.-L. Pan et al. / The Social Science Journal 47 (2010) 630–645 633 number of news stories about the everyday lives of gays and lesbians, especially since the Massachusetts Supreme Judicial Court ruled in November 2003 that same-sex couples had the same right to marry as heterosexuals. Previous studies investigated the increased amount of news coverage of gay and lesbian issues in the past six decades, but they did not analyze how homosexuals are framed in the news media in terms of public debates on the gay marriage issue. These studies failed to examine how newspapers report this national debate on gay marriage as well as what kinds of news frames are used in reporting this controversial issue. 1.2. Framing gay and lesbian partnersh", "title": "" }, { "docid": "455e3f0c6f755d78ecafcdff14c46014", "text": "BACKGROUND\nIn neonatal and early childhood surgeries such as meningomyelocele repairs, closing deep wounds and oncological treatment, tensor fasciae lata (TFL) flaps are used. However, there are not enough data about structural properties of TFL in foetuses, which can be considered as the closest to neonates in terms of sampling. This study's main objective is to gather data about morphological structures of TFL in human foetuses to be used in newborn surgery.\n\n\nMATERIALS AND METHODS\nFifty formalin-fixed foetuses (24 male, 26 female) with gestational age ranging from 18 to 30 weeks (mean 22.94 ± 3.23 weeks) were included in the study. TFL samples were obtained by bilateral dissection and then surface area, width and length parameters were recorded. Digital callipers were used for length and width measurements whereas surface area was calculated using digital image analysis software.\n\n\nRESULTS\nNo statistically significant differences were found in terms of numerical value of parameters between sides and sexes (p > 0.05). Linear functions for TFL surface area, width, anterior and posterior margin lengths were calculated as y = -225.652 + 14.417 × age (weeks), y = -5.571 + 0.595 × age (weeks), y = -4.276 + 0.909 × age (weeks), and y = -4.468 + 0.779 × age (weeks), respectively.\n\n\nCONCLUSIONS\nLinear functions for TFL surface area, width and lengths can be used in designing TFL flap dimensions in newborn surgery. 
In addition, using those described linear functions can also be beneficial in prediction of TFL flap dimensions in autopsy studies.", "title": "" }, { "docid": "c7a15659f2fe5f67da39b77a3eb19549", "text": "Privacy breaches and their regulatory implications have attracted corporate attention in recent times. An often overlooked cause of privacy breaches is human error. In this study, we first apply a model based on the widely accepted GEMS error typology to analyze publicly reported privacy breach incidents within the U.S. Then, based on an examination of the causes of the reported privacy breach incidents, we propose a defense-in-depth solution strategy founded on error avoidance, error interception, and error correction. Finally, we illustrate the application of the proposed strategy to managing human error in the case of the two leading causes of privacy breach incidents. This study finds that mistakes in the information processing stage constitute the most cases of human errorrelated privacy breach incidents, clearly highlighting the need for effective policies and their enforcement in organizations. a 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "9a75dde1045b317d06f84b708f45bde2", "text": "News and twitter are sometimes closely correlated, while sometimes each of them has quite independent flow of information, due to the difference of the concerns of their information sources. In order to effectively capture the nature of those two text streams, it is very important to model both their correlation and their difference. This paper first models their correlation by applying a time series topic model to the document stream of the mixture of time series news and twitter. Next, we divide news streams and twitter into distinct two series of document streams, and then we apply our model of bursty topic detection based on the Kleinberg’s burst detection model. This approach successfully models the difference of the two time series topic models of news and twitter as each having independent information source and its own concern.", "title": "" }, { "docid": "2214493b373886c02f67ad9e411cfe66", "text": "We identify emerging phenomena of distributed liveness, involving new relationships among performers, audiences, and technology. Liveness is a recent, technology-based construct, which refers to experiencing an event in real-time with the possibility for shared social realities. Distributed liveness entails multiple forms of physical, spatial, and social co-presence between performers and audiences across physical and virtual spaces. We interviewed expert performers about how they experience liveness in physically co-present and distributed settings. Findings show that distributed performances and technology need to support flexible social co-presence and new methods for sensing subtle audience responses and conveying engagement abstractly.", "title": "" }, { "docid": "094bb78ae482f2ad4877e53a446236f0", "text": "While the amount of available information on the Web is increasing rapidly, the problem of managing it becomes more difficult. We present two applications, Thinkbase and Thinkpedia, which aim to make Web content more accessible and usable by utilizing visualizations of the semantic graph as a means to navigate and explore large knowledge repositories. 
Both of our applications implement a similar concept: They extract semantically enriched contents from a large knowledge spaces (Freebase and Wikipedia respectively), create an interactive graph-based representation out of it, and combine them into one interface together with the original text based content. We describe the design and implementation of our applications, and provide a discussion based on an informal evaluation. Author", "title": "" }, { "docid": "933e51f6d297ecb1393688f4165079e1", "text": "Image clustering is one of the challenging tasks in machine learning, and has been extensively used in various applications. Recently, various deep clustering methods has been proposed. These methods take a two-stage approach, feature learning and clustering, sequentially or jointly. We observe that these works usually focus on the combination of reconstruction loss and clustering loss, relatively little work has focused on improving the learning representation of the neural network for clustering. In this paper, we propose a deep convolutional embedded clustering algorithm with inception-like block (DCECI). Specifically, an inception-like block with different type of convolution filters are introduced in the symmetric deep convolutional network to preserve the local structure of convolution layers. We simultaneously minimize the reconstruction loss of the convolutional autoencoders with inception-like block and the clustering loss. Experimental results on multiple image datasets exhibit the promising performance of our proposed algorithm compared with other competitive methods.", "title": "" }, { "docid": "590931691f16239904733befab24e70a", "text": "In a neural network, neuron computation is achieved through the summation of input signals fed by synaptic connections. The synaptic activity (weight) is dictated by the synchronous firing of neurons, inducing potentiation/depression of the synaptic connection. This learning function can be supported by the resistive switching memory (RRAM), which changes its resistance depending on the amplitude, the pulse width and the bias polarity of the applied signal. This work shows a new synapse circuit comprising a MOS transistor as a selector and a RRAM as a variable resistance, displaying spike-timing dependent plasticity (STDP) similar to the one originally experienced in biological neural networks. We demonstrate long-term potentiation and long-term depression by simulations with an analytical model of resistive switching. Finally, the experimental demonstration of the new STDP scheme is presented.", "title": "" }, { "docid": "4646848b959a356bb4d7c0ef14d53c2c", "text": "Consumerization of IT (CoIT) is a key trend affecting society at large, including organizations of all kinds. A consensus about the defining aspects of CoIT has not yet been reached. Some refer to CoIT as employees bringing their own devices and technologies to work, while others highlight different aspects. While the debate about the nature and consequences of CoIT is still ongoing, many definitions have already been proposed. In this paper, we review these definitions and what is known about CoIT thus far. To guide future empirical research in this emerging area, we also review several established theories that have not yet been applied to CoIT but in our opinion have the potential to shed a deeper understanding on CoIT and its consequences. 
We discuss which elements of the reviewed theories are particularly relevant for understanding CoIT and thereby provide targeted guidance for future empirical research employing these theories. Overall, our paper may provide a useful starting point for addressing the lack of theorization in the emerging CoIT literature stream and stimulate discussion about theorizing CoIT.", "title": "" } ]
scidocsrr
3644efec15cf1dfc6d3a999022ba2de1
MakerShoe: towards a wearable e-textile construction kit to support creativity, playful making, and self-expression
[ { "docid": "1c5cba8f3533880b19e9ef98a296ef57", "text": "Internal organs are hidden and untouchable, making it difficult for children to learn their size, position, and function. Traditionally, human anatomy (body form) and physiology (body function) are taught using techniques ranging from worksheets to three-dimensional models. We present a new approach called BodyVis, an e-textile shirt that combines biometric sensing and wearable visualizations to reveal otherwise invisible body parts and functions. We describe our 15-month iterative design process including lessons learned through the development of three prototypes using participatory design and two evaluations of the final prototype: a design probe interview with seven elementary school teachers and three single-session deployments in after-school programs. Our findings have implications for the growing area of wearables and tangibles for learning.", "title": "" } ]
[ { "docid": "dd11a04de8288feba2b339cca80de41c", "text": "A methodology for the automatic design optimization of analog circuits is presented. A non-fixed topology approach is followed. A symbolic simulator, called ISAAC, generates an analytic AC model for any analog circuit, time-continuous or time-discrete, CMOS or bipolar. ISAAC's expressions can be fully symbolic or mixed numeric-symbolic, exact or simplified. The model is passed to the design optimization program OPTIMAN. For a user selected circuit topology, the independent design variables are automatically extracted and OPTIMAN sizes all elements to satisfy the performance constraints, thereby optimizing a user defined design objective. The optimization algorithm is simulated annealing. Practical examples show that OPTIMAN quickly designs analog circuits, closely meeting the specifications, and that it is a flexible and reliable design and exploration tool.", "title": "" }, { "docid": "d337f149d3e52079c56731f4f3d8ea3e", "text": "Contextual word representations derived from pre-trained bidirectional language models (biLMs) have recently been shown to provide significant improvements to the state of the art for a wide range of NLP tasks. However, many questions remain as to how and why these models are so effective. In this paper, we present a detailed empirical study of how the choice of neural architecture (e.g. LSTM, CNN, or self attention) influences both end task accuracy and qualitative properties of the representations that are learned. We show there is a tradeoff between speed and accuracy, but all architectures learn high quality contextual representations that outperform word embeddings for four challenging NLP tasks. Additionally, all architectures learn representations that vary with network depth, from exclusively morphological based at the word embedding layer through local syntax based in the lower contextual layers to longer range semantics such coreference at the upper layers. Together, these results suggest that unsupervised biLMs, independent of architecture, are learning much more about the structure of language than previously appreciated.", "title": "" }, { "docid": "5b41a7c287b54b16e9d791cb62d7aa5a", "text": "Recent evidence demonstrates that children are selective in their social learning, preferring to learn from a previously accurate speaker than from a previously inaccurate one. We examined whether children assessing speakers' reliability take into account how speakers achieved their prior accuracy. In Study 1, when faced with two accurate informants, 4- and 5-year-olds (but not 3-year-olds) were more likely to seek novel information from an informant who had previously given the answers unaided than from an informant who had always relied on help from a third party. Similarly, in Study 2, 4-year-olds were more likely to trust the testimony of an unaided informant over the testimony provided by an assisted informant. Our results indicate that when children reach around 4 years of age, their selective trust extends beyond simple generalizations based on informants' past accuracy to a more sophisticated selectivity that distinguishes between truly knowledgeable informants and merely accurate informants who may not be reliable in the long term.", "title": "" }, { "docid": "b9d8ea80169ac5a5c48fd631c9d5625a", "text": "Deep convolutional networks have achieved great success for image recognition. However, for action recognition in videos, their advantage over traditional methods is not so evident. 
We present a general and flexible video-level framework for learning action models in videos. This method, called temporal segment network (TSN), aims to model long-range temporal structures with a new segment-based sampling and aggregation module. This unique design enables our TSN to efficiently learn action models by using the whole action videos. The learned models could be easily adapted for action recognition in both trimmed and untrimmed videos with simple average pooling and multi-scale temporal window integration, respectively. We also study a series of good practices for the instantiation of the TSN framework given limited training samples. Our approach obtains the state-of-the-art performance on four challenging action recognition benchmarks: HMDB51 (71.0%), UCF101 (94.9%), THUMOS14 (80.1%), and ActivityNet v1.2 (89.6%). Using the proposed RGB difference for motion models, our method can still achieve competitive accuracy on UCF101 (91.0%) while running at 340 FPS. Furthermore, based on the temporal segment networks, we won the video classification track at the ActivityNet challenge 2016 among 24 teams, which demonstrates the effectiveness of TSN and the proposed good practices.", "title": "" }, { "docid": "58b121012d9772285af95520fab7eaa0", "text": "We argue for network slicing as an efficient solution that addresses the diverse requirements of 5G mobile networks, thus providing the necessary flexibility and scalability associated with future network implementations. We elaborate on the challenges that emerge when designing 5G networks based on network slicing. We focus on the architectural aspects associated with the coexistence of dedicated as well as shared slices in the network. In particular, we analyze the realization options of a flexible radio access network with focus on network slicing and their impact on the design of 5G mobile networks. In addition to the technical study, this article provides an investigation of the revenue potential of network slicing, where the applications that originate from this concept and the profit capabilities from the network operator's perspective are put forward.", "title": "" }, { "docid": "fe0f26af4a4c3a67b640e8f700b75fba", "text": "Taxi ridesharing can be of significant social and environmental benefit, e.g. by saving energy consumption and satisfying people's commute needs. Despite the great potential, taxi ridesharing, especially with dynamic queries, is not well studied. In this paper, we formally define the dynamic ridesharing problem and propose a large-scale taxi ridesharing service. It efficiently serves real-time requests sent by taxi users and generates ridesharing schedules that reduce the total travel distance significantly. In our method, we first propose a taxi searching algorithm using a spatio-temporal index to quickly retrieve candidate taxis that are likely to satisfy a user query. A scheduling algorithm is then proposed. It checks each candidate taxi and inserts the query's trip into the schedule of the taxi which satisfies the query with minimum additional incurred travel distance. To tackle the heavy computational load, a lazy shortest path calculation strategy is devised to speed up the scheduling algorithm. We evaluated our service using a GPS trajectory dataset generated by over 33,000 taxis during a period of 3 months. By learning the spatio-temporal distributions of real user queries from this dataset, we built an experimental platform that simulates users' real behaviours in taking a taxi.
Tested on this platform with extensive experiments, our approach demonstrated its efficiency, effectiveness, and scalability. For example, our proposed service serves 25% additional taxi users while saving 13% travel distance compared with no-ridesharing (when the ratio of the number of queries to that of taxis is 6).", "title": "" }, { "docid": "dd9d4b47ea4a43a5c228f7b0abb0ddd1", "text": "Due to the growing popularity of Description Logics-based knowledge representation systems, predominantly in the context of Semantic Web applications, there is a rising demand for tools offering non-standard reasoning services. One particularly interesting form of reasoning, both from the user as well as the ontology engineering perspective, is abduction. In this paper we introduce two novel reasoning calculi for solving ABox abduction problems in the Description Logic ALC, i.e. problems of finding minimal sets of ABox axioms, which when added to the knowledge base enforce entailment of a requested set of assertions. The algorithms are based on regular connection tableaux and resolution with set-of-support and are proven to be sound and complete. We elaborate on a number of technical issues involved and discuss some practical aspects of reasoning with the methods.", "title": "" }, { "docid": "355591ece281540fb696c1eff3df5698", "text": "Online health communities are a valuable source of information for patients and physicians. However, such user-generated resources are often plagued by inaccuracies and misinformation. In this work we propose a method for automatically establishing the credibility of user-generated medical statements and the trustworthiness of their authors by exploiting linguistic cues and distant supervision from expert sources. To this end we introduce a probabilistic graphical model that jointly learns user trustworthiness, statement credibility, and language objectivity.\n We apply this methodology to the task of extracting rare or unknown side-effects of medical drugs --- this being one of the problems where large scale non-expert data has the potential to complement expert medical knowledge. We show that our method can reliably extract side-effects and filter out false statements, while identifying trustworthy users that are likely to contribute valuable medical information.", "title": "" }, { "docid": "f6a24aa476ec27b86e549af6d30f22b6", "text": "Designing autonomous robotic systems able to manipulate deformable objects without human intervention constitutes a challenging area of research. The complexity of interactions between a robot manipulator and a deformable object originates from a wide range of deformation characteristics that have an impact on varying degrees of freedom. Such sophisticated interaction can only take place with the assistance of intelligent multisensory systems that combine vision data with force and tactile measurements. Hence, several issues must be considered at the robotic and sensory levels to develop genuine dexterous robotic manipulators for deformable objects. This chapter presents a thorough examination of the modern concepts developed by the robotic community related to deformable objects grasping and manipulation. Since the convention widely adopted in the literature is often to extend algorithms originally proposed for rigid objects, a comprehensive coverage on the new trends on rigid objects manipulation is initially proposed. State-of-the-art techniques on robotic interaction with deformable objects are then examined and discussed. 
The chapter proposes a critical evaluation of the manipulation algorithms, the instrumentation systems adopted and the examination of end-effector technologies, including dexterous robotic hands. The motivation for this review is to provide an extensive appreciation of state-of-the-art solutions to help researchers and developers determine the best possible options when designing autonomous robotic systems to interact with deformable objects. Typically in a robotic setup, when robot manipulators are programmed to perform their tasks, they must have a complete knowledge about the exact structure of the manipulated object (shape, surface texture, rigidity) and about its location in the environment (pose). For some of these tasks, the manipulator becomes in contact with the object. Hence, interaction forces and moments are developed and consequently these interaction forces and moments, as well as the position of the end-effector, must be controlled, which leads to the concept of “force controlled manipulation” (Natale, 2003). There are different control strategies used in 28", "title": "" }, { "docid": "494a0d57cb905f75428022ba030c225c", "text": "Recent studies have demonstrated a relationship between fructose consumption and risk of developing metabolic syndrome. Mechanisms by which dietary fructose mediates metabolic changes are poorly understood. This study compared the effects of fructose, glucose and sucrose consumption on post-postprandial lipemia and low grade inflammation measured as hs-CRP. This was a randomized, single blinded, cross-over trial involving healthy subjects (n = 14). After an overnight fast, participants were given one of 3 different isocaloric drinks, containing 50 g of either fructose or glucose or sucrose dissolved in water. Blood samples were collected at baseline, 30, 60 and 120 minutes post intervention for the analysis of blood lipids, glucose, insulin and high sensitivity C-reactive protein (hs-CRP). Glucose and sucrose supplementation initially resulted in a significant increase in glucose and insulin levels compared to fructose supplementation and returned to near baseline values within 2 hours. Change in plasma cholesterol, LDL and HDL-cholesterol (measured as area under curve, AUC) was significantly higher when participants consumed fructose compared with glucose or sucrose (P < 0.05). AUC for plasma triglyceride levels however remained unchanged regardless of the dietary intervention. Change in AUC for hs-CRP was also significantly higher in subjects consuming fructose compared with those consuming glucose (P < 0.05), but not sucrose (P = 0.07). This study demonstrates that fructose as a sole source of energy modulates plasma lipids and hsCRP levels in healthy individuals. The significance of increase in HDL-cholesterol with a concurrent increase in LDL-cholesterol and elevated hs-CRP levels remains to be delineated when considering health effects of feeding fructose-rich diets. ACTRN 12614000431628", "title": "" }, { "docid": "11229bf95164064f954c25681c684a16", "text": "This article proposes integrating the insights generated by framing, priming, and agenda-setting research through a systematic effort to conceptualize and understand their larger implications for political power and democracy. The organizing concept is bias, that curiously undertheorized staple of public discourse about the media. 
After showing how agenda setting, framing and priming fit together as tools of power, the article connects them to explicit definitions of news slant and the related but distinct phenomenon of bias. The article suggests improved measures of slant and bias. Properly defined and measured, slant and bias provide insight into how the media influence the distribution of power: who gets what, when, and how. Content analysis should be informed by explicit theory linking patterns of framing in the media text to predictable priming and agenda-setting effects on audiences. When unmoored from such underlying theory, measures and conclusions of media bias are suspect.", "title": "" }, { "docid": "8920d6f0faa1f46ca97306f4d59897d9", "text": "Tactile augmentation is a simple, safe, inexpensive interaction technique for adding physical texture and force feedback cues to virtual objects. This study explored whether virtual reality (VR) exposure therapy reduces fear of spiders and whether giving patients the illusion of physically touching the virtual spider increases treatment effectiveness. Eight clinically phobic students were randomly assigned to one of 3 groups—(a) no treatment, (b) VR with no tactile cues, or (c) VR with a physically “touchable” virtual spider—as were 28 nonclinically phobic students. Participants in the 2 VR treatment groups received three 1-hr exposure therapy sessions, resulting in clinically significant drops in behavioral avoidance and subjective fear ratings. The tactile augmentation group showed the greatest progress on behavioral measures. On average, participants in this group, who only approached to within 5.5 ft of a live spider on the pretreatment Behavioral Avoidance Test (Garcia-Palacios, 2002), were able to approach to within 6 in. of the spider after VR exposure treatment and did so with much less anxiety (see www.vrpain.com for details). Practical implications are discussed.", "title": "" }, { "docid": "2d7963a209ec1c7f38c206a0945a1a7e", "text": "We present a system which enables a user to remove a file from both the file system and all the backup tapes on which the file is stored. The ability to remove files from all backup tapes is desirable in many cases. Our system erases information from the backup tape without actually writing on the tape. This is achieved by applying cryptography in a new way: a block cipher is used to enable the system to “forget” information rather than protect it. Our system is easy to install and is transparent to the end user. Further, it introduces no slowdown in system performance and little slowdown in the backup procedure.", "title": "" }, { "docid": "147b207125fcda1dece25a6c5cd17318", "text": "In this paper we present a neural network based system for automated e-mail filing into folders and anti-spam filtering. The experiments show that it is more accurate than several other techniques. We also investigate the effects of various feature selection, weighting and normalization methods, and also the portability of the anti-spam filter across different users.", "title": "" }, { "docid": "2cd6601a4cc0f1e50452869fb111e077", "text": "The P300-based Guilty Knowledge Test (GKT) has been suggested as an alternative approach to conventional polygraphy. The purpose of this study was to extend a previously introduced pattern recognition method for ERP assessment in this application.
This extension was achieved by further extending the feature set and also by employing a method for the selection of optimal features. For the evaluation of the method, several subjects went through the designed GKT paradigm and their respective brain signals were recorded. Next, a P300 detection approach based on several features and a statistical classifier was implemented. The optimal feature set was selected using a genetic algorithm from a primary feature set including morphological, frequency and wavelet features and was used for the classification of the data. The rate of correct detection in guilty and innocent subjects was 86%, which was better than other previously used methods.", "title": "" }, { "docid": "93e2f88d13fc69fc11cd70fbe9685c2f", "text": "A method for kinematics modeling of a six-wheel Rocker-Bogie mobile robot is described in detail. The forward kinematics is derived by using wheel Jacobian matrices in conjunction with wheel-ground contact angle estimation. The inverse kinematics is used to obtain the wheel velocities and steering angles from the desired forward velocity and turning rate of the robot. Traction control is also developed to improve traction by comparing information from onboard sensors and wheel velocities to minimize wheel slip. Finally, a simulation of a small robot using rocker-bogie suspension has been performed under two surface conditions, including climbing a slope and travelling over a ditch.", "title": "" }, { "docid": "d65aa05f6eb97907fe436ff50628a916", "text": "The process of stool transfer from healthy donors to the sick, known as faecal microbiota transplantation (FMT), has an ancient history. However, only recently have researchers started investigating its applications in an evidence-based manner. Current knowledge of the microbiome, the concept of dysbiosis and results of preliminary research suggest that there is an association between gastrointestinal bacterial disruption and certain disorders. Researchers have studied the effects of FMT on various gastrointestinal and non-gastrointestinal diseases, but have been unable to precisely pinpoint specific bacterial strains responsible for the observed clinical improvement or futility of the process. The strongest available data support the efficacy of FMT in the treatment of recurrent Clostridium difficile infection, with cure rates reported as high as 90% in clinical trials. The use of FMT in other conditions, including inflammatory bowel disease, functional gastrointestinal disorders, obesity and metabolic syndrome, is still controversial. Results from clinical studies are conflicting, which reflects the gap in our knowledge of the microbiome composition and function, and highlights the need for a more defined and personalised microbial isolation and transfer.", "title": "" }, { "docid": "4b8cd508689eb4cfe4423bf1b30bce3e", "text": "A two-dimensional (2D) periodic leaky-wave antenna consisting of a periodic distribution of rectangular patches on a grounded dielectric substrate, excited by a narrow slot in the ground plane, is studied here. The TM0 surface wave that is normally supported by a grounded dielectric substrate is perturbed by the presence of the periodic patches to produce radially-propagating leaky waves.
In addition to making a novel microwave antenna structure, this design is motivated by the phenomena of directive beaming and enhanced transmission observed in plasmonic structures in the optical regime.", "title": "" }, { "docid": "5d4797cffc06cbde079bf4019dc196db", "text": "Automatically generating natural language descriptions of videos poses a fundamental challenge for the computer vision community. Most recent progress on this problem has been achieved by employing 2-D and/or 3-D Convolutional Neural Networks (CNNs) to encode video content and Recurrent Neural Networks (RNNs) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA), a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes make a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. Our proposed LSTM-TSA achieves the best published performance to date in sentence generation on MSVD: 52.8% and 74.0% in terms of BLEU@4 and CIDEr-D. Superior results are also reported on M-VAD and MPII-MD when compared to state-of-the-art methods.", "title": "" }, { "docid": "e8010fdc14ace06ffad91561694dd310", "text": "This paper describes the performance comparison of wind power systems based on two different induction generators, as well as the experimental demonstration of a wind turbine simulator for maximum power extraction. The two induction machines studied for the comparison are the squirrel-cage induction generator (SCIG) and the doubly fed induction generator (DFIG). The techniques of direct grid integration, independent power control, and the droop phenomenon of the distribution line are studied and compared between the SCIG and DFIG systems. Both systems are modeled in the Matlab/Simulink environment, and their operation is tested with the wind turbine maximum power extraction algorithm. Based on the simulated wind turbine parameters, a commercial induction motor drive was programmed to emulate the wind turbine and was coupled to the experimental generator systems. The turbine experimental results matched well with the theoretical turbine operation.", "title": "" } ]
scidocsrr
b0d14b3af57f99b48533a6da0954d854
Systematic Planning for ICT Integration in Topic Learning
[ { "docid": "89c7518d9e0bd7eac7d4a0e1983fe0fc", "text": "Technology such as Information and Communication Technology (ICT) is a potent force in driving economic, social, political and educational reforms. Countries, particularly developing ones, cannot afford to stay passive to ICT if they are to compete and strive in the global economy. The health of the economy of any country, poor or rich, developed or developing, depends substantially on the level and quality of the education it provides to its workforce. Education reform is occurring throughout the world and one of the tenets of the reform is the introduction and integration of ICT in the education system. The successful integration of any technology, thus ICT, into the classroom warrants careful planning and depends largely on how well policy makers understand and appreciate the dynamics of such integration. This paper offers a set of guidelines to policy makers for the successful integration of ICT into the classroom.", "title": "" }, { "docid": "d994b23ea551f23215232c0771e7d6b3", "text": "It is said that there’s nothing so practical as good theory. It may also be said that there’s nothing so theoretically interesting as good practice1. This is particularly true of efforts to relate constructivism as a theory of learning to the practice of instruction. Our goal in this paper is to provide a clear link between the theoretical principles of constructivism, the practice of instructional design, and the practice of teaching. We will begin with a basic characterization of constructivism identifying what we believe to be the central principles in learning and understanding. We will then identify and elaborate on eight instructional principles for the design of a constructivist learning environment. Finally, we will examine what we consider to be one of the best exemplars of a constructivist learning environment -Problem Based Learning as described by Barrows (1985, 1986, 1992).", "title": "" } ]
[ { "docid": "7765be2199056aed0cb463d215363f83", "text": "This paper describes a machine learning approach for extracting automatically the tongue contour in ultrasound images. This method is developed in the context of visual articulatory biofeedback for speech therapy. The goal is to provide a speaker with an intuitive visualization of his/her tongue movement, in real-time, and with minimum human intervention. Contrary to most widely used techniques based on active contours, the proposed method aims at exploiting the information of all image pixels to infer the tongue contour. For that purpose, a compact representation of each image is extracted using a PCA-based decomposition technique (named EigenTongue). Artificial neural networks are then used to convert the extracted visual features into control parameters of a PCA-based tongue contour model. The proposed method is evaluated on 9 speakers, using data recorded with the ultrasound probe hold manually (as in the targeted application). Speaker-dependent experiments demonstrated the effectiveness of the proposed method (with an average error of ~1.3 mm when training from 80 manually annotated images), even when the tongue contour is poorly imaged. The performance was significantly lower in speaker-independent experiments (i.e. when estimating contours on an unknown speaker), likely due to anatomical differences across speakers.", "title": "" }, { "docid": "7a5626554753733d8f20a15fe161beca", "text": "Worldwide, lung cancer is the most common cause of major cancer incidence and mortality in men, whereas in women it is the third most common cause of cancer incidence and the second most common cause of cancer mortality. In 2010 the American Cancer Society estimated that lung cancer would account for more than 222,520 new cases in the United States during 2010 and 157,300 cancer deaths. Although lung cancer incidence in the United States began to decline in men in the early 1980s, it seems to have plateaued in women. Lung cancer can be diagnosed pathologically either by a histologic or cytologic approach. The new International Association for the Study of Lung Cancer (IASLC)/American Thoracic Society (ATS)/ European Respiratory Society (ERS) Lung Adenocarcinoma Classification has made major changes in how lung adenocarcinoma is diagnosed. It will significantly alter the structure of the previous 2004 World Health Organization (WHO) classification of lung tumors (Box 1). Not only does it address classification in resectionspecimens (seeBox1), but it also makes recommendations applicable to small biopsies and cytology specimens, for diagnostic termsandcriteria forothermajor histologic subtypes inaddition to adenocarcinoma (Table1). The4major histologic types of lung cancer are squamous cell carcinoma, adenocarcinoma, small cell carcinoma, and large cell carcinoma. These major types can be subclassified into more specific subtypes such as lepidic predominant subtype of adenocarcinoma or the basaloid variant of large cell carcinoma.More detailed reviews of the pathology, cytology, and molecular biology of lung cancer can be found elsewhere.", "title": "" }, { "docid": "8bbbaab2cf7825ca98937de14908e655", "text": "Software Reliability Model is categorized into two, one is static model and the other one is dynamic model. Dynamic models observe the temporary behavior of debugging process during testing phase. In Static Models, modeling and analysis of program logic is done on the same code. 
A model that describes error detection in software reliability is called a Software Reliability Growth Model. This paper reviews various existing software reliability models and their failure intensity functions and mean value functions. On the basis of this review, a model is proposed for software reliability with a different mean value function and failure intensity function.", "title": "" }, { "docid": "6f650989dff7b4aaa76f051985c185bf", "text": "Since Sharir and Pnueli, algorithms for context-sensitivity have been defined in terms of ‘valid’ paths in an interprocedural flow graph. The definition of valid paths requires atomic call and ret statements, and encapsulated procedures. Thus, the resulting algorithms are not directly applicable when behavior similar to call and ret instructions may be realized using non-atomic statements, or when procedures do not have rigid boundaries, such as with programs in low level languages like assembly or RTL. We present a framework for context-sensitive analysis that requires neither atomic call and ret instructions, nor encapsulated procedures. The framework presented decouples the transfer of control semantics and the context manipulation semantics of statements. A new definition of context-sensitivity, called stack contexts, is developed. A stack context, which is defined using trace semantics, is more general than Sharir and Pnueli’s interprocedural path based calling-context. An abstract interpretation based framework is developed to reason about stack-contexts and to derive analogues of calling-context based algorithms using stack-context. The framework presented is suitable for deriving algorithms for analyzing binary programs, such as malware, that employ obfuscations with the deliberate intent of defeating automated analysis. The framework is used to create a context-sensitive version of Venable et al.’s algorithm for detecting obfuscated calls in x86 binaries. Experimental results from comparing context-insensitive, Sharir and Pnueli’s calling-context-sensitive, and stack-context-sensitive versions of the algorithm are presented.", "title": "" }, { "docid": "2a43e164e536600ee6ceaf6a9c1af1be", "text": "Unsupervised paraphrase acquisition has been an active research field in recent years, but its effective coverage and performance have rarely been evaluated. We propose a generic paraphrase-based approach for Relation Extraction (RE), aiming at a dual goal: obtaining an applicative evaluation scheme for paraphrase acquisition and obtaining a generic and largely unsupervised configuration for RE. We analyze the potential of our approach and evaluate an implemented prototype of it using an RE dataset. Our findings reveal a high potential for unsupervised paraphrase acquisition. We also identify the need for novel robust models for matching paraphrases in texts, which should address syntactic complexity and variability.", "title": "" }, { "docid": "a55b44543510713a7fdc4f7cb8c123b2", "text": "The mechanisms that allow cancer cells to adapt to the typical tumor microenvironment of low oxygen and glucose and high lactate are not well understood. GPR81 is a lactate receptor recently identified in adipose and muscle cells that has not been investigated in cancer. In the current study, we examined GPR81 expression and function in cancer cells. We found that GPR81 was present in colon, breast, lung, hepatocellular, salivary gland, cervical, and pancreatic carcinoma cell lines.
Examination of tumors resected from patients with pancreatic cancer indicated that 94% (148 of 158) expressed high levels of GPR81. Functionally, we observed that the reduction of GPR81 levels using shRNA-mediated silencing had little effect on pancreatic cancer cells cultured in high glucose, but led to the rapid death of cancer cells cultured in conditions of low glucose supplemented with lactate. We also observed that lactate addition to culture media induced the expression of genes involved in lactate metabolism, including monocarboxylase transporters in control, but not in GPR81-silenced cells. In vivo, GPR81 expression levels correlated with the rate of pancreatic cancer tumor growth and metastasis. Cells in which GPR81 was silenced showed a dramatic decrease in growth and metastasis. Implantation of cancer cells in vivo was also observed to lead to greatly elevated levels of GPR81. These data support that GPR81 is important for cancer cell regulation of lactate transport mechanisms. Furthermore, lactate transport is important for the survival of cancer cells in the tumor microenvironment. Cancer Res; 74(18); 5301-10. ©2014 AACR.", "title": "" }, { "docid": "63914ebf92c3c4d84df96f9b965bea5b", "text": "In this paper we study different types of Recurrent Neural Networks (RNN) for sequence labeling tasks. We propose two new variants of RNNs integrating improvements for sequence labeling, and we compare them to the more traditional Elman and Jordan RNNs. We compare all models, either traditional or new, on four distinct tasks of sequence labeling: two on Spoken Language Understanding (ATIS and MEDIA); and two of POS tagging for the French Treebank (FTB) and the Penn Treebank (PTB) corpora. The results show that our new variants of RNNs are always more effective than the others.", "title": "" }, { "docid": "76ebe7821ae75b50116d6ac3f156e571", "text": "Since the financial crisis in 2008 organisations have been forced to rethink their risk management. Therefore entities have changed from silo-based Traditional Risk Management to the overarching framework Enterprise Risk Management. Yet Enterprise Risk Management is a young model and it has to contend with various challenges. At the moment there are just a few research papers but they claim that this approach is reasonable. The two frameworks COSO and GRC try to support Enterprise Risk Management. Research does not provide studies about their efficiency. The challenges of Enterprise Risk Management are the composition of the system, suitable metrics, the human factor and the complex environment.", "title": "" }, { "docid": "629b63889e43ee1fce3c6c850342428e", "text": "Purpose – This paper aims to survey the web sites of the academic libraries of the Association of Research Libraries (USA) regarding the adoption of Web 2.0 technologies. Design/methodology/approach – The websites of 100 member academic libraries of the Association of Research Libraries (USA) were surveyed. Findings – All libraries were found to be using various tools of Web 2.0. Blogs, microblogs, RSS, instant messaging, social networking sites, mashups, podcasts, and vodcasts were widely adopted, while wikis, photo sharing, presentation sharing, virtual worlds, customized webpage and vertical search engines were used less. Libraries were using these tools for sharing news, marketing their services, providing information literacy instruction, providing information about print and digital resources, and soliciting feedback of users. 
Originality/value – The paper is useful for future planning of Web 2.0 use in academic libraries.", "title": "" }, { "docid": "78e1e3c496986a669a5a118f095424f5", "text": "A series of increasingly accurate algorithms to obtain approximate solutions to the 0/1 one-dimensional knapsack problem is presented. Each algorithm guarantees a certain minimal closeness to the optimal solution value. The approximate algorithms are of polynomial time complexity and require only linear storage. Computational experience with these algorithms is also presented.", "title": "" }, { "docid": "d62bded822aff38333a212ed1853b53c", "text": "The design of an activity recognition and monitoring system based on the eWatch, a multi-sensor platform worn on different body positions, is presented in this paper. The system identifies the user's activity in real time using multiple sensors and records the classification results during a day. We compare multiple time domain feature sets and sampling rates, and analyze the tradeoff between recognition accuracy and computational complexity. The classification accuracy on different body positions used for wearing electronic devices was also evaluated.", "title": "" }, { "docid": "7f46fd1f61c3e0d158af401ee88a2586", "text": "Sentiment analysis has become a very active research area in the text mining field. It aims to extract people's opinions, sentiments, and subjectivity from texts. Sentiment analysis can be performed at three levels: the document level, the sentence level and the aspect level. An important part of the research effort focuses on document-level sentiment classification, including work on opinion classification of reviews. This survey paper provides a comprehensive and up-to-date overview of sentiment analysis at the document level. The main aim of this survey is to give a nearly complete picture of sentiment analysis applications, challenges and techniques at this level. In addition, some future research issues are also presented.", "title": "" }, { "docid": "c283e7b1133fe0898e5d953c751d6d85", "text": "Fasting has been practiced for millennia, but, only recently, studies have shed light on its role in adaptive cellular responses that reduce oxidative damage and inflammation, optimize energy metabolism, and bolster cellular protection. In lower eukaryotes, chronic fasting extends longevity, in part, by reprogramming metabolic and stress resistance pathways. In rodents intermittent or periodic fasting protects against diabetes, cancers, heart disease, and neurodegeneration, while in humans it helps reduce obesity, hypertension, asthma, and rheumatoid arthritis. Thus, fasting has the potential to delay aging and help prevent and treat diseases while minimizing the side effects caused by chronic dietary interventions.", "title": "" }, { "docid": "3306636800566050599f051b0939b755", "text": "We tackle the image question answering (ImageQA) problem by learning a convolutional neural network (CNN) with a dynamic parameter layer whose weights are determined adaptively based on questions. For the adaptive parameter prediction, we employ a separate parameter prediction network, which consists of a gated recurrent unit (GRU) taking a question as its input and a fully-connected layer generating a set of candidate weights as its output. However, it is challenging to construct a parameter prediction network for a large number of parameters in the fully-connected dynamic parameter layer of the CNN.
We reduce the complexity of this problem by incorporating a hashing technique, where the candidate weights given by the parameter prediction network are selected using a predefined hash function to determine individual weights in the dynamic parameter layer. The proposed network, the joint network of the CNN for ImageQA and the parameter prediction network, is trained end-to-end through back-propagation, and its weights are initialized using a pre-trained CNN and GRU. The proposed algorithm demonstrates state-of-the-art performance on all available public ImageQA benchmarks.", "title": "" }, { "docid": "28d573b9b32a8f95618a01f1e5e43a01", "text": "When trying to satisfy an information need, smartphone users frequently transition from mobile search engines to mobile apps and vice versa. However, little is known about the nature of these transitions or how mobile search and mobile apps interact. We report on a 2-week, mixed-method study involving 18 Android users, where we collected real-world mobile search and mobile app usage data alongside subjective insights on why certain interactions between apps and mobile search occur. Our results show that when people engage with mobile search they tend to interact with more mobile apps and for longer durations. We found that certain categories of apps are used more intensely alongside mobile search. Furthermore, we found differences in app usage before and after mobile search and show how mobile app interactions can both prompt mobile search and enable users to take action. We conclude with a discussion on what these patterns mean for mobile search and how we might design mobile search experiences that take these app interactions into account.", "title": "" }, { "docid": "ac3e3f0059d16b4cee7d876ec0608dae", "text": "We present the Interaction Technique Markup Language (InTml), a profile on top of the core X3D that describes 3D interaction techniques (InTs) and hardware platforms. InTml makes 3D InTs easier to understand, compare, and integrate in complete virtual reality (VR) applications. InTml can be used as a front end for any VR toolkit, so InTml documents that plug together 3D InTs, VR objects, and devices can be fully described and executed.", "title": "" }, { "docid": "061c67c967818b1a0ad8da55345c6dcf", "text": "The paper aims at revealing the essence and connotation of Computational Thinking. It analyzes some of the international academic research results on Computational Thinking. The author argues that Computational Thinking is disciplinary thinking, or a computing philosophy, and that to understand Computational Thinking it is critical to grasp the thinking’s computational features and the computing’s thinking attributes. He presents basic rules for screening the representative terms of Computational Thinking and lists some representative terms based on those rules. He argues that Computational Thinking is contained in the commonalities of those terms. The typical thoughts of Computational Thinking are structuralization, formalization, association-and-interaction, optimization and reuse-and-sharing. Training in Computational Thinking must be based on the representative terms and the typical thoughts.
There are three innovations in the paper: the five rules for screening the representative terms, the five typical thoughts, and the formalized description of Computational Thinking.", "title": "" }, { "docid": "9a8918cb818a12c3da2e5d210d8b9d43", "text": "In the process of information system optimization and upgrading in the era of cloud computing, the number and variety of business requirements keep growing and becoming more complex, the process of continuous integration and delivery of information systems becomes increasingly complex, and the amount of repetitive work keeps growing. This paper focuses on the continuous integration of specific information systems, and a collaborative work scheme for continuous integrated delivery based on Jenkins and Ansible is proposed. Both theory and practice show that such a collaborative continuous integrated delivery system can effectively improve the efficiency and quality of continuous integrated delivery of information systems, and the effect on the optimization and upgrading of the information system is obvious.", "title": "" }, { "docid": "18d4a0b3b6eceb110b6eb13fde6981c7", "text": "We simulate the growth of a benign avascular tumour embedded in normal tissue, including cell sorting that occurs between tumour and normal cells, due to the variation of adhesion between different cell types. The simulation uses the Potts Model, an energy minimisation method. Trial random movements of cell walls are checked to see if they reduce the adhesion energy of the tissue. These trials are then accepted with Boltzmann weighted probability. The simulated tumour initially grows exponentially, then forms three concentric shells as the nutrient level supplied to the core by diffusion decreases: the outer shell consists of live proliferating cells, the middle of quiescent cells and the centre is a necrotic core, where the nutrient concentration is below the critical level that sustains life. The growth rate of the tumour decreases at the onset of shell formation in agreement with experimental observation. The tumour eventually approaches a steady state, where the increase in volume due to the growth of the proliferating cells equals the loss of volume due to the disintegration of cells in the necrotic core. The final thickness of the shells also agrees with experiment.", "title": "" }, { "docid": "d9a2535603192d798d3daf45591cbe58", "text": "This paper sets China’s education of English majors within the changing global and national context. It examines the impact of accelerating globalisation and the rise of global English, the adjustment of China’s English language policy, the growth of the education of English majors and the challenges faced by this sector of education. To adapt to the changes, efforts have been made to change the training models, revise the national curriculum and update textbooks. The introduction of six new training models is significant: “English major plus courses in other specialisms”, “English major plus an orientation towards other disciplines”, “English major plus a minor”, “A major plus English language”, “English language plus another foreign language”, and “Dual degree: BA degree of English language and literature plus another BA degree”. Turning out ‘composite-type’ graduates has become a training objective of the curriculum for English majors, with consequent implications for the future development of this sector of education in China.", "title": "" } ]
scidocsrr
8e01e82f5affbb6f12a7122d68f89bd7
From high heels to weed attics: a syntactic investigation of chick lit and literature
[ { "docid": "ab677299ffa1e6ae0f65daf5de75d66c", "text": "This paper proposes a new theory of the relationship between the sentence processing mechanism and the available computational resources. This theory--the Syntactic Prediction Locality Theory (SPLT)--has two components: an integration cost component and a component for the memory cost associated with keeping track of obligatory syntactic requirements. Memory cost is hypothesized to be quantified in terms of the number of syntactic categories that are necessary to complete the current input string as a grammatical sentence. Furthermore, in accordance with results from the working memory literature both memory cost and integration cost are hypothesized to be heavily influenced by locality (1) the longer a predicted category must be kept in memory before the prediction is satisfied, the greater is the cost for maintaining that prediction; and (2) the greater the distance between an incoming word and the most local head or dependent to which it attaches, the greater the integration cost. The SPLT is shown to explain a wide range of processing complexity phenomena not previously accounted for under a single theory, including (1) the lower complexity of subject-extracted relative clauses compared to object-extracted relative clauses, (2) numerous processing overload effects across languages, including the unacceptability of multiply center-embedded structures, (3) the lower complexity of cross-serial dependencies relative to center-embedded dependencies, (4) heaviness effects, such that sentences are easier to understand when larger phrases are placed later and (5) numerous ambiguity effects, such as those which have been argued to be evidence for the Active Filler Hypothesis.", "title": "" } ]
[ { "docid": "b1ef897890df4c719d85dd339f8dee70", "text": "Repositories of health records are collections of events with varying number and sparsity of occurrences within and among patients. Although a large number of predictive models have been proposed in the last decade, they are not yet able to simultaneously capture cross-attribute and temporal dependencies associated with these repositories. Two major streams of predictive models can be found. On one hand, deterministic models rely on compact subsets of discriminative events to anticipate medical conditions. On the other hand, generative models offer a more complete and noise-tolerant view based on the likelihood of the testing arrangements of events to discriminate a particular outcome. However, despite the relevance of generative predictive models, they are not easily extensible to deal with complex grids of events. In this work, we rely on the Markov assumption to propose new predictive models able to deal with cross-attribute and temporal dependencies. Experimental results hold evidence for the utility and superior accuracy of generative models to anticipate health conditions, such as the need for surgeries. Additionally, we show that the proposed generative models are able to decode temporal patterns of interest (from the learned lattices) with acceptable completeness and precision levels, and with superior efficiency for voluminous repositories.", "title": "" }, { "docid": "f1e36a749d456326faeda90bc744b70d", "text": "In this paper, we propose epitomic variational autoencoder (eVAE), a probabilistic generative model of high dimensional data. eVAE is composed of a number of sparse variational autoencoders called ‘epitome’ such that each epitome partially shares its encoder-decoder architecture with other epitomes in the composition. We show that the proposed model greatly overcomes the common problem in variational autoencoders (VAE) of model over-pruning. We substantiate that eVAE is efficient in using its model capacity and generalizes better than VAE, by presenting qualitative and quantitative results on MNIST and TFD datasets.", "title": "" }, { "docid": "8230ddd7174a2562c0fe0f83b1bf7cf7", "text": "Metaphors are fundamental to creative thought and expression. Newly coined metaphors regularly infiltrate our collective vocabulary and gradually become familiar, but it is unclear how this shift from novel to conventionalized meaning happens in the brain. We investigated the neural career of metaphors in a functional magnetic resonance imaging study using extensively normed new metaphors and simulated the ordinary, gradual experience of metaphor conventionalization by manipulating participants' exposure to these metaphors. Results showed that the conventionalization of novel metaphors specifically tunes activity within bilateral inferior prefrontal cortex, left posterior middle temporal gyrus, and right postero-lateral occipital cortex. These results support theoretical accounts attributing a role for the right hemisphere in processing novel, low salience figurative meanings, but also show that conventionalization of metaphoric meaning is a bilaterally-mediated process. Metaphor conventionalization entails a decreased neural load within semantic networks rather than a hemispheric or regional shift across brain areas.", "title": "" }, { "docid": "e276068ede51c081c71a483b260e546c", "text": "The selection of hyper-parameters plays an important role to the performance of least-squares support vector machines (LS-SVMs). 
In this paper, a novel hyper-parameter selection method for LS-SVMs is presented based on particle swarm optimization (PSO). The proposed method does not need any a priori knowledge of the analytic properties of the generalization performance measure and can be used to determine multiple hyper-parameters at the same time. The feasibility of this method is examined on benchmark data sets. Different kinds of kernel families are investigated by using the proposed method. Experimental results show that the best or quasi-best test performance could be obtained by using the scaling radial basis kernel function (SRBF) and RBF kernel functions, respectively.", "title": "" }, { "docid": "910a416dc736ec3566583c57123ac87c", "text": "The Internet of Things (IoT) is one of the greatest technology revolutions in history. Due to the potential of IoT, everyday objects will work together in harmony with optimized performance. However, today, technology is not ready to fully bring its power to our daily life because of the huge data analysis requirements that must be met in real time. On the other hand, the powerful data management of cloud computing gives IoT an opportunity to make this revolution in our lives. However, traditional cloud computing server schedulers are not ready to provide services to IoT because IoT consists of a number of heterogeneous devices and applications which are far from standardization. Therefore, to meet the expectations of users, traditional cloud computing server schedulers should be improved to efficiently schedule and allocate IoT requests. There are several proposed scheduling algorithms for cloud computing in the literature. However, these scheduling algorithms are limited because they consider neither heterogeneous servers nor a dynamic scheduling approach for different priority requests. Our objective is to propose dynamic dedicated server scheduling for heterogeneous and homogeneous systems to efficiently provide desired services by considering the priorities of requests. Results show that the proposed scheduling algorithm improves throughput by up to 40% in heterogeneous and homogeneous cloud computing systems for IoT requests. Our proposed scheduling algorithm and related analysis will help cloud service providers build efficient server schedulers which are adaptable to homogeneous and heterogeneous environments by considering system performance metrics, such as drop rate, throughput, and utilization in IoT.", "title": "" }, { "docid": "5a1df710132da15c611c91a0550b1dbb", "text": "This chapter is concerned with sound and complete algorithms for testing satisfiability, i.e., algorithms that are guaranteed to terminate with a correct decision on the satisfiability/unsatisfiability of the given CNF. One can distinguish between a few approaches on which complete satisfiability algorithms have been based. The first approach is based on existential quantification, where one successively eliminates variables from the CNF without changing the status of its satisfiability. When all variables have been eliminated, the satisfiability test is then reduced to a simple test on a trivial CNF.
The second approach appeals to sound and complete inference rules, applying them successively until either a contradiction is found (unsatisfiable CNF) or until the CNF is closed under these rules without finding a contradiction (satisfiable CNF). The third approach is based on systematic search in the space of truth assignments, and is marked by its modest space requirements. The last approach we will discuss is based on combining search and inference, leading to algorithms that currently underly most modern complete SAT solvers. We start in the next section by establishing some technical preliminaries that will be used throughout the chapter. We will follow by a treatment of algorithms that are based on existential quantification in Section 3.3 and then algorithms based on inference rules in Section 3.4. Algorithms based on search are treated in Section 3.5, while those based on the combination of search and inference are treated in Section 3.6. Note that some of the algorithms presented here could fall into more than one class, depending on the viewpoint used. Hence, the classification presented in Sections 3.3-3.6 is only one of the many possibilities.", "title": "" }, { "docid": "83e5f62d7f091260d4ae91c2d8f72d3d", "text": "Document recognition and retrieval technologies complement one another, providing improved access to increasingly large document collections. While recognition and retrieval of textual information is fairly mature, with wide-spread availability of optical character recognition and text-based search engines, recognition and retrieval of graphics such as images, figures, tables, diagrams, and mathematical expressions are in comparatively early stages of research. This paper surveys the state of the art in recognition and retrieval of mathematical expressions, organized around four key problems in math retrieval (query construction, normalization, indexing, and relevance feedback), and four key problems in math recognition (detecting expressions, detecting and classifying symbols, analyzing symbol layout, and constructing a representation of meaning). Of special interest is the machine learning problem of jointly optimizing the component algorithms in a math recognition system, and developing effective indexing, retrieval and relevance feedback algorithms for math retrieval. Another important open problem is developing user interfaces that seamlessly integrate recognition and retrieval. Activity in these important research areas is increasing, in part because math notation provides an excellent domain for studying problems common to many document and graphics recognition and retrieval applications, and also because mature applications will likely provide substantial benefits for education, research, and mathematical literacy.", "title": "" }, { "docid": "cc55fa9990cada5a26079251f9155eeb", "text": "Despite the tremendous concern in the insurance industry over insurance fraud by customers, the federal Insurance Fraud Prevention Act primarily targets internal fraud, or insurer fraud, in which criminal acts such as embezzlement could trigger an insurer’s insolvency, rather than fraud perpetrated by policyholders such as filing false or inflated claims—insurance fraud. Fraud committed against insurers by executives and employees is potentially one of the costliest issues facing the industry and attracts increasing attention from regulators, legislators, and the industry. One book includes reports on some 140 insurance executives convicted of major fraud in recent years. 
This study investigates whether insurers’ weapons against insurance fraud are also used effectively to combat insurer fraud. Several variables are tested—characteristics of perpetrators, schemes employed, and types of detection and investigation techniques utilized—to compare the characteristics of insurer fraud with those of insurance fraud and also with those in non-insurance industries. A detailed survey of 8,000 members of the Association of Certified Fraud Examiners provides the database; chi-square statistics, the Median (Brown-Mood) test, and the Kruskal-Wallis test were used to measure for significant differences. Most of the authors’ expectations were supported by the analysis, but some surprises were found, such as the relative ineffectiveness of insurer internal control systems at identifying employee fraud. Internal whistleblowing also was not as prevalent in the insurance industry as in other organizations. Insurers were more likely to prosecute their employees for fraud than were other industries, however.", "title": "" }, { "docid": "47897fc364551338fcaee76d71568e2e", "text": "As Internet traffic continues to grow in size and complexity, it has become an increasingly challenging task to understand behavior patterns of end-hosts and network applications. This paper presents a novel approach based on behavioral graph analysis to study the behavior similarity of Internet end-hosts. Specifically, we use bipartite graphs to model host communications from network traffic and build one-mode projections of bipartite graphs for discovering social-behavior similarity of end-hosts. By applying simple and efficient clustering algorithms on the similarity matrices and clustering coefficient of one-mode projection graphs, we perform network-aware clustering of end-hosts in the same network prefixes into different end-host behavior clusters and discover inherent clustered groups of Internet applications. Our experiment results based on real datasets show that end-host and application behavior clusters exhibit distinct traffic characteristics that provide improved interpretations on Internet traffic. Finally, we demonstrate the practical benefits of exploring behavior similarity in profiling network behaviors, discovering emerging network applications, and detecting anomalous traffic patterns.", "title": "" }, { "docid": "3fd6d0ef0240b2fdd2a9c76a023ecab6", "text": "In this work, an exponential spline method is developed and analyzed for approximating solutions of calculus of variations problems. The method uses a spline interpolant, which is constructed from an exponential spline. It is proved to be second-order convergent. Finally, some illustrative examples are included to demonstrate the applicability of the new technique. Numerical results confirm the order of convergence predicted by the analysis.", "title": "" }, { "docid": "018018f9fa28cd4c24a1f3e6f29cb63e", "text": "In recent years, incidents affecting food quality & safety have occurred frequently, and more and more people have begun to pay close attention to food quality & safety and to encourage food producers to be able to trace the origin of ingredients and the process of production along the supply chain. With the development of IT, more and more practice has shown that the supply chain of agricultural products should rely on IT. The use of IT directly determines the degree of agricultural informatization and the efficiency of agricultural supply chain management.
In this paper, after introducing the meaning and characteristics of supply chain management and agricultural supply chain management, the attributes of information flow throughout the agricultural supply process and the technological attributes of the Internet of Things are analyzed; finally, the design method and architecture of an integrated information platform for agricultural supply chain management based on the Internet of Things are discussed in detail.", "title": "" }, { "docid": "44d5f8816285d81a731761ad00157e6f", "text": "Gunshot detection traditionally has been a task performed with acoustic signal processing. While this type of detection can give cities, civil services and training institutes a method to identify specific locations of gunshots, the nature of acoustic detection may not provide the fine-grained detection accuracy and sufficient metrics for performance assessment. If, however, you examine a different signature of a gunshot, the recoil, detection of the same event with accelerometers can provide person- and firearm-model-level detection abilities. The functionality of accelerometer sensors in wrist-worn devices has increased significantly in recent times. From fitness trackers to smart watches, accelerometers have been put to use in various activity recognition and detection applications. In this paper, we design an approach that is able to account for the variations in firearm-generated recoil, as recorded by a wrist-worn accelerometer, and helps categorize the impulse forces. Our experiments show that not only can wrist-worn accelerometers detect the differences among handgun, rifle and shotgun gunshots, but the individual models of firearms can be distinguished from each other. The application of this framework could be extended in the future to include real-time detection embedded in smart devices to assist in firearms training and also help in crime detection and prosecution.", "title": "" }, { "docid": "c323c25c05f2461fb0c0ef7cbf655eb4", "text": "While deep convolutional neural networks (CNN) have been successfully applied for 2D image analysis, it is still challenging to apply them to 3D anisotropic volumes, especially when the within-slice resolution is much higher than the between-slice resolution and when the amount of 3D volumes is relatively small. On one hand, direct learning of a CNN with 3D convolution kernels suffers from the lack of data and likely ends up with poor generalization; insufficient GPU memory limits the model size or representational power. On the other hand, applying a 2D CNN with generalizable features to 2D slices ignores between-slice information. Coupling a 2D network with an LSTM to further handle the between-slice information is not optimal due to the difficulty in LSTM learning. To overcome the above challenges, we propose a 3D Anisotropic Hybrid Network (AH-Net) that transfers convolutional features learned from 2D images to 3D anisotropic volumes. Such a transfer inherits the desired strong generalization capability for within-slice information while naturally exploiting between-slice information for more effective modelling. The focal loss is further utilized for more effective end-to-end learning.
We experiment with the proposed 3D AH-Net on two different medical image analysis tasks, namely lesion detection from a Digital Breast Tomosynthesis volume, and liver and liver tumor segmentation from a Computed Tomography volume and obtain the state-of-the-art results.", "title": "" }, { "docid": "241cd26632a394e5d922be12ca875fe1", "text": "Little is known about whether personality characteristics influence initial attraction. Because adult attachment differences influence a broad range of relationship processes, the authors examined their role in 3 experimental attraction studies. The authors tested four major attraction hypotheses--self similarity, ideal-self similarity, complementarity, and attachment security--and examined both actual and perceptual factors. Replicated analyses across samples, designs, and manipulations showed that actual security and self similarity predicted attraction. With regard to perceptual factors, ideal similarity, self similarity, and security all were significant predictors. Whereas perceptual ideal and self similarity had incremental predictive power, perceptual security's effects were subsumed by perceptual ideal similarity. Perceptual self similarity fully mediated actual attachment similarity effects, whereas ideal similarity was only a partial mediator.", "title": "" }, { "docid": "e9199c0f3b08979c03e0c82399ac7160", "text": "Background: ADHD can have a negative impact on occupational performance of a child, interfering with ADLs, IADLs, education, leisure, and play. However, at this time, a cumulative review of evidence based occupational therapy interventions for children with ADHD do not exist. Purpose: The purpose of this scholarly project was to complete a systematic review of what occupational therapy interventions are effective for school-aged children with ADHD. Methods: An extensive systematic review for level T, II, or II research articles was completed using CINAHL and OT Search. Inclusion, exclusion, subject terms, and words or phrases were determined with assistance from the librarian at the Harley French Library at the University of North Dakota. Results: The systematic review yielded !3 evidence-based articles with interventions related to cognition, motor, sensory, and play. Upon completion of the systematic review, articles were categorized based upon an initial literature search understanding common occupational therapy interventions for children with ADHD. Specifically, level I, II, and III occupational therapy research is available for interventions addressing cognition, motor, sensory, and play. Conclusion: Implications for practice and education include the need for foundational and continuing education opportunities reflecting evidenced-based interventions for ADHD. Further research is needed to solidify best practices for children with ADHD including more rigorous studies across interventions.", "title": "" }, { "docid": "347e7b80b2b0b5cd5f0736d62fa022ae", "text": "This article presents the results of an interview study on how people perceive and play social network games on Facebook. During recent years, social games have become the biggest genre of games if measured by the number of registered users. These games are designed to cater for large audiences in their design principles and values, a free-to-play revenue model and social network integration that make them easily approachable and playable with friends. 
Although these games have made the headlines and have been seen to revolutionize the game industry, we still lack an understanding of how people perceive and play them. For this article, we interviewed 18 Finnish Facebook users from a larger questionnaire respondent pool of 134 people. This study focuses on a user-centric approach, highlighting the emergent experiences and the meaning-making of social games players. Our findings reveal that social games are usually regarded as single player games with a social twist, and as suffering partly from their design characteristics, while still providing a wide spectrum of playful experiences for different needs. The free-to-play revenue model provides an easy access to social games, but people disagreed with paying for additional content for several reasons.", "title": "" }, { "docid": "1aa39f265d476fca4c54af341b6f2bde", "text": "Explaining the output of a complicated machine learning model like a deep neural network (DNN) is a central challenge in machine learning. Several proposed local explanation methods address this issue by identifying what dimensions of a single input are most responsible for a DNN’s output. The goal of this work is to assess the sensitivity of local explanations to DNN parameter values. Somewhat surprisingly, we find that DNNs with randomly-initialized weights produce explanations that are both visually and quantitatively similar to those produced by DNNs with learned weights. Our conjecture is that this phenomenon occurs because these explanations are dominated by the lower level features of a DNN, and that a DNN’s architecture provides a strong prior which significantly affects the representations learned at these lower layers.", "title": "" }, { "docid": "3d2e170b4cd31d0e1a28c968f0b75cf6", "text": "Fog Computing is a new variety of the cloud computing paradigm that brings virtualized cloud services to the edge of the network to control the devices in the IoT. We present a pattern for fog computing which describes its architecture, including its computing, storage and networking services. Fog computing is implemented as an intermediate platform between end devices and cloud computing data centers. The recent popularity of the Internet of Things (IoT) has made fog computing a necessity to handle a variety of devices. It has been recognized as an important platform to provide efficient, location aware, close to the edge, cloud services. Our model includes most of the functionality found in current fog architectures.", "title": "" }, { "docid": "4b4dc34feba176a30bced5b7dbe4fe7b", "text": "The Bitcoin ecosystem has suffered frequent thefts and losses affecting both businesses and individuals. The insider threat faced by a business is particularly serious. Due to the irreversibility, automation, and pseudonymity of transactions, Bitcoin currently lacks support for the sophisticated internal control systems deployed by modern businesses to deter fraud. We seek to bridge this gap. We show that a thresholdsignature scheme compatible with Bitcoin’s ECDSA signatures can be used to enforce complex yet useful security policies including: (1) shared control of a wallet, (2) secure bookkeeping, a Bitcoin-specific form of accountability, (3) secure delegation of authority, and (4) two-factor security for personal wallets.", "title": "" } ]
scidocsrr
5bec50991121b27bce0aabaecc808fe4
Improving Query Expansion Using WordNet
[ { "docid": "28b2bbcfb8960ff40f2fe456a5b00729", "text": "This paper presents an adaptation of Lesk’s dictionary– based word sense disambiguation algorithm. Rather than using a standard dictionary as the source of glosses for our approach, the lexical database WordNet is employed. This provides a rich hierarchy of semantic relations that our algorithm can exploit. This method is evaluated using the English lexical sample data from the Senseval-2 word sense disambiguation exercise, and attains an overall accuracy of 32%. This represents a significant improvement over the 16% and 23% accuracy attained by variations of the Lesk algorithm used as benchmarks during the Senseval-2 comparative exercise among word sense disambiguation", "title": "" } ]
[ { "docid": "ca561ab257c495cd2e3e26db0d78cab7", "text": "Congenital cystic eye (anophthalmia with cyst) is an extremely rare anomaly discovered at birth with few reported cases in the literature, resulting from partial or complete failure during invagination of the primary optic vesicle during fetal development. Herein we present the radiographic, ultrasound, and magnetic resonance imaging findings of a unique case of congenital cystic eye associated with dermal appendages and advanced intracranial congenital anomalies in a 3-month-old boy.", "title": "" }, { "docid": "b974a8d8b298bfde540abc451f76bf90", "text": "This chapter provides information on commonly used equipment in industrial mammalian cell culture, with an emphasis on bioreactors. The actual equipment used in the cell culture process can vary from one company to another, but the main steps remain the same. The process involves expansion of cells in seed train and inoculation train processes followed by cultivation of cells in a production bioreactor. Process and equipment options for each stage of the cell culture process are introduced and examples are provided. Finally, the use of disposables during seed train and cell culture production is discussed.", "title": "" }, { "docid": "24e10d8e12d8b3c618f88f1f0d33985d", "text": "W -algebras of finite type are certain finitely generated associative algebras closely related to universal enveloping algebras of semisimple Lie algebras. In this paper we prove a conjecture of Premet that gives an almost complete classification of finite dimensional irreducible modules for W -algebras. Also we get some partial results towards a conjecture by Ginzburg on their finite dimensional bimodules.", "title": "" }, { "docid": "69ad93c7b6224321d69456c23a4185ce", "text": "Modeling fashion compatibility is challenging due to its complexity and subjectivity. Existing work focuses on predicting compatibility between product images (e.g. an image containing a t-shirt and an image containing a pair of jeans). However, these approaches ignore real-world ‘scene’ images (e.g. selfies); such images are hard to deal with due to their complexity, clutter, variations in lighting and pose (etc.) but on the other hand could potentially provide key context (e.g. the user’s body type, or the season) for making more accurate recommendations. In this work, we propose a new task called ‘Complete the Look’, which seeks to recommend visually compatible products based on scene images. We design an approach to extract training data for this task, and propose a novel way to learn the scene-product compatibility from fashion or interior design images. Our approach measures compatibility both globally and locally via CNNs and attention mechanisms. Extensive experiments show that our method achieves significant performance gains over alternative systems. Human evaluation and qualitative analysis are also conducted to further understand model behavior. We hope this work could lead to useful applications which link large corpora of real-world scenes with shoppable products.", "title": "" }, { "docid": "82afc38c66581ca44787fdff62fd479e", "text": "Recent leading approaches to semantic segmentation rely on deep convolutional networks trained with human-annotated, pixel-level segmentation masks. Such pixel-accurate supervision demands expensive labeling effort and limits the performance of deep networks that usually benefit from more training data. 
In this paper, we propose a method that achieves competitive accuracy but only requires easily obtained bounding box annotations. The basic idea is to iterate between automatically generating region proposals and training convolutional networks. These two steps gradually recover segmentation masks for improving the networks, and vise versa. Our method, called \"BoxSup\", produces competitive results (e.g., 62.0% mAP for validation) supervised by boxes only, on par with strong baselines (e.g., 63.8% mAP) fully supervised by masks under the same setting. By leveraging a large amount of bounding boxes, BoxSup further yields state-of-the-art results on PASCAL VOC 2012 and PASCAL-CONTEXT [26].", "title": "" }, { "docid": "03a2b9ebdac78ca3a6c808f87f73c26b", "text": "OBJECTIVE\nPost-traumatic stress disorder (PTSD) has major public health significance. Evidence that PTSD may be associated with premature senescence (early or accelerated aging) would have major implications for quality of life and healthcare policy. We conducted a comprehensive review of published empirical studies relevant to early aging in PTSD.\n\n\nMETHOD\nOur search included the PubMed, PsycINFO, and PILOTS databases for empirical reports published since the year 2000 relevant to early senescence and PTSD, including: 1) biomarkers of senescence (leukocyte telomere length [LTL] and pro-inflammatory markers), 2) prevalence of senescence-associated medical conditions, and 3) mortality rates.\n\n\nRESULTS\nAll six studies examining LTL indicated reduced LTL in PTSD (pooled Cohen's d = 0.76). We also found consistent evidence of increased pro-inflammatory markers in PTSD (mean Cohen's ds), including C-reactive protein = 0.18, Interleukin-1 beta = 0.44, Interleukin-6 = 0.78, and tumor necrosis factor alpha = 0.81. The majority of reviewed studies also indicated increased medical comorbidity among several targeted conditions known to be associated with normal aging, including cardiovascular disease, type 2 diabetes mellitus, gastrointestinal ulcer disease, and dementia. We also found seven of 10 studies indicated PTSD to be associated with earlier mortality (average hazard ratio: 1.29).\n\n\nCONCLUSION\nIn short, evidence from multiple lines of investigation suggests that PTSD may be associated with a phenotype of accelerated senescence. Further research is critical to understand the nature of this association. There may be a need to re-conceptualize PTSD beyond the boundaries of mental illness, and instead as a full systemic disorder.", "title": "" }, { "docid": "6c3f80b453d51e364eca52656ed54e62", "text": "Despite substantial recent research activity related to continuous delivery and deployment (CD), there has not yet been a systematic, empirical study on how the practices often associated with continuous deployment have found their way into the broader software industry. This raises the question to what extent our knowledge of the area is dominated by the peculiarities of a small number of industrial leaders, such as Facebook. To address this issue, we conducted a mixed-method empirical study, consisting of a pre-study on literature, qualitative interviews with 20 software developers or release engineers with heterogeneous backgrounds, and a Web-based quantitative survey that attracted 187 complete responses. A major trend in the results of our study is that architectural issues are currently one of the main barriers for CD adoption. 
Further, feature toggles as an implementation technique for partial rollouts lead to unwanted complexity, and require research on better abstractions and modelling techniques for runtime variability. Finally, we conclude that practitioners are in need for more principled approaches to release decision making, e.g., which features to conduct A/B tests on, or which metrics to evaluate.", "title": "" }, { "docid": "34d8bd1dd1bbe263f04433a6bf7d1b29", "text": "algorithms for image processing and computer vision algorithms for image processing and computer vision exploring computer vision and image processing algorithms free ebooks algorithms for image processing and computer parallel algorithms for digital image processing computer algorithms for image processing and computer vision pdf algorithms for image processing and computer vision computer vision: algorithms and applications brown gpu algorithms for image processing and computer vision high-end computer vision algorithms image processing handbook of computer vision algorithms in image algebra the university of cs 4487/9587 algorithms for image analysis an analysis of rigid image alignment computer vision computer vision with matlab massachusetts institute of handbook of computer vision algorithms in image algebra tips and tricks for image processing and computer vision limitations of human vision what is computer vision algorithms for image processing and computer vision gbv algorithms for image processing and computer vision. 2nd computer vision for nanoscale imaging algorithms for image processing and computer vision a survey of distributed computer vision algorithms computer vision: algorithms and applications sci home algorithms for image processing and computer vision ebook engineering of computer vision algorithms using algorithms for image processing and computer vision by j real-time algorithms: prom signal processing to computer expectationmaximization algorithms for image processing automated techniques for detection and recognition of algorithms for image processing and computer vision dictionary of computer vision and image processing implementing video image processing algorithms on fpga open source libraries for image processing computer vision and image processing: a practical approach computer vision i algorithms and applications: image algorithms for image processing and computer vision algorithms for image processing and computer vision j. r", "title": "" }, { "docid": "f3bda47434c649f6b8fad89199ff5987", "text": "Structural health monitoring (SHM) of civil infrastructure using wireless smart sensor networks (WSSNs) has received significant public attention in recent years. The benefits of WSSNs are that they are low-cost, easy to install, and provide effective data management via on-board computation. This paper reports on the deployment and evaluation of a state-of-the-art WSSN on the new Jindo Bridge, a cable-stayed bridge in South Korea with a 344-m main span and two 70-m side spans. The central components of the WSSN deployment are the Imote2 smart sensor platforms, a custom-designed multimetric sensor boards, base stations, and software provided by the Illinois Structural Health Monitoring Project (ISHMP) Services Toolsuite. In total, 70 sensor nodes and two base stations have been deployed to monitor the bridge using an autonomous SHM application with excessive wind and vibration triggering the system to initiate monitoring. 
Additionally, the performance of the system is evaluated in terms of hardware durability, software stability, power consumption and energy harvesting capabilities. The Jindo Bridge SHM system constitutes the largest deployment of wireless smart sensors for civil infrastructure monitoring to date. This deployment demonstrates the strong potential of WSSNs for monitoring of large scale civil infrastructure.", "title": "" }, { "docid": "0ecb65da4effb562bfa29d06769b1a4c", "text": "A new algorithm for testing primality is presented. The algorithm is distinguishable from the lovely algorithms of Solovay and Strassen [36], Miller [27] and Rabin [32] in that its assertions of primality are certain (i.e., provable from Peano's axioms) rather than dependent on unproven hypothesis (Miller) or probability (Solovay-Strassen, Rabin). An argument is presented which suggests that the algorithm runs within time c1·ln(n)^(c2·ln(ln(ln(n)))) where n is the input, and c1, c2 are constants independent of n. Unfortunately no rigorous proof of this running time is yet available.", "title": "" }, { "docid": "197dfd6fdcb600c2dec6aefcbf8dfd1f", "text": "In this paper, we propose a formalized method to improve the performance of Contextual Anomaly Detection (CAD) for detecting stock market manipulation using Big Data techniques. The method aims to improve the CAD algorithm by capturing the expected behaviour of stocks through sentiment analysis of tweets about stocks. The extracted insights are aggregated per day for each stock and transformed to a time series. The time series is used to eliminate false positives from anomalies that are detected by CAD. We present a case study and explore developing sentiment analysis models to improve anomaly detection in the stock market. The experimental results confirm the proposed method is effective in improving CAD through removing irrelevant anomalies by correctly identifying 28% of false positives.", "title": "" }, { "docid": "e0ec89c103aedb1d04fbc5892df288a8", "text": "Clinicians spend a significant amount of time inputting free-form textual notes into Electronic Health Records (EHR) systems. Much of this documentation work is seen as a burden, reducing time spent with patients and contributing to clinician burnout. With the aspiration of AI-assisted note-writing, we propose a new language modeling task predicting the content of notes conditioned on past data from a patient’s medical record, including patient demographics, labs, medications, and past notes. We train generative models using the public, de-identified MIMIC-III dataset and compare generated notes with those in the dataset on multiple measures.
We find that much of the content can be predicted, and that many common templates found in notes can be learned. We discuss how such models can be useful in supporting assistive note-writing features such as error-detection and auto-complete.", "title": "" }, { "docid": "055e41fd6ace430ea9593a30e3dd02d2", "text": "Every day we are exposed to different ideas, or memes, competing with each other for our attention. Previous research explained popularity and persistence heterogeneity of memes by assuming them in competition for limited attention resources, distributed in a heterogeneous social network. Little has been said about what characteristics make a specific meme more likely to be successful. We propose a similarity-based explanation: memes with higher similarity to other memes have a significant disadvantage in their potential popularity. We employ a meme similarity measure based on semantic text analysis and computer vision to prove that a meme is more likely to be successful and to thrive if its characteristics make it unique. Our results show that indeed successful memes are located in the periphery of the meme similarity space and that our similarity measure is a promising predictor of a meme success.", "title": "" }, { "docid": "d70235bc7fb94e1e3d1d301f8d1835cb", "text": "How does the brain orchestrate perceptions, thoughts and actions from the spiking activity of its neurons? Early single-neuron recording research treated spike pattern variability as noise that needed to be averaged out to reveal the brain's representation of invariant input. Another view is that variability of spikes is centrally coordinated and that this brain-generated ensemble pattern in cortical structures is itself a potential source of cognition. Large-scale recordings from neuronal ensembles now offer the opportunity to test these competing theoretical frameworks. Currently, wire and micro-machined silicon electrode arrays can record from large numbers of neurons and monitor local neural circuits at work. Achieving the full potential of massively parallel neuronal recordings, however, will require further development of the neuron–electrode interface, automated and efficient spike-sorting algorithms for effective isolation and identification of single neurons, and new mathematical insights for the analysis of network properties.", "title": "" }, { "docid": "e14f1292fd3d0f744f041219217f1e15", "text": "Previous research highlights how adept people are at emotional recovery after rejection, but less research has examined factors that can prevent full recovery. In five studies, we investigate how changing one's self-definition in response to rejection causes more lasting damage. We demonstrate that people who endorse an entity theory of personality (i.e., personality cannot be changed) report alterations in their self-definitions when reflecting on past rejections (Studies 1, 2, and 3) or imagining novel rejection experiences (Studies 4 and 5). Further, these changes in self-definition hinder post-rejection recovery, causing individuals to feel haunted by their past, that is, to fear the recurrence of rejection and to experience lingering negative affect from the rejection. Thus, beliefs that prompt people to tie experiences of rejection to self-definition cause rejection's impact to linger.", "title": "" }, { "docid": "48109c78ad73b1973be3f20a7e6acf26", "text": "Clustering by integrating multiview representations has become a crucial issue for knowledge discovery in heterogeneous environments. 
However, most prior approaches assume that the multiple representations share the same dimension, limiting their applicability to homogeneous environments. In this paper, we present a novel tensor-based framework for integrating heterogeneous multiview data in the context of spectral clustering. Our framework includes two novel formulations; that is multiview clustering based on the integration of the Frobenius-norm objective function (MC-FR-OI) and that based on matrix integration in the Frobenius-norm objective function (MC-FR-MI). We show that the solutions for both formulations can be computed by tensor decompositions. We evaluated our methods on synthetic data and two real-world data sets in comparison with baseline methods. Experimental results demonstrate that the proposed formulations are effective in integrating multiview data in heterogeneous environments.", "title": "" }, { "docid": "90dc36628f9262157ea8722d82830852", "text": "Inexpensive fixed wing UAV are increasingly useful in remote sensing operations. They are a cheaper alternative to manned vehicles, and are ideally suited for dangerous or monotonous missions that would be inadvisable for a human pilot. Groups of UAV are of special interest for their abilities to coordinate simultaneous coverage of large areas, or cooperate to achieve goals such as mapping. Cooperation and coordination in UAV groups also allows increasingly large numbers of aircraft to be operated by a single user. Specific applications under consideration for groups of cooperating UAV are border patrol, search and rescue, surveillance, communications relaying, and mapping of hostile territory. The capabilities of small UAV continue to grow with advances in wireless communications and computing power. Accordingly, research topics in cooperative UAV control include efficient computer vision for real-time navigation and networked computing and communication strategies for distributed control, as well as traditional aircraft-related topics such as collision avoidance and formation flight. Emerging results in cooperative UAV control are presented via discussion of these topics, including particular requirements, challenges, and some promising strategies relating to each area. Case studies from a variety of programs highlight specific solutions and recent results, ranging from pure simulation to control of multiple UAV. This wide range of case studies serves as an overview of current problems of Interest, and does not present every relevant result.", "title": "" }, { "docid": "0e98010ded0712ab0e2f78af0a476c86", "text": "This paper presents a system that uses symbolic representations of audio concepts as words for the descriptions of audio tracks, that enable it to go beyond the state of the art, which is audio event classification of a small number of audio classes in constrained settings, to large-scale classification in the wild. These audio words might be less meaningful for an annotator but they are descriptive for computer algorithms. We devise a random-forest vocabulary learning method with an audio word weighting scheme based on TF-IDF and TD-IDD, so as to combine the computational simplicity and accurate multi-class classification of the random forest with the data-driven discriminative power of the TF-IDF/TD-IDD methods. 
The proposed random forest clustering with text-retrieval methods significantly outperforms two state-of-the-art methods on the dry-run set and the full set of the TRECVID MED 2010 dataset.", "title": "" }, { "docid": "4023c95464a842277e4dc62b117de8d0", "text": "Many complex spike cells in the hippocampus of the freely moving rat have as their primary correlate the animal's location in an environment (place cells). In contrast, the hippocampal electroencephalograph theta pattern of rhythmical waves (7-12 Hz) is better correlated with a class of movements that change the rat's location in an environment. During movement through the place field, the complex spike cells often fire in a bursting pattern with an interburst frequency in the same range as the concurrent electroencephalograph theta. The present study examined the phase of the theta wave at which the place cells fired. It was found that firing consistently began at a particular phase as the rat entered the field but then shifted in a systematic way during traversal of the field, moving progressively forward on each theta cycle. This precession of the phase ranged from 100 degrees to 355 degrees in different cells. The effect appeared to be due to the fact that individual cells had a higher interburst rate than the theta frequency. The phase was highly correlated with spatial location and less well correlated with temporal aspects of behavior, such as the time after place field entry. These results have implications for several aspects of hippocampal function. First, by using the phase relationship as well as the firing rate, place cells can improve the accuracy of place coding. Second, the characteristics of the phase shift constrain the models that define the construction of place fields. Third, the results restrict the temporal and spatial circumstances under which synapses in the hippocampus could be modified.", "title": "" } ]
scidocsrr
759297e043f2ea5a094c905059075aa0
A Survey on Control of Hydraulic Robotic Manipulators With Projection to Future Trends
[ { "docid": "457ea53f0a303e8eba8847422ef61e5a", "text": "Tele-operated hydraulic underwater manipulators are commonly used to perform remote underwater intervention tasks such as weld inspection or mating of connectors. Automation of these tasks to use tele-assistance requires a suitable hybrid position/force control scheme, to specify simultaneously the robot motion and contact forces. Classical linear control does not allow for the highly non-linear and time varying robot dynamics in this situation. Adequate control performance requires more advanced controllers. This paper presents and compares two different advanced hybrid control algorithms. The first is based on a modified Variable Structure Control (VSC-HF) with a virtual environment, and the second uses a multivariable self-tuning adaptive controller. A direct comparison of the two proposed control schemes is performed in simulation, using a model of the dynamics of a hydraulic underwater manipulator (a Slingsby TA9) in contact with a surface. These comparisons look at the performance of the controllers under a wide variety of operating conditions, including different environment stiffnesses, positions of the robot and", "title": "" } ]
[ { "docid": "bfcb1fd882a328daab503a7dd6b6d0a6", "text": "The notions of disintegration and Bayesian inversion are fundamental in conditional probability theory. They produce channels, as conditional probabilities, from a joint state, or from an already given channel (in opposite direction). These notions exist in the literature, in concrete situations, but are presented here in abstract graphical formulations. The resulting abstract descriptions are used for proving basic results in conditional probability theory. The existence of disintegration and Bayesian inversion is discussed for discrete probability, and also for measure-theoretic probability — via standard Borel spaces and via likelihoods. Finally, the usefulness of disintegration and Bayesian inversion is illustrated in several non-trivial examples.", "title": "" }, { "docid": "e08e42c8f146e6a74213643e306446c6", "text": "Disclaimer The opinions and positions expressed in this practice guide are the authors' and do not necessarily represent the opinions and positions of the Institute of Education Sciences or the U.S. Department of Education. This practice guide should be reviewed and applied according to the specific needs of the educators and education agencies using it and with full realization that it represents only one approach that might be taken, based on the research that was available at the time of publication. This practice guide should be used as a tool to assist in decision-making rather than as a \" cookbook. \" Any references within the document to specific education products are illustrative and do not imply endorsement of these products to the exclusion of other products that are not referenced. Alternative Formats On request, this publication can be made available in alternative formats, such as Braille, large print, audiotape, or computer diskette. For more information, call the Alternative Format Center at (202) 205-8113.", "title": "" }, { "docid": "548ca7ecd778bc64e4a3812acd73dcfb", "text": "Inference algorithms of latent Dirichlet allocation (LDA), either for small or big data, can be broadly categorized into expectation-maximization (EM), variational Bayes (VB) and collapsed Gibbs sampling (GS). Looking for a unified understanding of these different inference algorithms is currently an important open problem. In this paper, we revisit these three algorithms from the entropy perspective, and show that EM can achieve the best predictive perplexity (a standard performance metric for LDA accuracy) by minimizing directly the cross entropy between the observed word distribution and LDA's predictive distribution. Moreover, EM can change the entropy of LDA's predictive distribution through tuning priors of LDA, such as the Dirichlet hyperparameters and the number of topics, to minimize the cross entropy with the observed word distribution. Finally, we propose the adaptive EM (AEM) algorithm that converges faster and more accurate than the current state-of-the-art SparseLDA [20] and AliasLDA [12] from small to big data and LDA models. The core idea is that the number of active topics, measured by the residuals between E-steps at successive iterations, decreases significantly, leading to the amortized σ(1) time complexity in terms of the number of topics. The open source code of AEM is available at GitHub.", "title": "" }, { "docid": "5d91c93728632586a63634c941420c64", "text": "A new method for analyzing analog single-event transient (ASET) data has been developed. 
The approach allows for quantitative error calculations, given device failure thresholds. The method is described and employed in the analysis of an OP-27 op-amp.", "title": "" }, { "docid": "8d0b7e0315d0e8a7eba9876d7c08be69", "text": "We report on a case of conjoined twinning (CT) consistent with fusion of two embryos followed by resorption of the cranial half of one of them, resulting in a normal male baby with the lower half of a male parasitic twin fused to his chest. Fluorescent in situ hybridization (FISH) studies suggested that the parasitic twin was male, and DNA typing studies demonstrated dizygosity. Although incomplete fission is the usual explanation for conjoined twins, the unusual perpendicular orientation of the parasite to the autosite supports a mechanism observed in mares in which early fusion of two embryos is followed by resorption due to compromised embryonic polarity.", "title": "" }, { "docid": "6c1d3eb9d3e39b25f32b77942b04d165", "text": "The aim of this study is to investigate the factors influencing the consumer acceptance of mobile banking in Bangladesh. The demographic, attitudinal, and behavioural characteristics of mobile bank users were examined. 292 respondents from seven major mobile financial service users of different mobile network operators participated in the consumer survey. Infrastructural facility, selfcontrol, social influence, perceived risk, ease of use, need for interaction, perceived usefulness, and customer service were found to influence consumer attitudes towards mobile banking services. The infrastructural facility of updated user friendly technology and its availability was found to be the most important factor that motivated consumers’ attitudes in Bangladesh towards mobile banking. The sample size was not necessarily representative of the Bangladeshi population as a whole as it ignored large rural population. This study identified two additional factors i.e. infrastructural facility and customer service relevant to mobile banking that were absent in previous researches. By addressing the concerns of and benefits sought by the consumers, marketers can create positive attractions and policy makers can set regulations for the expansion of mobile banking services in Bangladesh. This study offers an insight into mobile banking in Bangladesh focusing influencing factors, which has not previously been investigated.", "title": "" }, { "docid": "4c46d2cbc52dbc2780e002651d88f3a7", "text": "Processing in memory (PIM) implemented via 3D die stacking has been recently proposed to reduce the widening gap between processor and memory performance. By moving computation that demands high memory bandwidth to the base logic die of a 3D memory stack, PIM promises significant improvements in energy efficiency. However, the vision of PIM implemented via 3D die stacking could potentially be derailed if the processor(s) raise the stack’s temperature to unacceptable levels. In this paper, we study the thermal constraints for PIM across different processor organizations and cooling solutions and show the range of designs that are viable under different conditions. We also demonstrate that PIM is feasible even with low-end, fanless cooling solutions. 
We believe these results help alleviate PIM thermal feasibility concerns and identify viable design points, thereby encouraging further exploration and research in novel PIM architectures, technologies, and use cases.", "title": "" }, { "docid": "def6cd29f4679acdc7d944d9a7e734e4", "text": "Question Answering (QA) is one of the most challenging and crucial tasks in Natural Language Processing (NLP) that has a wide range of applications in various domains, such as information retrieval and entity extraction. Traditional methods involve linguistically based NLP techniques, and recent researchers apply Deep Learning on this task and have achieved promising result. In this paper, we combined Dynamic Coattention Network (DCN) [1] and bilateral multiperspective matching (BiMPM) model [2], achieved an F1 score of 63.8% and exact match (EM) of 52.3% on test set.", "title": "" }, { "docid": "1e55f802a805ca93dd02bf5709aa4e4b", "text": "BACKGROUND\nThe recombinant BCG ΔureC::hly (rBCG) vaccine candidate induces improved protection against tuberculosis over parental BCG (pBCG) in preclinical studies and has successfully completed a phase 2a clinical trial. However, the mechanisms responsible for the superior vaccine efficacy of rBCG are still incompletely understood. Here, we investigated the underlying biological mechanisms elicited by the rBCG vaccine candidate relevant to its protective efficacy.\n\n\nMETHODS\nTHP-1 macrophages were infected with pBCG or rBCG, and inflammasome activation and autophagy were evaluated. In addition, mice were vaccinated with pBCG or rBCG, and gene expression in the draining lymph nodes was analyzed by microarray at day 1 after vaccination.\n\n\nRESULTS\nBCG-derived DNA was detected in the cytosol of rBCG-infected macrophages. rBCG infection was associated with enhanced absent in melanoma 2 (AIM2) inflammasome activation, increased activation of caspases and production of interleukin (IL)-1β and IL-18, as well as induction of AIM2-dependent and stimulator of interferon genes (STING)-dependent autophagy. Similarly, mice vaccinated with rBCG showed early increased expression of Il-1β, Il-18, and Tmem173 (transmembrane protein 173; also known as STING).\n\n\nCONCLUSIONS\nrBCG stimulates AIM2 inflammasome activation and autophagy, suggesting that these cell-autonomous functions should be exploited for improved vaccine design.", "title": "" }, { "docid": "a05b6f2671e32f1f6f2d5b5f9d8200dd", "text": "This article analyzes cloaked websites, which are sites published by individuals or groups who conceal authorship in order to disguise deliberately a hidden political agenda. Drawing on the insights of critical theory and the Frankfurt School, this article examines the way in which cloaked websites conceal a variety of political agendas from a range of perspectives. Of particular interest here are cloaked white supremacist sites that disguise cyber-racism. The use of cloaked websites to further political ends raises important questions about knowledge production and epistemology in the digital era. These cloaked sites emerge within a social and political context in which it is increasingly difficult to parse fact from propaganda, and this is a particularly pernicious feature when it comes to the cyber-racism of cloaked white supremacist sites. 
The article concludes by calling for the importance of critical, situated political thinking in the evaluation of cloaked websites.", "title": "" }, { "docid": "a4d75c6c95d151d83396d5a88594de51", "text": "In this paper a novel aerial manipulation system is proposed. The mechanical structure of the system, the number of thrusters and their geometry will be derived from technical optimization problems. The aforementioned problems are defined by taking into consideration the desired actuation forces and torques applied to the end-effector of the system. The framework of the proposed system is designed in a CAD Package in order to evaluate the system parameter values. Following this, the kinematic and dynamic models are developed and an adaptive backstepping controller is designed aiming to control the exact position and orientation of the end-effector in the Cartesian space. Finally, the performance of the system is demonstrated through a simulation study, where a manipulation task scenario is investigated.", "title": "" }, { "docid": "612cd1b5883fdb09dd9ace00174eb4fa", "text": "Localization in indoor environment poses a fundamental challenge in ubiquitous computing compared to its well-established GPS-based outdoor environment counterpart. This study investigated the feasibility of a WiFi-based indoor positioning system to localize elderly in an elderly center focusing on their orientation. The fingerprinting method of Received Signal Strength Indication (RSSI) from WiFi Access Points (AP) has been employed to discriminate and uniquely identify a position. The discrimination process of the reference points with its orientation have been analyzed with 0.9, 1.8, and 2.7 meter resolution. The experimental result shows that the WiFi-based RSSI fingerprinting method can discriminate the location and orientation of a user within 1.8 meter resolution.", "title": "" }, { "docid": "6ba73f29a71cda57450f1838ef012356", "text": "Addressing the challenges of feeding the burgeoning world population with limited resources requires innovation in sustainable, efficient farming. The practice of precision agriculture offers many benefits towards addressing these challenges, such as improved yield and efficient use of such resources as water, fertilizer and pesticides. We describe the design and development of a light-weight, multi-spectral 3D imaging device that can be used for automated monitoring in precision agriculture. The sensor suite consists of a laser range scanner, multi-spectral cameras, a thermal imaging camera, and navigational sensors. We present techniques to extract four key data products - plant morphology, canopy volume, leaf area index, and fruit counts - using the sensor suite. We demonstrate its use with two systems: multi-rotor micro aerial vehicles and on a human-carried, shoulder-mounted harness. We show results of field experiments conducted in collaboration with growers and agronomists in vineyards, apple orchards and orange groves.", "title": "" }, { "docid": "83d65487b8a929ef771e10ccabc61baf", "text": "There has been an increasing interest in big data and big data security with the development of network technology and cloud computing. However, big data is not an entirely new technology but an extension of data mining. In this paper, we describe the background of big data, data mining and big data features, and propose attribute selection methodology for protecting the value of big data. 
Extracting valuable information is the main goal of analyzing big data which need to be protected. Therefore, relevance between attributes of a dataset is a very important element for big data analysis. We focus on two things. Firstly, attribute relevance in big data is a key element for extracting information. In this perspective, we studied on how to secure a big data through protecting valuable information inside. Secondly, it is impossible to protect all big data and its attributes. We consider big data as a single object which has its own attributes. We assume that a attribute which have a higher relevance is more important than other attributes.", "title": "" }, { "docid": "badb04b676d3dab31024e8033fc8aec4", "text": "Review was undertaken from February 1969 to January 1998 at the State forensic science center (Forensic Science) in Adelaide, South Australia, of all cases of murder-suicide involving children <16 years of age. A total of 13 separate cases were identified involving 30 victims, all of whom were related to the perpetrators. There were 7 male and 6 female perpetrators (age range, 23-41 years; average, 31 years) consisting of 6 mothers, 6 father/husbands, and 1 uncle/son-in-law. The 30 victims consisted of 11 daughters, 11 sons, 1 niece, 1 mother-in-law, and 6 wives of the assailants. The 23 children were aged from 10 months to 15 years (average, 6.0 years). The 6 mothers murdered 9 children and no spouses, with 3 child survivors. The 6 fathers murdered 13 children and 6 wives, with 1 child survivor. This study has demonstrated a higher percentage of female perpetrators than other studies of murder-suicide. The methods of homicide and suicide used were generally less violent among the female perpetrators compared with male perpetrators. Fathers killed not only their children but also their wives, whereas mothers murdered only their children. These results suggest differences between murder-suicides that involve children and adult-only cases, and between cases in which the mother rather than the father is the perpetrator.", "title": "" }, { "docid": "c410b6cd3f343fc8b8c21e23e58013cd", "text": "Virtualization is increasingly being used to address server management and administration issues like flexible resource allocation, service isolation and workload migration. In a virtualized environment, the virtual machine monitor (VMM) is the primary resource manager and is an attractive target for implementing system features like scheduling, caching, and monitoring. However, the lackof runtime information within the VMM about guest operating systems, sometimes called the semantic gap, is a significant obstacle to efficiently implementing some kinds of services.In this paper we explore techniques that can be used by a VMM to passively infer useful information about a guest operating system's unified buffer cache and virtual memory system. We have created a prototype implementation of these techniques inside the Xen VMM called Geiger and show that it can accurately infer when pages are inserted into and evicted from a system's buffer cache. We explore several nuances involved in passively implementing eviction detection that have not previously been addressed, such as the importance of tracking disk block liveness, the effect of file system journaling, and the importance of accounting for the unified caches found in modern operating systems.Using case studies we show that the information provided by Geiger enables a VMM to implement useful VMM-level services. 
We implement a novel working set size estimator which allows the VMM to make more informed memory allocation decisions. We also show that a VMM can be used to drastically improve the hit rate in remote storage caches by using eviction-based cache placement without modifying the application or operating system storage interface. Both case studies hint at a future where inference techniques enable a broad new class of VMM-level functionality.", "title": "" }, { "docid": "590cf6884af6223ce4e827ba2fe18209", "text": "1. The extracellular patch clamp method, which first allowed the detection of single channel currents in biological membranes, has been further refined to enable higher current resolution, direct membrane patch potential control, and physical isolation of membrane patches. 2. A description of a convenient method for the fabrication of patch recording pipettes is given together with procedures followed to achieve giga-seals i.e. pipettemembrane seals with resistances of 109–1011Ω. 3. The basic patch clamp recording circuit, and designs for improved frequency response are described along with the present limitations in recording the currents from single channels. 4. Procedures for preparation and recording from three representative cell types are given. Some properties of single acetylcholine-activated channels in muscle membrane are described to illustrate the improved current and time resolution achieved with giga-seals. 5. A description is given of the various ways that patches of membrane can be physically isolated from cells. This isolation enables the recording of single channel currents with well-defined solutions on both sides of the membrane. Two types of isolated cell-free patch configurations can be formed: an inside-out patch with its cytoplasmic membrane face exposed to the bath solution, and an outside-out patch with its extracellular membrane face exposed to the bath solution. 6. The application of the method for the recording of ionic currents and internal dialysis of small cells is considered. Single channel resolution can be achieved when recording from whole cells, if the cell diameter is small (<20μm). 7. The wide range of cell types amenable to giga-seal formation is discussed. The extracellular patch clamp method, which first allowed the detection of single channel currents in biological membranes, has been further refined to enable higher current resolution, direct membrane patch potential control, and physical isolation of membrane patches. A description of a convenient method for the fabrication of patch recording pipettes is given together with procedures followed to achieve giga-seals i.e. pipettemembrane seals with resistances of 109–1011Ω. The basic patch clamp recording circuit, and designs for improved frequency response are described along with the present limitations in recording the currents from single channels. Procedures for preparation and recording from three representative cell types are given. Some properties of single acetylcholine-activated channels in muscle membrane are described to illustrate the improved current and time resolution achieved with giga-seals. A description is given of the various ways that patches of membrane can be physically isolated from cells. This isolation enables the recording of single channel currents with well-defined solutions on both sides of the membrane. 
Two types of isolated cell-free patch configurations can be formed: an inside-out patch with its cytoplasmic membrane face exposed to the bath solution, and an outside-out patch with its extracellular membrane face exposed to the bath solution. The application of the method for the recording of ionic currents and internal dialysis of small cells is considered. Single channel resolution can be achieved when recording from whole cells, if the cell diameter is small (<20μm). The wide range of cell types amenable to giga-seal formation is discussed.", "title": "" }, { "docid": "336d5a790ffa2d25150c4be5817693b4", "text": "Cryptarithmetic is a class of constraint satisfaction problems which includes making mathematical relations between meaningful words using simple arithmetic operators like ‘plus’ in a way that the result is conceptually true, and assigning digits to the letters of these words and generating numbers in order to make correct arithmetic operations as well. A simple way to solve such problems is by depth first search (DFS) algorithm which has a big search space even for quite small problems. In this paper we proposed a solution to this problem with genetic algorithm and then optimized it by using parallelism. We also showed that the algorithm reaches a solution faster and in a smaller number of iterations than similar algorithms.", "title": "" }, { "docid": "38e7a36e4417bff60f9ae0dbb7aaf136", "text": "Asynchronous implementation techniques, which measure logic delays at runtime and activate registers accordingly, are inherently more robust than their synchronous counterparts, which estimate worst case delays at design time and constrain the clock cycle accordingly. Desynchronization is a new paradigm to automate the design of asynchronous circuits from synchronous specifications, thus, permitting widespread adoption of asynchronicity without requiring special design skills or tools. In this paper, different protocols for desynchronization are first studied, and their correctness is formally proven using techniques originally developed for distributed deployment of synchronous language specifications. A taxonomy of existing protocols for asynchronous latch controllers, covering, in particular, the four-phase handshake protocols devised in the literature for micropipelines, is also provided. A new controller that exhibits provably maximal concurrency is then proposed, and the performance of desynchronized circuits is analyzed with respect to the original synchronous optimized implementation. Finally, this paper proves the feasibility and effectiveness of the proposed approach by showing its application to a set of real designs, including a complete implementation of the DLX microprocessor architecture", "title": "" } ]
scidocsrr
9af7e479290f28d55b608802eb122e44
Ask the locals: Multi-way local pooling for image recognition
[ { "docid": "7db9cf29dd676fa3df5a2e0e95842b6e", "text": "We present a novel approach to still image denoising based on e ective filtering in 3D transform domain by combining sliding-window transform processing with block-matching. We process blocks within the image in a sliding manner and utilize the block-matching concept by searching for blocks which are similar to the currently processed one. The matched blocks are stacked together to form a 3D array and due to the similarity between them, the data in the array exhibit high level of correlation. We exploit this correlation by applying a 3D decorrelating unitary transform and e ectively attenuate the noise by shrinkage of the transform coe cients. The subsequent inverse 3D transform yields estimates of all matched blocks. After repeating this procedure for all image blocks in sliding manner, the final estimate is computed as weighed average of all overlapping blockestimates. A fast and e cient algorithm implementing the proposed approach is developed. The experimental results show that the proposed method delivers state-of-art denoising performance, both in terms of objective criteria and visual quality.", "title": "" }, { "docid": "b5453d9e4385d5a5ff77997ad7e3f4f0", "text": "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.", "title": "" } ]
[ { "docid": "60e94f9a6731e1a148e05aa0f9a31683", "text": "Bright light therapy for seasonal affective disorder (SAD) has been investigated and applied for over 20 years. Physicians and clinicians are increasingly confident that bright light therapy is a potent, specifically active, nonpharmaceutical treatment modality. Indeed, the domain of light treatment is moving beyond SAD, to nonseasonal depression (unipolar and bipolar), seasonal flare-ups of bulimia nervosa, circadian sleep phase disorders, and more. Light therapy is simple to deliver to outpatients and inpatients alike, although the optimum dosing of light and treatment time of day requires individual adjustment. The side-effect profile is favorable in comparison with medications, although the clinician must remain vigilant about emergent hypomania and autonomic hyperactivation, especially during the first few days of treatment. Importantly, light therapy provides a compatible adjunct to antidepressant medication, which can result in accelerated improvement and fewer residual symptoms.", "title": "" }, { "docid": "237084abc919cc10b51c4c41aff0ddc6", "text": "In multichannel environments, consumers can move easily among different channels. They engage in cross-channel free-riding when they use one retailer’s channel to obtain information or evaluate products and then switch to another retailer’s channel to complete the purchase. Cross-channel free-riding erodes profits and is one of the most important issues that firms face in the multichannel era. The current study focuses on the most popular type of cross-channel free-riding: searching for product information in an online store and then purchasing in another brick-and-mortar store. It explores antecedents that may contribute to consumer switching behaviors through a questionnaire focused on cross-channel free-riding behavior. The empirical results reveal that when consumers perceive more multichannel self-efficacy, they engage in more cross-channel free-riding behavior. Perceived service quality of competitors’ offline store and the reduced risk in the brick-and-mortar channel influence the attractiveness of this behavior and increase cross-channel free-riding intentions. By increasing within-firm lock-in levels, firms can reduce consumers’ cross-channel free-riding intentions. 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "a48a6694beefec8fc57a4468c7caf084", "text": "A capacity-achieving scheme based on polar codes is proposed for reliable communication over multi-channels which can be directly applied to bit-interleaved coded modulation schemes. We start by reviewing the ground-breaking work of polar codes and then discuss our proposed scheme. Instead of encoding separately across the individual underlying channels, which requires multiple encoders and decoders, we take advantage of the recursive structure of polar codes to construct a unified scheme with a single encoder and decoder that can be used over the multi-channels. We prove that the scheme achieves the capacity over this multi-channel. Numerical analysis and simulation results for BICM channels at finite block lengths shows a considerable improvement in the probability of error comparing to a conventional separated scheme.", "title": "" }, { "docid": "8d7cb4e8fd243f3cd091c1866a18fc5c", "text": "We develop graphene-based devices fabricated by alternating current dielectrophoresis (ac-DEP) for highly sensitive nitric oxide (NO) gas detection. 
The novel device comprises the sensitive channels of palladium-decorated reduced graphene oxide (Pd-RGO) and the electrodes covered with chemical vapor deposition (CVD)-grown graphene. The highly sensitive, recoverable, and reliable detection of NO gas ranging from 2 to 420 ppb with response time of several hundred seconds has been achieved at room temperature. The facile and scalable route for high performance suggests a promising application of graphene devices toward the human exhaled NO and environmental pollutant detections.", "title": "" }, { "docid": "fe431864a5a244b7e7f6d4fd587b3fd8", "text": "ERP implementation issues have been given much attention since two decades ago due to its low implementation success. Nearly 90 percent of ERP implementations are late or over budget [16] and the success rate with ERP implementation is about 33%. In China, the success rate of implementing ERP systems is extremely low at 10% [28] which is much lower than that in West countries. This study attempts to study critical success factors affecting enterprise resource planning (ERP) systems implementation success in China with focus on both generic and unique factors. User satisfaction and White’s ABCD classification method are used to judge whether an ERP system implementation is a success or a failure. Survey methodology and structural equation modeling technique of PLS-Graph are used to collect and analyze data. Discussions on the results of data analysis are made.", "title": "" }, { "docid": "90738b84c4db0a267c7213c923368e6a", "text": "Detecting overlapping communities is essential to analyzing and exploring natural networks such as social networks, biological networks, and citation networks. However, most existing approaches do not scale to the size of networks that we regularly observe in the real world. In this paper, we develop a scalable approach to community detection that discovers overlapping communities in massive real-world networks. Our approach is based on a Bayesian model of networks that allows nodes to participate in multiple communities, and a corresponding algorithm that naturally interleaves subsampling from the network and updating an estimate of its communities. We demonstrate how we can discover the hidden community structure of several real-world networks, including 3.7 million US patents, 575,000 physics articles from the arXiv preprint server, and 875,000 connected Web pages from the Internet. Furthermore, we demonstrate on large simulated networks that our algorithm accurately discovers the true community structure. This paper opens the door to using sophisticated statistical models to analyze massive networks.", "title": "" }, { "docid": "b0bd9a0b3e1af93a9ede23674dd74847", "text": "This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-ofthe-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. 
When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.", "title": "" }, { "docid": "d2c13ae44fec1de36cf2a11e0328f9e6", "text": "In the cloud computing environment resources are accessed as services rather than as a product. Monitoring this system for performance is crucial because of typical pay-per-use packages bought by the users for their jobs. With the huge number of machines currently in the cloud system, it is often extremely difficult for system administrators to keep track of all machines using distributed monitoring programs such as Ganglia\\footnote{\\url{ganglia.sourceforge.net/}} which lacks system health assessment and summarization capabilities. To overcome this problem, we propose a technique for automated anomaly detection using machine performance data in the cloud. Our algorithm is entirely distributed and runs locally on each computing machine on the cloud in order to rank the machines in order of their anomalous behavior for given jobs. There is no need to centralize any of the performance data for the analysis and at the end of the analysis, our algorithm generates error reports, thereby allowing the system administrators to take corrective actions. Experiments performed on real data sets collected for different jobs validate the fact that our algorithm has a low overhead for tracking anomalous machines in a cloud infrastructure.", "title": "" }, { "docid": "faa818a0208ac491c42373810280b4f4", "text": "The state-of-the-art performance for object detection has been significantly improved over the past two years. Besides the introduction of powerful deep neural networks, such as GoogleNet and VGG, novel object detection frameworks, such as R-CNN and its successors, Fast R-CNN, and Faster R-CNN, play an essential role in improving the state of the art. Despite their effectiveness on still images, those frameworks are not specifically designed for object detection from videos. Temporal and contextual information of videos are not fully investigated and utilized. In this paper, we propose a deep learning framework that incorporates temporal and contextual information from tubelets obtained in videos, which dramatically improves the baseline performance of existing still-image detection frameworks when they are applied to videos. It is called T-CNN, i.e., tubelets with convolutional neueral networks. The proposed framework won newly introduced an object-detection-from-video task with provided data in the ImageNet Large-Scale Visual Recognition Challenge 2015. Code is publicly available at https://github.com/myfavouritekk/T-CNN.", "title": "" }, { "docid": "2079bd806c3b6b9de28b0a3d158f63f3", "text": "Beam search is a desirable choice of test-time decoding algorithm for neural sequence models because it potentially avoids search errors made by simpler greedy methods. However, typical cross entropy training procedures for these models do not directly consider the behaviour of the final decoding method. As a result, for cross-entropy trained models, beam decoding can sometimes yield reduced test performance when compared with greedy decoding. In order to train models that can more effectively make use of beam search, we propose a new training procedure that focuses on the final loss metric (e.g. Hamming loss) evaluated on the output of beam search. 
While well-defined, this “direct loss” objective is itself discontinuous and thus difficult to optimize. Hence, in our approach, we form a sub-differentiable surrogate objective by introducing a novel continuous approximation of the beam search decoding procedure. In experiments, we show that optimizing this new training objective yields substantially better results on two sequence tasks (Named Entity Recognition and CCG Supertagging) when compared with both cross entropy trained greedy decoding and cross entropy trained beam decoding baselines.", "title": "" }, { "docid": "6be74aa3f89b9e6944d8ffeb499fb4fa", "text": "Data replication is a key technology in distributed systems that enables higher availability and performance. This article surveys optimistic replication algorithms. They allow replica contents to diverge in the short term to support concurrent work practices and tolerate failures in low-quality communication links. The importance of such techniques is increasing as collaboration through wide-area and mobile networks becomes popular.Optimistic replication deploys algorithms not seen in traditional “pessimistic” systems. Instead of synchronous replica coordination, an optimistic algorithm propagates changes in the background, discovers conflicts after they happen, and reaches agreement on the final contents incrementally.We explore the solution space for optimistic replication algorithms. This article identifies key challenges facing optimistic replication systems---ordering operations, detecting and resolving conflicts, propagating changes efficiently, and bounding replica divergence---and provides a comprehensive survey of techniques developed for addressing these challenges.", "title": "" }, { "docid": "2314d6d1c294c9d3753404ebe123edd3", "text": "The magnification of mobile devices in everyday life prompts the idea that these devices will increasingly have evidential value in criminal cases. While this may have been assumed in digital forensics communities, there has been no empirical evidence to support this idea. This research investigates the extent to which mobile phones are being used in criminal proceedings in the United Kingdom thorough the examination of appeal judgments retrieved from the Westlaw, Lexis Nexis and British and Irish Legal Information Institute (BAILII) legal databases. The research identified 537 relevant appeal cases from a dataset of 12,763 criminal cases referring to mobile phones for a period ranging from 1st of January, 2006 to 31st of July, 2011. The empirical analysis indicates that mobile phone evidence is rising over time with some correlations to particular crimes.", "title": "" }, { "docid": "8eb31344917df5420df8912cdd4966d1", "text": "This contribution presents a precise localization method for advanced driver assistance systems. A Maximally Stable Extremal Region (MSER) detector is used to extract bright areas, i.e. lane markings, from grayscale camera images. Furthermore, this algorithm is also used to extract features from a laser scanner grid map. These regions are automatically stored as landmarks in a geospatial data base during a map creation phase. A particle filter is then employed to perform the pose estimation. For the weight update of the filter the similarity between the set of online MSER detections and the set of mapped landmarks within the field of view is evaluated. Hereby, a two stage sensor fusion is carried out. 
First, in order to have a large field of view available, not only a forward facing camera but also a rearward facing camera is used and the detections from both sensors are fused. Secondly, the weight update also integrates the features detected from the laser grid map, which is created using measurements of three laser scanners. The performance of the proposed algorithm is evaluated on a 7 km long stretch of a rural road. The evaluation reveals that a relatively good position estimation and a very accurate orientation estimation (0.01 deg ± 0.22 deg) can be achieved using the presented localization method. In addition, an evaluation of the localization performance based only on each of the respective kinds of MSER features is provided in this contribution and compared to the combined approach.", "title": "" }, { "docid": "0836e5d45582b0a0eec78234776aa419", "text": "Task: the aim of the task is to associate labels with automatically generated topics. Data: 228 topics and ~6,000 candidate labels drawn from the BLOGS, BOOKS, NEWS and PUBMED domains; candidate labels were rated by humans on a 0-3 scale and published by Lau et al. (2011). Scoring candidate labels: each candidate label L = {w1, w2, ..., wm} is assigned a score by a scoring function. Example search-result record: Title: 'Microsoft | Server & Cloud | Datacenter | Virtualization ...'; Description: 'Microsoft will accelerate your journey to cloud computing with an agile and responsive datacenter built from your existing technology investments.'; DisplayUrl: 'www.microsoft.com/en-us/server-cloud/datacenter/virtualization.aspx'; Url: 'http://www.microsoft.com/en-us/server-cloud/datacenter/virtualization.aspx'; ID: 'a42b0908-174e-4f25-b59c-70bdf394a9da'.", "title": "" }, { "docid": "b5205513c021eabf6798c568759799f6", "text": "Fillers belong to the most frequently used beautifying products. They are generally well tolerated, but any one of them may occasionally produce adverse side effects. Adverse effects usually last as long as the filler is in the skin, which means that short-lived fillers have short-term side effects and permanent fillers may induce life-long adverse effects. The main goal is to prevent them, however, this is not always possible. Utmost care has to be given to the prevention of infections and the injection technique has to be perfect. Treatment of adverse effects is often with hyaluronidase or steroid injections and in some cases together with 5-fluorouracil plus allopurinol orally. Histological examination of biopsy specimens often helps to identify the responsible filler allowing a specific treatment to be adapted.", "title": "" }, { "docid": "dab93692fe600521aa7ced1c2daf3221", "text": "This paper presents our investigation into a novel ultrahigh-frequency (UHF) radio frequency identification (RFID) multipolarized reader antenna based on a pair of symmetrical meandering open-ended microstrip lines for near-field applications. The near-field and multipolarization operation is achieved by introducing a 90° phase shift between the currents flowing along the opposite side of two branches. The proposed antenna is shown to generate a uniform and strong electric field in its near-field region within a reading volume of 450 mm × 450 mm × 350 mm (width × length × height). 
The simulated and measured impedance bandwidths (−10 dB) agree very well, ranging from 825 to 965 MHz and covering the UHF RFID standard. In addition, it exhibits a low far-field gain, avoiding to misreading the tags outside the near-field region. The fabricated antenna was fully tested with multiple tag antennas that are placed in different orientations and even in a conveyor system, demonstrating a 100% reading rate of arbitrarily oriented tags within the reading zone.", "title": "" }, { "docid": "a5abd5f11b83afdccbdfc190b8351b07", "text": "Named Data Networking (NDN) is a recently proposed general- purpose network architecture that leverages the strengths of Internet architecture while aiming to address its weaknesses. NDN names packets rather than end-hosts, and most of NDN's characteristics are a consequence of this fact. In this paper, we focus on the packet forwarding model of NDN. Each packet has a unique name which is used to make forwarding decisions in the network. NDN forwarding differs substantially from that in IP; namely, NDN forwards based on variable-length names and has a read-write data plane. Designing and evaluating a scalable NDN forwarding node architecture is a major effort within the overall NDN research agenda. In this paper, we present the concepts, issues and principles of scalable NDN forwarding plane design. The essential function of NDN forwarding plane is fast name lookup. By studying the performance of the NDN reference implementation, known as CCNx, and simplifying its forwarding structure, we identify three key issues in the design of a scalable NDN forwarding plane: 1) exact string matching with fast updates, 2) longest prefix matching for variable-length and unbounded names and 3) large- scale flow maintenance. We also present five forwarding plane design principles for achieving 1 Gbps throughput in software implementation and 10 Gbps with hardware acceleration.", "title": "" }, { "docid": "ddf8bc756d2b2dcfddd107ac972297a3", "text": "This paper presents a new approach to single-image superresolution, based upon sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs , reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution (SR) and the special case of face hallucination. 
In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle SR with noisy inputs in a more unified framework.", "title": "" }, { "docid": "fcf88ca7ca7ae03e7feea2ec7a5181a5", "text": "Modern semantic segmentation frameworks usually combine low-level and high-level features from pre-trained backbone convolutional models to boost performance. In this paper, we first point out that a simple fusion of low-level and high-level features could be less effective because of the gap in semantic levels and spatial resolution. We find that introducing semantic information into low-level features and highresolution details into high-level features is more effective for the later fusion. Based on this observation, we propose a new framework, named ExFuse, to bridge the gap between low-level and high-level features thus significantly improve the segmentation quality by 4.0% in total. Furthermore, we evaluate our approach on the challenging PASCAL VOC 2012 segmentation benchmark and achieve 87.9% mean IoU, which outperforms the previous state-of-the-art results.", "title": "" }, { "docid": "3c9be4272d57966660c74857a68a70d3", "text": "Due to the recent surge in end-users demands, value-added video services (e.g. in-stream video advertisements) need to be provisioned in a cost-efficient and agile manner in Content Delivery Networks (CDNs). Network Function Virtualization (NFV) is an emerging technology that aims to reduce costs and bring agility by decoupling network functions from the underlying hardware. It is often used in combination with Software Defined Network (SDN), a technology to decouple control and data planes. This paper proposes an NFV and SDN-based architecture for a cost-efficient and agile provisioning of value-added video services in CDNs. In the proposed architecture, the application-level middleboxes that enable value-added video services (e.g. mixer, compressor) are provisioned as Virtual Network Functions (VNFs) and chained using application-level SDN switches. HTTP technology is used as the pillar of the implementation architecture. We have built a prototype and deployed it in an OPNFV test lab and in SAVI, a Canadian distributed test bed for future Internet applications. The performance is also evaluated.", "title": "" } ]
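A minimal Python sketch of the coupled-dictionary idea described in the sparse-representation super-resolution passage in the list above: a low-resolution patch is sparsely coded over a low-resolution dictionary and the same coefficients are reused with a coupled high-resolution dictionary. The dictionary sizes, patch sizes, OMP sparsity level and the random dictionaries below are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: patch-wise super-resolution via coupled sparse coding.
# D_lr and D_hr are assumed to have been learned jointly so that they share sparse codes.
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(0)
n_atoms, lr_dim, hr_dim = 256, 25, 100          # assumed: 5x5 LR patches, 10x10 HR patches

D_lr = rng.standard_normal((n_atoms, lr_dim))
D_lr /= np.linalg.norm(D_lr, axis=1, keepdims=True)   # unit-norm atoms for stable coding
D_hr = rng.standard_normal((n_atoms, hr_dim))

def super_resolve_patch(lr_patch: np.ndarray) -> np.ndarray:
    """Encode a low-res patch over D_lr, then synthesize the high-res patch from D_hr."""
    coder = SparseCoder(dictionary=D_lr, transform_algorithm="omp",
                        transform_n_nonzero_coefs=5)
    codes = coder.transform(lr_patch.reshape(1, -1))   # sparse coefficients, shape (1, n_atoms)
    return (codes @ D_hr).reshape(10, 10)              # high-res patch estimate

hr_estimate = super_resolve_patch(rng.standard_normal(lr_dim))
print(hr_estimate.shape)  # (10, 10)
```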
scidocsrr
a8ae5e3d130c83ee70db8ec66a378b25
A mixed-method empirical study of Function-as-a-Service software development in industrial practice
[ { "docid": "b5c8d34b75dbbfdeb666fd76ef524be7", "text": "Systematic Literature Reviews (SLR) may not provide insight into the \"state of the practice\" in SE, as they do not typically include the \"grey\" (non-published) literature. A Multivocal Literature Review (MLR) is a form of a SLR which includes grey literature in addition to the published (formal) literature. Only a few MLRs have been published in SE so far. We aim at raising the awareness for MLRs in SE by addressing two research questions (RQs): (1) What types of knowledge are missed when a SLR does not include the multivocal literature in a SE field? and (2) What do we, as a community, gain when we include the multivocal literature and conduct MLRs? To answer these RQs, we sample a few example SLRs and MLRs and identify the missing and the gained knowledge due to excluding or including the grey literature. We find that (1) grey literature can give substantial benefits in certain areas of SE, and that (2) the inclusion of grey literature brings forward certain challenges as evidence in them is often experience and opinion based. Given these conflicting viewpoints, the authors are planning to prepare systematic guidelines for performing MLRs in SE.", "title": "" }, { "docid": "1b78650b979b0043eeb3e7478a263846", "text": "Our solutions was launched using a want to function as a full on-line digital local library that gives use of many PDF guide catalog. You may find many different types of e-guide as well as other literatures from my papers data bank. Specific popular topics that spread out on our catalog are famous books, answer key, assessment test questions and answer, guideline paper, training guideline, quiz test, consumer guide, consumer guidance, service instructions, restoration handbook, and so forth.", "title": "" } ]
[ { "docid": "c197198ca45acec2575d5be26fc61f36", "text": "General systems theory has been proposed as a basis for the unification of science. The open systems model has stimulated many new conceptualizations in organization theory and management practice. However, experience in utilizing these concepts suggests many unresolved dilemmas. Contingency views represent a step toward less abstraction, more explicit patterns of relationships, and more applicable theory. Sophistication will come when we have a more complete understanding of organizations as total systems (configurations of subsystems) so that we can prescribe more appropriate organizational designs and managerial systems. Ultimately, organization theory should serve as the foundation for more effective management practice.", "title": "" }, { "docid": "83a265b44df990c48c04319327bcb4e8", "text": "This technical report accompanies the article “Optimistic Bayesian Sampling in Contextual-Bandit Problems” by B.C. May, N. Korda, A. Lee, and D.S. Leslie [3].", "title": "" }, { "docid": "aaa1ed7c041123e0f7a2f948fdbd9e1a", "text": "The present study evaluated the venous anatomy of the craniocervical junction, focusing on the suboccipital cavernous sinus (SCS), a vertebral venous plexus surrounding the horizontal portion of the vertebral artery at the skull base. MR imaging was reviewed to clarify the venous anatomy of the SCS in 33 patients. Multiplanar reconstruction MR images were obtained using contrast-enhanced three-dimensional fast spoiled gradient–recalled acquisition in the steady state (3-D fast SPGR) with fat suppression. Connections with the SCS were evaluated for the following venous structures: anterior condylar vein (ACV); posterior condylar vein (PCV); lateral condylar vein (LCV); vertebral artery venous plexus (VAVP); and anterior internal vertebral venous plexus (AVVP). The SCS connected with the ACV superomedially, with the VAVP inferolaterally, and with the AVVP medially. The LCV connected with the external orifice of the ACV and superoanterior aspect of the SCS. The PCV connected with the posteromedial aspect of the jugular bulb and superoposterior aspect of the SCS. The findings of craniocervical junction venography performed in eight patients corresponded with those on MR imaging, other than with regard to the PCV. Contrast-enhanced 3-D fast SPGR allows visualization of the detailed anatomy of these venous structures, and this technique facilitates interventions and description of pathologies occurring in this area.", "title": "" }, { "docid": "ba3315636b720625e7b285b26d8d371a", "text": "Sharing of physical infrastructure using virtualization presents an opportunity to improve the overall resource utilization. It is extremely important for a Software as a Service (SaaS) provider to understand the characteristics of the business application workload in order to size and place the virtual machine (VM) containing the application. A typical business application has a multi-tier architecture and the application workload is often predictable. Using the knowledge of the application architecture and statistical analysis of the workload, one can obtain an appropriate capacity and a good placement strategy for the corresponding VM. In this paper we propose a tool iCirrus-WoP that determines VM capacity and VM collocation possibilities for a given set of application workloads. We perform an empirical analysis of the approach on a set of business application workloads obtained from geographically distributed data centers. 
The iCirrus-WoP tool determines the fixed reserved capacity and a shared capacity of a VM which it can share with another collocated VM. Based on the workload variation, the tool determines if the VM should be statically allocated or needs a dynamic placement. To determine the collocation possibility, iCirrus-WoP performs a peak utilization analysis of the workloads. The empirical analysis reveals the possibility of collocating applications running in different time-zones. The VM capacity that the tool recommends, show a possibility of improving the overall utilization of the infrastructure by more than 70% if they are appropriately collocated.", "title": "" }, { "docid": "f5ce55253aa69ca09fde79d6fd1c830d", "text": "We present an approach for high-resolution video frame prediction by conditioning on both past frames and past optical flows. Previous approaches rely on resampling past frames, guided by a learned future optical flow, or on direct generation of pixels. Resampling based on flow is insufficient because it cannot deal with disocclusions. Generative models currently lead to blurry results. Recent approaches synthesis a pixel by convolving input patches with a predicted kernel. However, their memory requirement increases with kernel size. Here, we present spatially-displaced convolution (SDC) module for video frame prediction. We learn a motion vector and a kernel for each pixel and synthesize a pixel by applying the kernel at a displaced location in the source image, defined by the predicted motion vector. Our approach inherits the merits of both vector-based and kernel-based approaches, while ameliorating their respective disadvantages. We train our model on 428K unlabelled 1080p video game frames. Our approach produces state-of-the-art results, achieving an SSIM score of 0.904 on high-definition YouTube-8M videos, 0.918 on Caltech Pedestrian videos. Our model handles large motion effectively and synthesizes crisp frames with consistent motion.", "title": "" }, { "docid": "f49864c2f892bf4058d953b6439bfdd1", "text": "Dropout-based regularization methods can be regarded as injecting random noise with pre-defined magnitude to different parts of the neural network during training. It was recently shown that Bayesian dropout procedure not only improves generalization but also leads to extremely sparse neural architectures by automatically setting the individual noise magnitude per weight. However, this sparsity can hardly be used for acceleration since it is unstructured. In the paper, we propose a new Bayesian model that takes into account the computational structure of neural networks and provides structured sparsity, e.g. removes neurons and/or convolutional channels in CNNs. To do this we inject noise to the neurons outputs while keeping the weights unregularized. We establish the probabilistic model with a proper truncated log-uniform prior over the noise and truncated log-normal variational approximation that ensures that the KL-term in the evidence lower bound is computed in closed-form. The model leads to structured sparsity by removing elements with a low SNR from the computation graph and provides significant acceleration on a number of deep neural architectures. The model is easy to implement as it can be formulated as a separate dropout-like layer.", "title": "" }, { "docid": "b6d47dc227f767009c40599f65e25c5f", "text": "Radio frequency (RF) tomography is proposed to detect underground voids, such as tunnels or caches, over relatively wide areas of regard. 
The RF tomography approach requires a set of low-cost transmitters and receivers arbitrarily deployed on the surface of the ground or slightly buried. Using the principles of inverse scattering and diffraction tomography, a simplified theory for below-ground imaging is developed. In this paper, the principles and motivations in support of RF tomography are introduced. Furthermore, several inversion schemes based on arbitrarily deployed sensors are devised. Then, limitations to performance and system considerations are discussed. Finally, the effectiveness of RF tomography is demonstrated by presenting images reconstructed via the processing of synthetic data.", "title": "" }, { "docid": "49fbe9ddc3087c26ecc373c6731fca77", "text": "Alarm correlation plays an important role in improving the service and reliability in modern telecommunication networks. Most previous research of alarm correlation didn’t consider the effects of noise data in the database. This paper focuses on the method of discovering alarm correlation rules from the database containing noise data. We firstly define two parameters Win_freq and Win_add as the measures of noise data and then present the Robust_search algorithm to solve the problem. At different size of Win_freq and Win_add, the experiments on alarm database containing noise data show that the Robust_search Algorithm can discover more rules with the bigger size of Win_add. We also compare two different interestingness measures of confidence and correlation by experiments.", "title": "" }, { "docid": "d03cf797d2796db7b5c8856828a747fd", "text": "Data-intensive applications need to address the problem of how to properly place the set of data items to distributed storage nodes. Traditional techniques use the hashing method to achieve the load balance among nodes such as those in Hadoop and Cassandra, but they do not work efficiently for the requests reading multiple data items in one transaction, especially when the source locations of requests are also distributed. Recent works proposed the managed data placement schemes for online social networks, but have a limited scope of applications due to their focuses. We propose an associated data placement (ADP) scheme, which improves the co-location of associated data and the localized data serving while ensuring the balance between nodes. In ADP, we employ the hypergraph partitioning technique to efficiently partition the set of data items and place them to the distributed nodes, and we also take replicas and incremental adjustment into considerations. Through extensive experiments with both synthesized and trace-based datasets, we evaluate the performance of ADP and demonstrate its effectiveness.", "title": "" }, { "docid": "e64caf71b75ac93f0426b199844f319b", "text": "INTRODUCTION\nVaginismus is mostly unknown among clinicians and women. Vaginismus causes women to have fear, anxiety, and pain with penetration attempts.\n\n\nAIM\nTo present a large cohort of patients based on prior published studies approved by an institutional review board and the Food and Drug Administration using a comprehensive multimodal vaginismus treatment program to treat the physical and psychologic manifestations of women with vaginismus and to record successes, failures, and untoward effects of this treatment approach.\n\n\nMETHODS\nAssessment of vaginismus included a comprehensive pretreatment questionnaire, the Female Sexual Function Index (FSFI), and consultation. All patients signed a detailed informed consent. 
Treatment consisted of a multimodal approach including intravaginal injections of onabotulinumtoxinA (Botox) and bupivacaine, progressive dilation under conscious sedation, indwelling dilator, follow-up and support with office visits, phone calls, e-mails, dilation logs, and FSFI reports.\n\n\nMAIN OUTCOME MEASURES\nLogs noting dilation progression, pain and anxiety scores, time to achieve intercourse, setbacks, and untoward effects. Post-treatment FSFI scores were compared with preprocedure scores.\n\n\nRESULTS\nOne hundred seventy-one patients (71%) reported having pain-free intercourse at a mean of 5.1 weeks (median = 2.5). Six patients (2.5%) were unable to achieve intercourse within a 1-year period after treatment and 64 patients (26.6%) were lost to follow-up. The change in the overall FSFI score measured at baseline, 3 months, 6 months, and 1 year was statistically significant at the 0.05 level. Three patients developed mild temporary stress incontinence, two patients developed a short period of temporary blurred vision, and one patient developed temporary excessive vaginal dryness. All adverse events resolved by approximately 4 months. One patient required retreatment followed by successful coitus.\n\n\nCONCLUSION\nA multimodal program that treated the physical and psychologic aspects of vaginismus enabled women to achieve pain-free intercourse as noted by patient communications and serial female sexual function studies. Further studies are indicated to better understand the individual components of this multimodal treatment program. Pacik PT, Geletta S. Vaginismus Treatment: Clinical Trials Follow Up 241 Patients. Sex Med 2017;5:e114-e123.", "title": "" }, { "docid": "80d0174bb4ce87af1b6802b1d6b5ecb4", "text": "JPEG file format standards define only a limited number of mandatory data structures and leave room for interpretation. Differences between implementations employed in digital cameras, image processing software, and software to edit metadata provide valuable clues for basic authentication of digital images. We show that there exists a realistic chance to fool state-of-the-art image file forensic methods using available software tools and introduce the analysis of ordered data structures on the example of JPEG file formats and the EXIF metadata format as countermeasure. The proposed analysis approach enables basic investigations of image authenticity and documents a much better trustworthiness of EXIF metadata than commonly accepted. Manipulations created with the renowned metadata editor ExifTool and various image processing software can be reliably detected. Analysing the sequence of elements in complex data structures is not limited to JPEG files and might be a general principle applicable to different multimedia formats.", "title": "" }, { "docid": "b81b29c232fb9cb5dcb2dd7e31003d77", "text": "Attendance and academic success are directly related in educational institutions. The continual absence of students in lecture, practical and tutorial is one of the major problems of decadence in the performance of academic. The authorized person needs to prohibit truancy for solving the problem. In existing system, the attendance is recorded by calling of the students’ name, signing on paper, using smart card and so on. These methods are easy to fake and to give proxy for the absence student. For solving inconvenience, fingerprint based attendance system with notification to guardian is proposed. 
Attendance is recorded using a fingerprint module and stored in the database via an SD card. The system can calculate the monthly attendance percentage and store the attendance record in the database for one year or more. Attendance is recorded twice per day, and an alert message is sent via the GSM module if a student's attendance falls below eight records in a week. By sending the alert message to the respective guardians every week, corrective action can be taken early. This also reduces the cost of SMS charges and gives guardians better oversight. The main components of this system are a fingerprint module, a microcontroller, a GSM module and an SD card with an SD card module. The system has been developed using the Arduino IDE, Eclipse and MySQL Server.", "title": "" }, { "docid": "6f2eba91ce941a1300f1b9cdfc6ffb51", "text": "This paper presents an efficient method to construct optimal facial animation blendshapes from given blendshape sketches and facial motion capture data. At first, a mapping function is established between the “Marker Face” of the target and the performer by RBF interpolation of selected feature points. Sketched blendshapes are transferred to the performer’s “Marker Face” using a motion vector adjustment technique. Then, the blendshapes of the performer’s “Marker Face” are optimized according to the facial motion capture data. At last, the optimized blendshapes are inversely transferred to the target facial model. Apart from that, this paper also proposes a method of computing blendshape weights from facial motion capture data more precisely. Experiments show that expressive facial animation can be acquired.", "title": "" }, { "docid": "68e646d8aa50b331b1218a6b049d401f", "text": "In this paper we address the problem of clustering trajectories, namely sets of short sequences of data measured as a function of a dependent variable such as time. Examples include storm path trajectories, longitudinal data such as drug therapy response, functional expression data in computational biology, and movements of objects or individuals in video sequences. Our clustering algorithm is based on a principled method for probabilistic modelling of a set of trajectories as individual sequences of points generated from a finite mixture model consisting of regression model components. Unsupervised learning is carried out using maximum likelihood principles. Specifically, the EM algorithm is used to cope with the hidden data problem (i.e., the cluster memberships). We also develop generalizations of the method to handle non-parametric (kernel) regression components as well as multi-dimensional outputs. Simulation results comparing our method with other clustering methods such as K-means and Gaussian mixtures are presented as well as experimental results on real data sets. Figure 1: Trajectories of the estimated vertical position of a moving hand (vertical pixel coordinate of the estimated hand centroid versus frame number in the video sequence), estimated from 6 different video sequences.", "title": "" }, { "docid": "67db336c7de0cff2df34e265a219e838", "text": "Machine reading aims to automatically extract knowledge from text. It is a long-standing goal of AI and holds the promise of revolutionizing Web search and other fields. In this paper, we analyze the core challenges of machine reading and show that statistical relational AI is particularly well suited to address these challenges. 
We then propose a unifying approach to machine reading in which statistical relational AI plays a central role. Finally, we demonstrate the promise of this approach by presenting OntoUSP, an end-toend machine reading system that builds on recent advances in statistical relational AI and greatly outperforms state-of-theart systems in a task of extracting knowledge from biomedical abstracts and answering questions.", "title": "" }, { "docid": "e89db5214e5bea32b37539471fccb226", "text": "In this paper, we survey the basic paradigms and notions of secure multiparty computation and discuss their relevance to the field of privacy-preserving data mining. In addition to reviewing definitions and constructions for secure multiparty computation, we discuss the issue of efficiency and demonstrate the difficulties involved in constructing highly efficient protocols. We also present common errors that are prevalent in the literature when secure multiparty computation techniques are applied to privacy-preserving data mining. Finally, we discuss the relationship between secure multiparty computation and privacy-preserving data mining, and show which problems it solves and which problems it does not.", "title": "" }, { "docid": "49d6d1b6df0a016550fcd6a00358f7af", "text": "Protein translation typically begins with the recruitment of the 43S ribosomal complex to the 5' cap of mRNAs by a cap-binding complex. However, some transcripts are translated in a cap-independent manner through poorly understood mechanisms. Here, we show that mRNAs containing N(6)-methyladenosine (m(6)A) in their 5' UTR can be translated in a cap-independent manner. A single 5' UTR m(6)A directly binds eukaryotic initiation factor 3 (eIF3), which is sufficient to recruit the 43S complex to initiate translation in the absence of the cap-binding factor eIF4E. Inhibition of adenosine methylation selectively reduces translation of mRNAs containing 5'UTR m(6)A. Additionally, increased m(6)A levels in the Hsp70 mRNA regulate its cap-independent translation following heat shock. Notably, we find that diverse cellular stresses induce a transcriptome-wide redistribution of m(6)A, resulting in increased numbers of mRNAs with 5' UTR m(6)A. These data show that 5' UTR m(6)A bypasses 5' cap-binding proteins to promote translation under stresses.", "title": "" }, { "docid": "98e7159dc21e81f7144b8a6edd47441e", "text": "Non-maximum suppression is an integral part of the object detection pipeline. First, it sorts all detection boxes on the basis of their scores. The detection box M with the maximum score is selected and all other detection boxes with a significant overlap (using a pre-defined threshold) with M are suppressed. This process is recursively applied on the remaining boxes. As per the design of the algorithm, if an object lies within the predefined overlap threshold, it leads to a miss. To this end, we propose Soft-NMS, an algorithm which decays the detection scores of all other objects as a continuous function of their overlap with M. Hence, no object is eliminated in this process. Soft-NMS obtains consistent improvements for the coco-style mAP metric on standard datasets like PASCAL VOC2007 (1.7% for both R-FCN and Faster-RCNN) and MS-COCO (1.3% for R-FCN and 1.1% for Faster-RCNN) by just changing the NMS algorithm without any additional hyper-parameters. Using Deformable-RFCN, Soft-NMS improves state-of-the-art in object detection from 39.8% to 40.9% with a single model. 
Further, the computational complexity of Soft-NMS is the same as traditional NMS and hence it can be efficiently implemented. Since Soft-NMS does not require any extra training and is simple to implement, it can be easily integrated into any object detection pipeline. Code for Soft-NMS is publicly available on GitHub http://bit.ly/2nJLNMu.", "title": "" }, { "docid": "6616607ee5a856a391131c5e2745bc79", "text": "Project management (PM) landscaping is continually changing in the IT industry. Working with the small teams and often with the limited budgets, while facing frequent changes in the business requirements, project managers are under continuous pressure to deliver fast turnarounds. Following the demands of the IT project management, leaders in this industry are optimizing and adopting different and new more effective styles and strategies. This paper proposes a new hybrid way of managing IT projects, flexibly combining the traditional and the Agile method. Also, it investigates what is the necessary organizational transition in an IT company, required before converting from the traditional to the proposed new hybrid method.", "title": "" }, { "docid": "d6cb714b47b056e1aea8ef0682f4ae51", "text": "Arti cial neural networks are being used with increasing frequency for high dimensional problems of regression or classi cation. This article provides a tutorial overview of neural networks, focusing on back propagation networks as a method for approximating nonlinear multivariable functions. We explain, from a statistician's vantage point, why neural networks might be attractive and how they compare to other modern regression techniques.", "title": "" } ]
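The Soft-NMS passage near the end of the list above describes decaying the scores of overlapping detections as a continuous function of their IoU with the current top-scoring box M, instead of discarding them outright. A minimal NumPy sketch of that rescoring rule follows; the Gaussian decay, the sigma value and the score threshold are common choices assumed here, not values taken from the passage.

```python
# Minimal Soft-NMS sketch: keep selecting the top-scoring box M, then decay the scores
# of the remaining boxes by a Gaussian of their IoU with M rather than suppressing them.
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, boxes given as [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    boxes, scores = boxes.astype(float), scores.astype(float).copy()
    keep, idxs = [], list(range(len(scores)))
    while idxs:
        m = max(idxs, key=lambda i: scores[i])          # current top-scoring box M
        keep.append(m)
        idxs.remove(m)
        if not idxs:
            break
        rest = np.array(idxs)
        overlaps = iou(boxes[m], boxes[rest])
        scores[rest] *= np.exp(-(overlaps ** 2) / sigma)  # Gaussian score decay
        idxs = [i for i in idxs if scores[i] > score_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]])
scores = np.array([0.9, 0.8, 0.7])
print(soft_nms(boxes, scores))   # [0, 2, 1]: the overlapping box is kept, with a decayed score
```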
scidocsrr
92f27171e77957b767863663922db45e
Efficient and Robust Question Answering from Minimal Context over Documents
[ { "docid": "22accfa74592e8424bdfe74224365425", "text": "In the SQuaD reading comprehension task systems are given a paragraph from Wikipedia and have to answer a question about it. The answer is guaranteed to be contained within the paragraph. There are 107,785 such paragraph-question-answer tuples in the dataset. Human performance on this task achieves 91.2% accuracy (F1), and the current state-of-the-art system obtains a respectably close 84.7%. Not so fast though! If we adversarially add a single sentence to those paragraphs, in such a way that the added sentences do not contradict the correct answer, nor do they confuse humans, the accuracy of the published models studied plummets from an average of 75% to just 36%.", "title": "" } ]
[ { "docid": "6b97ad3fc20e56f28ae5bf7c6fd0eb57", "text": "We propose a new model of steganography based on a list of pseudo-randomly sorted sequences of characters. Given a list L of m columns containing n distinct strings each, with low or no semantic relationship between columns taken two by two, and a secret message s ∈ {0, 1}∗, our model embeds s in L block by block, by generating, for each column of L, a permutation number and by reordering strings contained in it according to that number. Where, letting l be average bit length of a string, the embedding capacity is given by [(m − 1) ∗ log2(n! − 1)/n ∗ l]. We’ve shown that optimal efficiency of the method can be obtained with the condition that (n >> l). The results which has been obtained by experiments, show that our model performs a better hiding process than some of the important existing methods, in terms of hiding capacity.", "title": "" }, { "docid": "81cae27233c3e6a56f382dfb28c996c2", "text": "Robust face recognition (FR) is an active topic in computer vision and biometrics, while face occlusion is one of the most challenging problems for robust FR. Recently, the representation (or coding) based FR schemes with sparse coding coefficients and coding residual have demonstrated good robustness to face occlusion; however, the high complexity of l1-minimization makes them less useful in practical applications. In this paper we propose a novel coding residual map learning scheme for fast and robust FR based on the fact that occluded pixels usually have higher coding residuals when representing an occluded face image over the non-occluded training samples. A dictionary is learned to code the training samples, and the distribution of coding residuals is computed. Consequently, a residual map is learned to detect the occlusions by adaptive thresholding. Finally the face image is identified by masking the detected occlusion pixels from face representation. Experiments on benchmark databases show that the proposed scheme has much lower time complexity but comparable FR accuracy with other popular approaches.", "title": "" }, { "docid": "d5d96493b34cfbdf135776e930ec5979", "text": "We propose an approach for the static analysis of probabilistic programs that sense, manipulate, and control based on uncertain data. Examples include programs used in risk analysis, medical decision making and cyber-physical systems. Correctness properties of such programs take the form of queries that seek the probabilities of assertions over program variables. We present a static analysis approach that provides guaranteed interval bounds on the values (assertion probabilities) of such queries. First, we observe that for probabilistic programs, it is possible to conclude facts about the behavior of the entire program by choosing a finite, adequate set of its paths. We provide strategies for choosing such a set of paths and verifying its adequacy. The queries are evaluated over each path by a combination of symbolic execution and probabilistic volume-bound computations. Each path yields interval bounds that can be summed up with a \"coverage\" bound to yield an interval that encloses the probability of assertion for the program as a whole. 
We demonstrate promising results on a suite of benchmarks from many different sources including robotic manipulators and medical decision making programs.", "title": "" }, { "docid": "a45b4d0237fdcfedf973ec639b1a1a36", "text": "We investigated the brain systems engaged during propositional speech (PrSp) and two forms of non- propositional speech (NPrSp): counting and reciting overlearned nursery rhymes. Bilateral cerebral and cerebellar regions were involved in the motor act of articulation, irrespective of the type of speech. Three additional, left-lateralized regions, adjacent to the Sylvian sulcus, were activated in common: the most posterior part of the supratemporal plane, the lateral part of the pars opercularis in the posterior inferior frontal gyrus and the anterior insula. Therefore, both NPrSp and PrSp were dependent on the same discrete subregions of the anatomically ill-defined areas of Wernicke and Broca. PrSp was also dependent on a predominantly left-lateralized neural system distributed between multi-modal and amodal regions in posterior inferior parietal, anterolateral and medial temporal and medial prefrontal cortex. The lateral prefrontal and paracingulate cortical activity observed in previous studies of cued word retrieval was not seen with either NPrSp or PrSp, demonstrating that normal brain- language representations cannot be inferred from explicit metalinguistic tasks. The evidence from this study indicates that normal communicative speech is dependent on a number of left hemisphere regions remote from the classic language areas of Wernicke and Broca. Destruction or disconnection of discrete left extrasylvian and perisylvian cortical regions, rather than the total extent of damage to perisylvian cortex, will account for the qualitative and quantitative differences in the impaired speech production observed in aphasic stroke patients.", "title": "" }, { "docid": "da69ac86355c5c514f7e86a48320dcb3", "text": "Current approaches to semantic parsing, the task of converting text to a formal meaning representation, rely on annotated training data mapping sentences to logical forms. Providing this supervision is a major bottleneck in scaling semantic parsers. This paper presents a new learning paradigm aimed at alleviating the supervision burden. We develop two novel learning algorithms capable of predicting complex structures which only rely on a binary feedback signal based on the context of an external world. In addition we reformulate the semantic parsing problem to reduce the dependency of the model on syntactic patterns, thus allowing our parser to scale better using less supervision. Our results surprisingly show that without using any annotated meaning representations learning with a weak feedback signal is capable of producing a parser that is competitive with fully supervised parsers.", "title": "" }, { "docid": "33ef514ef6ea291ad65ed6c567dbff37", "text": "In this paper, we present an improved feedforward sequential memory networks (FSMN) architecture, namely Deep-FSMN (DFSMN), by introducing skip connections between memory blocks in adjacent layers. These skip connections enable the information flow across different layers and thus alleviate the gradient vanishing problem when building very deep structure. As a result, DFSMN significantly benefits from these skip connections and deep structure. We have compared the performance of DFSMN to BLSTM both with and without lower frame rate (LFR) on several large speech recognition tasks, including English and Mandarin. 
Experimental results shown that DFSMN can consistently outperform BLSTM with dramatic gain, especially trained with LFR using CD-Phone as modeling units. In the 20000 hours Fisher (FSH) task, the proposed DFSMN can achieve a word error rate of 9.4% by purely using the cross-entropy criterion and decoding with a 3-gram language model, which achieves a 1.5% absolute improvement compared to the BLSTM. In a 20000 hours Mandarin recognition task, the LFR trained DFSMN can achieve more than 20% relative improvement compared to the LFR trained BLSTM. Moreover, we can easily design the lookahead filter order of the memory blocks in DFSMN to control the latency for real-time applications.", "title": "" }, { "docid": "4829d8c0dd21f84c3afbe6e1249d6248", "text": "We present an action recognition and detection system from temporally untrimmed videos by combining motion and appearance features. Motion and appearance are two kinds of complementary cues for human action understanding from video. For motion features, we adopt the Fisher vector representation with improved dense trajectories due to its rich descriptive capacity. For appearance feature, we choose the deep convolutional neural network activations due to its recent success in image based tasks. With this fused feature of iDT and CNN, we train a SVM classifier for each action class in the one-vs-all scheme. We report both the recognition and detection results of our system on Thumos 14 Challenge. From the results, we see that our method rank 4 in the action recognition task and 2 in the action detection task.", "title": "" }, { "docid": "959a43b6b851a4a255466296efac7299", "text": "Technology in football has been debated by pundits, players and fans all over the world for the past decade. FIFA has recently commissioned the use of ‘Hawk-Eye’ and ‘Goal Ref’ goal line technology systems at the 2014 World Cup in Brazil. This paper gives an in depth evaluation of the possible technologies that could be used in football and determines the potential benefits and implications these systems could have on the officiating of football matches. The use of technology in other sports is analyzed to come to a conclusion as to whether officiating technology should be used in football. Will football be damaged by the loss of controversial incidents such as Frank Lampard’s goal against Germany at the 2010 World Cup? Will cost, accuracy and speed continue to prevent the use of officiating technology in football? Time will tell, but for now, any advancement in the use of technology in football will be met by some with discontent, whilst others see it as moving the sport into the 21 century.", "title": "" }, { "docid": "e5b2aa76e161661ea613912ba40695bd", "text": "Three meanings of “information” are distinguished: “Information-as-process”; “information-as-knowledge”; and “information-as-thing,” the attributive use of “information” to denote things regarded as informative. The nature and characteristics of “information-asthing” are discussed, using an indirect approach (“What things are informative?“). Varieties of “informationas-thing” include data, text, documents, objects, and events. On this view “information” includes but extends beyond communication. 
Whatever information storage and retrieval systems store and retrieve is necessarily “information-as-thing.” These three meanings of “information,” along with “information processing,” offer a basis for classifying disparate information-related activities (e.g., rhetoric, bibliographic retrieval, statistical analysis) and, thereby, suggest a topography for “information science.”", "title": "" }, { "docid": "44de39859665488f8df950007d7a01c6", "text": "Topic models provide insights into document collections, and their supervised extensions also capture associated document-level metadata such as sentiment. However, inferring such models from data is often slow and cannot scale to big data. We build upon the “anchor” method for learning topic models to capture the relationship between metadata and latent topics by extending the vector-space representation of word-cooccurrence to include metadataspecific dimensions. These additional dimensions reveal new anchor words that reflect specific combinations of metadata and topic. We show that these new latent representations predict sentiment as accurately as supervised topic models, and we find these representations more quickly without sacrificing interpretability. Topic models were introduced in an unsupervised setting (Blei et al., 2003), aiding in the discovery of topical structure in text: large corpora can be distilled into human-interpretable themes that facilitate quick understanding. In addition to illuminating document collections for humans, topic models have increasingly been used for automatic downstream applications such as sentiment analysis (Titov and McDonald, 2008; Paul and Girju, 2010; Nguyen et al., 2013). Unfortunately, the structure discovered by unsupervised topic models does not necessarily constitute the best set of features for tasks such as sentiment analysis. Consider a topic model trained on Amazon product reviews. A topic model might discover a topic about vampire romance. However, we often want to go deeper, discovering facets of a topic that reflect topic-specific sentiment, e.g., “buffy” and “spike” for positive sentiment vs. “twilight” and “cullen” for negative sentiment. Techniques for discovering such associations, called supervised topic models (Section 2), both produce interpretable topics and predict metadata values. While unsupervised topic models now have scalable inference strategies (Hoffman et al., 2013; Zhai et al., 2012), supervised topic model inference has not received as much attention and often scales poorly. The anchor algorithm is a fast, scalable unsupervised approach for finding “anchor words”—precise words with unique co-occurrence patterns that can define the topics of a collection of documents. We augment the anchor algorithm to find supervised sentiment-specific anchor words (Section 3). Our algorithm is faster and just as effective as traditional schemes for supervised topic modeling (Section 4). 1 Anchors: Speedy Unsupervised Models The anchor algorithm (Arora et al., 2013) begins with a V × V matrix Q̄ of word co-occurrences, where V is the size of the vocabulary. Each word type defines a vector Q̄i,· of length V so that Q̄i,j encodes the conditional probability of seeing word j given that word i has already been seen. 
Spectral methods (Anandkumar et al., 2012) and the anchor algorithm are fast alternatives to traditional topic model inference schemes because they can discover topics via these summary statistics (quadratic in the number of types) rather than examining the whole dataset (proportional to the much larger number of tokens). The anchor algorithm takes its name from the idea of anchor words—words which unambiguously identify a particular topic. For instance, “wicket” might be an anchor word for the cricket topic. Thus, for any anchor word a, Q̄a,· will look like a topic distribution. Q̄wicket,· will have high probability for “bowl”, “century”, “pitch”, and “bat”; these words are related to cricket, but they cannot be anchor words because they are also related to other topics. Because these other non-anchor words could be topically ambiguous, their co-occurrence must be explained through some combination of anchor words; thus for non-anchor word i,", "title": "" }, { "docid": "8f9c8188fb22c4aee1f7b066d24e3793", "text": "The objective of unsupervised domain adaptation is to leverage features from a labeled source domain and learn a classifier for an unlabeled target domain, with a similar but different data distribution. Most deep learning approaches to domain adaptation consist of two steps: (i) learn features that preserve a low risk on labeled samples (source domain) and (ii) make the features from both domains to be as indistinguishable as possible, so that a classifier trained on the source can also be applied on the target domain. In general, the classifiers in step (i) consist of fully-connected layers applied directly on the indistinguishable features learned in (ii). In this paper, we propose a different way to do the classification, using similarity learning. The proposed method learns a pairwise similarity function in which classification can be performed by computing similarity between prototype representations of each category. The domain-invariant features and the categorical prototype representations are learned jointly and in an end-to-end fashion. At inference time, images from the target domain are compared to the prototypes and the label associated with the one that best matches the image is outputed. The approach is simple, scalable and effective. We show that our model achieves state-of-the-art performance in different unsupervised domain adaptation scenarios.", "title": "" }, { "docid": "b6bd380108803bec62dae716d9e0a83e", "text": "With the advent of statistical modeling in sports, predicting the outcome of a game has been established as a fundamental problem. Cricket is one of the most popular team games in the world. With this article, we embark on predicting the outcome of a One Day International (ODI) cricket match using a supervised learning approach from a team composition perspective. Our work suggests that the relative team strength between the competing teams forms a distinctive feature for predicting the winner. Modeling the team strength boils down to modeling individual player’s batting and bowling performances, forming the basis of our approach. We use career statistics as well as the recent performances of a player to model him. Player independent factors have also been considered in order to predict the outcome of a match. 
We show that the k-Nearest Neighbor (kNN) algorithm yields better results as compared to other classifiers.", "title": "" }, { "docid": "c9284c30e686c1fe1b905b776b520e0e", "text": "Two decades since the idea of using software diversity for security was put forward, ASLR is the only technique to see widespread deployment. This is puzzling since academic security researchers have published scores of papers claiming to advance the state of the art in the area of code randomization. Unfortunately, these improved diversity techniques are generally less deployable than integrity-based techniques, such as control-flow integrity, due to their limited compatibility with existing optimization, development, and distribution practices. This paper contributes yet another diversity technique called pagerando. Rather than trading off practicality for security, we first and foremost aim for deployability and interoperability. Most code randomization techniques interfere with memory sharing and deduplication optimization across processes and virtual machines, ours does not. We randomize at the granularity of individual code pages but never rewrite page contents. This also avoids incompatibilities with code integrity mechanisms that only allow signed code to be mapped into memory and prevent any subsequent changes. On Android, pagerando fully adheres to the default SELinux policies. All practical mitigations must interoperate with unprotected legacy code, our implementation transparently interoperates with unmodified applications and libraries. To support our claims of practicality, we demonstrate that our technique can be integrated into and protect all shared libraries shipped with stock Android 6.0. We also consider hardening of non-shared libraries and executables and other concerns that must be addressed to put software diversity defenses on par with integrity-based mitigations such as CFI.", "title": "" }, { "docid": "9caaf7c3c2e01e8625fc566db4913df1", "text": "It is established that driver distraction is the result of sharing cognitive resources between the primary task (driving) and any other secondary task. In the case of holding conversations, a human passenger who is aware of the driving conditions can choose to interrupt his speech in situations potentially requiring more attention from the driver, but in-car information systems typically do not exhibit such sensitivity. We have designed and tested such a system in a driving simulation environment. Unlike other systems, our system delivers information via speech (calendar entries with scheduled meetings) but is able to react to signals from the environment to interrupt when the driver needs to be fully attentive to the driving task and subsequently resume its delivery. Distraction is measured by a secondary short-term memory task. In both tasks, drivers perform significantly worse when the system does not adapt its speech, while they perform equally well to control conditions (no concurrent task) when the system intelligently interrupts and resumes.", "title": "" }, { "docid": "458633abcbb030b9e58e432d5b539950", "text": "In many computer vision tasks, we expect a particular behavior of the output with respect to rotations of the input image. If this relationship is explicitly encoded, instead of treated as any other variation, the complexity of the problem is decreased, leading to a reduction in the size of the required model. 
In this paper, we propose the Rotation Equivariant Vector Field Networks (RotEqNet), a Convolutional Neural Network (CNN) architecture encoding rotation equivariance, invariance and covariance. Each convolutional filter is applied at multiple orientations and returns a vector field representing magnitude and angle of the highest scoring orientation at every spatial location. We develop a modified convolution operator relying on this representation to obtain deep architectures. We test RotEqNet on several problems requiring different responses with respect to the inputs’ rotation: image classification, biomedical image segmentation, orientation estimation and patch matching. In all cases, we show that RotEqNet offers extremely compact models in terms of number of parameters and provides results in line to those of networks orders of magnitude larger.", "title": "" }, { "docid": "74c7895313a2f98a5dd4e5c9d5c664bf", "text": "The research was conducted to identify the presence of protein by indicating amide groups and measuring its level in food through specific groups of protein using FTIR (Fourier Transformed Infrared) method. The scanning process was conducted on wavenumber 400—4000 cm -1 . The determination of functional group was being done by comparing wavenumber of amide functional groups of the protein samples to existing standard. Protein level was measured by comparing absorbance of protein specific functional groups to absorbance of fatty acid functional groups. Result showed the FTIR spectrums of all samples were on 557-3381 cm -1 wavenumber range. The amides detected were Amide III, IV, and VI with absorbance between trace until 0.032%. The presence of protein can be detected in samples animal and vegetable cheese, butter, and milk through functional groups of amide III, IV, and VI were on 1240-1265 cm -1 , 713-721 cm -1 , and 551-586 cm -1 wavenumber respectively . Urine was detected through functional groups of amide III and IV were on 1639 cm -1 and 719 cm -1 wavenumber. The protein level of animal cheese, vegetable cheese, butter, and milk were 1.01%, 1.0%, 0.86%, and 1.55% respectively.", "title": "" }, { "docid": "5ab4bb5923bf589436651783a6627a0d", "text": "A capacity fade prediction model has been developed for Li-ion cells based on a semi-empirical approach. Correlations for variation of capacity fade parameters with cycling were obtained with two different approaches. The first approach takes into account only the active material loss, while the second approach includes rate capability losses too. Both methods use correlations for variation of the film resistance with cycling. The state of charge (SOC) of the limiting electrode accounts for the active material loss. The diffusion coefficient of the limiting electrode was the parameter to account for the rate capability losses during cycling. © 2003 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "d010a2f8240ff9f6704cde917cb85cf0", "text": "OBJECTIVE\nAlthough psychological modulation of immune function is now a well-established phenomenon, much of the relevant literature has been published within the last decade. This article speculates on future directions for psychoneuroimmunology research, after reviewing the history of the field.\n\n\nMETHODS\nThis review focuses on human psychoneuroimmunology studies published since 1939, particularly those that have appeared in Psychosomatic Medicine. 
Studies were clustered according to key themes, including stressor duration and characteristics (laboratory stressors, time-limited naturalistic stressors, or chronic stress), as well as the influences of psychopathology, personality, and interpersonal relationships; the responsiveness of the immune system to behavioral interventions is also addressed. Additionally, we describe trends in populations studied and the changing nature of immunological assessments. The final section focuses on health outcomes and future directions for the field.\n\n\nRESULTS\nThere are now sufficient data to conclude that immune modulation by psychosocial stressors or interventions can lead to actual health changes, with the strongest direct evidence to date in infectious disease and wound healing. Furthermore, recent medical literature has highlighted a spectrum of diseases whose onset and course may be influenced by proinflammatory cytokines, from cardiovascular disease to frailty and functional decline; proinflammatory cytokine production can be directly stimulated by negative emotions and stressful experiences and indirectly stimulated by chronic or recurring infections. Accordingly, distress-related immune dysregulation may be one core mechanism behind a diverse set of health risks associated with negative emotions.\n\n\nCONCLUSIONS\nWe suggest that psychoneuroimmunology may have broad implications for the basic biological sciences and medicine.", "title": "" }, { "docid": "833786dcf2288f21343d60108819fe49", "text": "This paper describes an audio event detection system which automatically classifies an audio event as ambient noise, scream or gunshot. The classification system uses two parallel GMM classifiers for discriminating screams from noise and gunshots from noise. Each classifier is trained using different features, appropriately chosen from a set of 47 audio features, which are selected according to a 2-step process. First, feature subsets of increasing size are assembled using filter selection heuristics. Then, a classifier is trained and tested with each feature subset. The obtained classification performance is used to determine the optimal feature vector dimension. This allows a noticeable speed-up w.r.t. wrapper feature selection methods. In order to validate the proposed detection algorithm, we carried out extensive experiments on a rich set of gunshots and screams mixed with ambient noise at different SNRs. Our results demonstrate that the system is able to guarantee a precision of 90% at a false rejection rate of 8%.", "title": "" }, { "docid": "4e6ca2d20e904a0eb72fcdcd1164a5e2", "text": "Fraudulent activities (e.g., suspicious credit card transaction, financial reporting fraud, and money laundering) are critical concerns to various entities including bank, insurance companies, and public service organizations. Typically, these activities lead to detrimental effects on the victims such as a financial loss. Over the years, fraud analysis techniques underwent a rigorous development. However, lately, the advent of Big data led to vigorous advancement of these techniques since Big Data resulted in extensive opportunities to combat financial frauds. Given that the massive amount of data that investigators need to sift through, massive volumes of data integrated from multiple heterogeneous sources (e.g., social media, blogs) to find fraudulent patterns is emerging as a feasible approach.", "title": "" } ]
scidocsrr
e6fa79cd48df7d3f335e4edd8a191033
The Constant Comparative Analysis Method Outside of Grounded Theory
[ { "docid": "e5b543b8880ec436874bee6b03a58618", "text": "This paper outlines my concerns with Qualitative Data Analysis’ (QDA) numerous remodelings of Grounded Theory (GT) and the subsequent eroding impact. I cite several examples of the erosion and summarize essential elements of classic GT methodology. It is hoped that the article will clarify my concerns with the continuing enthusiasm but misunderstood embrace of GT by QDA methodologists and serve as a preliminary guide to novice researchers who wish to explore the fundamental principles of GT.", "title": "" } ]
[ { "docid": "e5fb0cb43868a0a8584a515f8fbb1e20", "text": "Nowadays organizations generate large amount of data. Only a few make a good use to optimize the performance of the business. Process mining appears as a branch of the data science that tries to understand the actual operational processes in the organizations through different algorithms, allowing the discovery of process models to give insight of the processes and understand how they can be improved. In this work different process mining techniques are applied to a company dedicated to the advertisement market, specifically the process of dealing with contract issues with customers. The Process Mining Project Methodology was followed to execute a case study. Additional to the basic methodology, elements from the others areas of studies were added to generate better results and have a better understanding of the problem. The case study includes three scenarios with three different hypotheses that were validated through our method.", "title": "" }, { "docid": "4239d27174101a90374b48acf0a88325", "text": "Recent advances in manufacturing industry, and notably in the Industry 4.0 context, promote the development of CPSs and consequently give rise to a number of issues to be solved. The present paper describes the context of the extension of mechatronic systems to cyber-physical ones, firstly by highlighting their similarities and differences, and then by underlining the current needs for CPSs in the manufacturing sector. Then, the paper presents the main research issues related to CPS design and, in particular, the needs for an integrated and multi-scale designing approach to prevent conflicts across different design domains early enough within the CPS development process. To this aim, the impact of the extension from mechatronic to Cyber-Physical Systems on their design is examined through a set of existing related modelling techniques. The multi-scalability requirement of these techniques is firstly described, concerning external/internal interactions, process control, behaviour simulation, representation of topological relationships and interoperability through a multi-agent platform, and then applied to the case study of a tablets manufacturing process. Finally, the proposed holistic description of such a multi-scale manufacturing CPS allows to outline the main characteristics of a modelling-simulation platform, able notably to bridge the semantic gaps existing between the different designing disciplines and specialised domains. © 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "dac5090c367ef05c8863da9c7979a619", "text": "Full vinyl polysiloxane casts of the vagina were obtained from 23 Afro-American, 39 Caucasian and 15 Hispanic women in lying, sitting and standing positions. A new shape, the pumpkin seed, was found in 40% of Afro-American women, but not in Caucasians or Hispanics. Analyses of cast and introital measurements revealed: (1) posterior cast length is significantly longer, anterior cast length is significantly shorter and cast width is significantly larger in Hispanics than in the other two groups and (2) the Caucasian introitus is significantly greater than that of the Afro-American subject.", "title": "" }, { "docid": "19b9445fb89be143d1c32691e5e3a64b", "text": "The typical approach for solving the problem of single-image super-resolution (SR) is to learn a nonlinear mapping between the low-resolution (LR) and high-resolution (HR) representations of images in a training set. 
Training-based approaches can be tuned to give high accuracy on a given class of images, but they call for retraining if the HR <inline-formula><tex-math notation=\"LaTeX\">$\\rightarrow$</tex-math></inline-formula> LR generative model deviates or if the test images belong to a different class, which limits their applicability. On the other hand, we propose a solution that does not require a training dataset. Our method relies on constructing a dynamic convolutional network (DCN) to learn the relation between the consecutive scales of Gaussian and Laplacian pyramids. The relation is in turn used to predict the detail at a finer scale, thus leading to SR. Comparisons with state-of-the-art techniques on standard datasets show that the proposed DCN approach results in about 0.8 and 0.3 dB gain in peak signal-to-noise ratio for <inline-formula><tex-math notation=\"LaTeX\">$2\\times$</tex-math></inline-formula> and <inline-formula><tex-math notation=\"LaTeX\">$3\\times$</tex-math></inline-formula> SR, respectively. The structural similarity index is on par with the competing techniques.", "title": "" }, { "docid": "c9431b5a214dba08ca50706a27b2af7c", "text": "For artificial general intelligence (AGI) it would be efficient if multiple users trained the same giant neural network, permitting parameter reuse, without catastrophic forgetting. PathNet is a first step in this direction. It is a neural network algorithm that uses agents embedded in the neural network whose task is to discover which parts of the network to re-use for new tasks. Agents are pathways (views) through the network which determine the subset of parameters that are used and updated by the forwards and backwards passes of the backpropogation algorithm. During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation. Pathway fitness is the performance of that pathway measured according to a cost function. We demonstrate successful transfer learning; fixing the parameters along a path learned on task A and re-evolving a new population of paths for task B, allows task B to be learned faster than it could be learned from scratch or after fine-tuning. Paths evolved on task B re-use parts of the optimal path evolved on task A. Positive transfer was demonstrated for binary MNIST, CIFAR, and SVHN supervised learning classification tasks, and a set of Atari and Labyrinth reinforcement learning tasks, suggesting PathNets have general applicability for neural network training. Finally, PathNet also significantly improves the robustness to hyperparameter choices of a parallel asynchronous reinforcement learning algorithm (A3C).", "title": "" }, { "docid": "6f31b0ba60dccb6f1c4ac3e4161f8a44", "text": "In this work, we propose an alternative solution for parallel wave generation by WaveNet. In contrast to parallel WaveNet (Oord et al., 2018), we distill a Gaussian inverse autoregressive flow from the autoregressive WaveNet by minimizing a novel regularized KL divergence between their highly-peaked output distributions. Our method computes the KL divergence in closed-form, which simplifies the training algorithm and provides very efficient distillation. In addition, we propose the first text-to-wave neural architecture for speech synthesis, which is fully convolutional and enables fast end-to-end training from scratch. It significantly outperforms the previous pipeline that connects a text-to-spectrogram model to a separately trained WaveNet (Ping et al., 2018). 
We also successfully distill a parallel waveform synthesizer conditioned on the hidden representation in this end-to-end model. 2", "title": "" }, { "docid": "4a6c2d388bb114751b2ce9c6df55beab", "text": "To support people trying to lose weight and stay healthy, more and more fitness apps have sprung up including the ability to track both calories intake and expenditure. Users of such apps are part of a wider \"quantified self\" movement and many opt-in to publicly share their logged data. In this paper, we use public food diaries of more than 4,000 long-term active MyFitnessPal users to study the characteristics of a (un-)successful diet. Concretely, we train a machine learning model to predict repeatedly being over or under self-set daily calories goals and then look at which features contribute to the model's prediction. Our findings include both expected results, such as the token \"mcdonalds\" or the category \"dessert\" being indicative for being over the calories goal, but also less obvious ones such as the difference between pork and poultry concerning dieting success, or the use of the \"quick added calories\" functionality being indicative of over-shooting calorie-wise. This study also hints at the feasibility of using such data for more in-depth data mining, e.g., looking at the interaction between consumed foods such as mixing protein- and carbohydrate-rich foods. To the best of our knowledge, this is the first systematic study of public food diaries.", "title": "" }, { "docid": "fcbd256ad05ef96c9f2997fbfbace473", "text": "The Internet of Things (IoT) envisions a world-wide, interconnected network of smart physical entities. These physical entities generate a large amount of data in operation, and as the IoT gains momentum in terms of deployment, the combined scale of those data seems destined to continue to grow. Increasingly, applications for the IoT involve analytics. Data analytics is the process of deriving knowledge from data, generating value like actionable insights from them. This article reviews work in the IoT and big data analytics from the perspective of their utility in creating efficient, effective, and innovative applications and services for a wide spectrum of domains. We review the broad vision for the IoT as it is shaped in various communities, examine the application of data analytics across IoT domains, provide a categorisation of analytic approaches, and propose a layered taxonomy from IoT data to analytics. This taxonomy provides us with insights on the appropriateness of analytical techniques, which in turn shapes a survey of enabling technology and infrastructure for IoT analytics. Finally, we look at some tradeoffs for analytics in the IoT that can shape future research.", "title": "" }, { "docid": "14d3712efca71981103ba3ab44c39dd2", "text": "This paper is survey of computational approaches for paraphrasing. Paraphrasing methods such as generation, identification and acquisition of phrases or sentences is a process that conveys same information. Paraphrasing is a process of expressing semantic content of source using different words to achieve the greater clarity. The task of generating or identifying the semantic equivalence for different elements of language such as words sentences; is an essential part of the natural language processing. Paraphrasing is being used for various natural language applications. 
This paper discuses paraphrase impact on few applications and also various paraphrasing methods.", "title": "" }, { "docid": "90e218a8ae79dc1d53e53d4eb63839b8", "text": "Doubly fed induction generator (DFIG) technology is the dominant technology in the growing global market for wind power generation, due to the combination of variable-speed operation and a cost-effective partially rated power converter. However, the DFIG is sensitive to dips in supply voltage and without specific protection to “ride-through” grid faults, a DFIG risks damage to its power converter due to overcurrent and/or overvoltage. Conventional converter protection via a sustained period of rotor-crowbar closed circuit leads to poor power output and sustained suppression of the stator voltages. A new minimum-threshold rotor-crowbar method is presented in this paper, improving fault response by reducing crowbar application periods to 11-16 ms, successfully diverting transient overcurrents, and restoring good power control within 45 ms of both fault initiation and clearance, thus enabling the DFIG to meet grid-code fault-ride-through requirements. The new method is experimentally verified and evaluated using a 7.5-kW test facility.", "title": "" }, { "docid": "6b25852df72c26b1467d4c51213ca122", "text": "This paper presents a study of spectral clustering-based approaches to acoustic segment modeling (ASM). ASM aims at finding the underlying phoneme-like speech units and building the corresponding acoustic models in the unsupervised setting, where no prior linguistic knowledge and manual transcriptions are available. A typical ASM process involves three stages, namely initial segmentation, segment labeling, and iterative modeling. This work focuses on the improvement of segment labeling. Specifically, we use posterior features as the segment representations, and apply spectral clustering algorithms on the posterior representations. We propose a Gaussian component clustering (GCC) approach and a segment clustering (SC) approach. GCC applies spectral clustering on a set of Gaussian components, and SC applies spectral clustering on a large number of speech segments. Moreover, to exploit the complementary information of different posterior representations, a multiview segment clustering (MSC) approach is proposed. MSC simultaneously utilizes multiple posterior representations to cluster speech segments. To address the computational problem of spectral clustering in dealing with large numbers of speech segments, we use inner product similarity graph and make reformulations to avoid the explicit computation of the affinity matrix and Laplacian matrix. We carried out two sets of experiments for evaluation. First, we evaluated the ASM accuracy on the OGI-MTS dataset, and it was shown that our approach could yield 18.7% relative purity improvement and 15.1% relative NMI improvement compared with the baseline approach. Second, we examined the performances of our approaches in the real application of zero-resource query-by-example spoken term detection on SWS2012 dataset, and it was shown that our approaches could provide consistent improvement on four different testing scenarios with three evaluation metrics.", "title": "" }, { "docid": "dfee7f5f17ff6b0527823ae920b9977a", "text": "This paper introduces a Linux audio application that provides an integrated solution for making full 3-D Ambisonics recordings by using a tetrahedral microphone. 
Apart from the basic A to B format conversion it performs a number of auxiliary functions such as LF filtering, metering and monitoring, turning it into a complete Ambisonics recording processor. It also allows for calibration of an individual microphone unit based on measured impulse responses. A new JACK backend required to make use of a particular four-channel audio interface optimised for Ambisonic recording is also introduced.", "title": "" }, { "docid": "af0df66f001ffd9601ac3c89edf6af0f", "text": "State-of-the-art speech recognition systems rely on fixed, handcrafted features such as mel-filterbanks to preprocess the waveform before the training pipeline. In this paper, we study end-toend systems trained directly from the raw waveform, building on two alternatives for trainable replacements of mel-filterbanks that use a convolutional architecture. The first one is inspired by gammatone filterbanks (Hoshen et al., 2015; Sainath et al, 2015), and the second one by the scattering transform (Zeghidour et al., 2017). We propose two modifications to these architectures and systematically compare them to mel-filterbanks, on the Wall Street Journal dataset. The first modification is the addition of an instance normalization layer, which greatly improves on the gammatone-based trainable filterbanks and speeds up the training of the scattering-based filterbanks. The second one relates to the low-pass filter used in these approaches. These modifications consistently improve performances for both approaches, and remove the need for a careful initialization in scattering-based trainable filterbanks. In particular, we show a consistent improvement in word error rate of the trainable filterbanks relatively to comparable mel-filterbanks. It is the first time end-to-end models trained from the raw signal significantly outperform mel-filterbanks on a large vocabulary task under clean recording conditions.", "title": "" }, { "docid": "a377b31c0cb702c058f577ca9c3c5237", "text": "Problem statement: Extensive research efforts in the area of Natural L anguage Processing (NLP) were focused on developing reading comprehens ion Question Answering systems (QA) for Latin based languages such as, English, French and German . Approach: However, little effort was directed towards the development of such systems for bidirec tional languages such as Arabic, Urdu and Farsi. In general, QA systems are more sophisticated and more complex than Search Engines (SE) because they seek a specific and somewhat exact answer to the query. Results: Existing Arabic QA system including the most recent described excluded one or both types of questions (How and Why) from their work because of the difficulty of handling these questions. In this study, we present a new approach and a new questio nanswering system (QArabPro) for reading comprehensi on texts in Arabic. The overall accuracy of our system is 84%. Conclusion/Recommendations: These results are promising compared to existing systems. Our system handles all types of questions including (How and why).", "title": "" }, { "docid": "e0b3ef309047e59849d5f4381603b378", "text": "Thermistor characteristic equation directly determines the temperature measurement accuracy. Three fitting equation of NTC thermistors and their corresponding mathematic solutions are introduced. An adaptive algorithm based on cross-validation is proposed to determine the degree of chebyshev polynomials equation. 
The experiment indicates that the method of least squares for Steinhart-Hart equation and chebyshev polynomials equation has higher accuracy, and the equation determined by adaptive algorithm for the chebyshev polynomials method has better performance.", "title": "" }, { "docid": "d9f2abb9735b449b622f94e5af346364", "text": "Abstract—The goal of this paper is to present an addressing scheme that allows for assigning a unique IPv6 address to each node in the Internet of Things (IoT) network. This scheme guarantees uniqueness by extracting the clock skew of each communication device and converting it into an IPv6 address. Simulation analysis confirms that the presented scheme provides reductions in terms of energy consumption, communication overhead and response time as compared to four studied addressing schemes Strong DAD, LEADS, SIPA and CLOSA.", "title": "" }, { "docid": "c09fc633fd17919f45ccc56c4a28ceef", "text": "The 6-pole UHF helical resonators filter was designed, simulated, fabricated, and tested. The design factors, simulation results, filter performance characteristics are presented in this paper. The coupling of helical resonators was designed using a mode-matching technique. The design procedures are simple, and measured performance is excellent. The simulated and measured results show the validity of the proposed design method.", "title": "" }, { "docid": "5f4235a8f9095afe6697c9fdb00e0a43", "text": "Typically, firms decide whether or not to develop a new product based on their resources, capabilities and the return on investment that the product is estimated to generate. We propose that firms adopt a broader heuristic for making new product development choices. Our heuristic approach requires moving beyond traditional finance-based thinking, and suggests that firms concentrate on technological trajectories by combining technology roadmapping, information technology (IT) and supply chain management to make more sustainable new product development decisions. Using the proposed holistic heuristic methods, versus relying on traditional finance-based decision-making tools (e.g., emphasizing net present value or internal rate of return projections), enables firms to plan beyond the short-term and immediate set of technologies at hand. Our proposed heuristic approach enables firms to forecast technologies and markets, and hence, new product priorities in the longer term. Investments in new products should, as a result, generate returns over a longer period than traditionally expected, giving firms more sustainable investments. New products are costly and need to have a 0040-1625/$ – see front matter D 2003 Elsevier Inc. All rights reserved. doi:10.1016/S0040-1625(03)00064-7 * Corresponding author. Tel.: +1-814-863-7133. E-mail addresses: ijpetrick@psu.edu (I.J. Petrick), aie1@psu.edu (A.E. Echols). 1 Tel.: +1-814-863-0642. I.J. Petrick, A.E. Echols / Technological Forecasting & Social Change 71 (2004) 81–100 82 durable presence in the market. Transaction costs and resources will be saved, as firms make new product development decisions less frequently. D 2003 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "23023d2f54433e05d7dd2a799e1c522d", "text": "The possibility of approximating a continuous function on a compact subset of the real line by a feedforward single hidden layer neural network with a sigmoidal activation function has been studied in many papers. 
Such networks can approximate an arbitrary continuous function provided that an unlimited number of neurons in a hidden layer is permitted. In this note, we consider constructive approximation on any finite interval of the real line by neural networks with only one neuron in the hidden layer. We construct algorithmically a smooth, sigmoidal, almost monotone activation function providing approximation to an arbitrary continuous function within any degree of accuracy. This algorithm is implemented in a computer program, which computes the value of the constructed activation function at any reasonable point of the real axis.", "title": "" }, { "docid": "5f6f0bd98fa03e4434fabe18642a48bc", "text": "Previous research suggests that women's genital arousal is an automatic response to sexual stimuli, whereas men's genital arousal is dependent upon stimulus features specific to their sexual interests. In this study, we tested the hypothesis that a nonhuman sexual stimulus would elicit a genital response in women but not in men. Eighteen heterosexual women and 18 heterosexual men viewed seven sexual film stimuli, six human films and one nonhuman primate film, while measurements of genital and subjective sexual arousal were recorded. Women showed small increases in genital arousal to the nonhuman stimulus and large increases in genital arousal to both human male and female stimuli. Men did not show any genital arousal to the nonhuman stimulus and demonstrated a category-specific pattern of arousal to the human stimuli that corresponded to their stated sexual orientation. These results suggest that stimulus features necessary to evoke genital arousal are much less specific in women than in men.", "title": "" } ]
scidocsrr
98330ebbfb9f541d6ca2ee49c108b574
AM An Ontology-Based Forensic Analysis Tool
[ { "docid": "0305bac1e39203b49b794559bfe0b376", "text": "The emerging field of semantic web technologies promises a new stimulus for Software Engineering research. However, since the underlying concepts of the semantic web have a long tradition in the knowledge engineering field, it is sometimes hard for software engineers to get an overview of the variety of ontology-enabled approaches to Software Engineering. In this paper we therefore present some examples of ontology applications throughout the Software Engineering lifecycle. We discuss the advantages of ontologies in each case and provide a framework for classifying the usage of ontologies in Software Engineering.", "title": "" } ]
[ { "docid": "88f43c85c32254a5c2859e983adf1c43", "text": "This study observed naturally occurring emergent leadership behavior in distributed virtual teams. The goal of the study was to understand how leadership behaviors emerge and are distributed in these kinds of teams. Archived team interaction captured during the course of a virtual collaboration exercise was analyzed using an a priori content analytic scheme derived from behaviorally-based leadership theory to capture behavior associated with leadership in virtual environments. The findings lend support to the notion that behaviorally-based leadership theory can provide insights into emergent leadership in virtual environments. This study also provides additional insights into the patterns of leadership that emerge in virtual environments and relationship to leadership behaviors.", "title": "" }, { "docid": "a066ff1b4dfa65a67b79200366021542", "text": "OBJECTIVES\nWe sought to assess the shave biopsy technique, which is a new surgical procedure for complete removal of longitudinal melanonychia. We evaluated the quality of the specimen submitted for pathological examination, assessed the postoperative outcome, and ascertained its indication between the other types of matrix biopsies.\n\n\nDESIGN\nThis was a retrospective study performed at the dermatologic departments of the Universities of Liège and Brussels, Belgium, of 30 patients with longitudinal or total melanonychia.\n\n\nRESULTS\nPathological diagnosis was made in all cases; 23 patients were followed up during a period of 6 to 40 months. Seventeen patients had no postoperative nail plate dystrophy (74%) but 16 patients had recurrence of pigmentation (70%).\n\n\nLIMITATIONS\nThis was a retrospective study.\n\n\nCONCLUSIONS\nShave biopsy is an effective technique for dealing with nail matrix lesions that cause longitudinal melanonychia over 4 mm wide. Recurrence of pigmentation is the main drawback of the procedure.", "title": "" }, { "docid": "b899a5effd239f1548128786d5ae3a8f", "text": "As fault diagnosis and prognosis systems in aerospace applications become more capable, the ability to utilize information supplied by them becomes increasingly important. While certain types of vehicle health data can be effectively processed and acted upon by crew or support personnel, others, due to their complexity or time constraints, require either automated or semi-automated reasoning. Prognostics-enabled Decision Making (PDM) is an emerging research area that aims to integrate prognostic health information and knowledge about the future operating conditions into the process of selecting subsequent actions for the system. The newly developed PDM algorithms require suitable software and hardware platforms for testing under realistic fault scenarios. The paper describes the development of such a platform, based on the K11 planetary rover prototype. A variety of injectable fault modes are being investigated for electrical, mechanical, and power subsystems of the testbed, along with methods for data collection and processing. In addition to the hardware platform, a software simulator with matching capabilities has been developed. The simulator allows for prototyping and initial validation of the algorithms prior to their deployment on the K11. The simulator is also available to the PDM algorithms to assist with the reasoning process. 
A reference set of diagnostic, prognostic, and decision making algorithms is also described, followed by an overview of the current test scenarios and the results of their execution on the simulator. Edward Balaban et.al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 United States License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.", "title": "" }, { "docid": "2a451c58ee4d7959857a3a7a0397300d", "text": "The Software Defined Networking (SDN) paradigm introduces separation of data and control planes for flow-switched networks and enables different approaches to network security than those existing in present IP networks. The centralized control plane, i.e. the SDN controller, can host new security services that profit from the global view of the network and from direct control of switches. Some security services can be deployed as external applications that communicate with the controller. Due to the fact that all unknown traffic must be transmitted for investigation to the controller, maliciously crafted traffic can lead to Denial Of Service (DoS) attack on it. In this paper we analyse features of SDN in the context of security application. Additionally we point out some aspects of SDN networks that, if changed, could improve SDN network security capabilities. Moreover, the last section of the paper presents a detailed description of security application that detects a broad kind of malicious activity using key features of SDN architecture.", "title": "" }, { "docid": "52cde6191c79d085127045a62deacf31", "text": "Deep Reinforcement Learning methods have achieved state of the art performance in learning control policies for the games in the Atari 2600 domain. One of the important parameters in the Arcade Learning Environment (ALE, [Bellemare et al., 2013]) is the frame skip rate. It decides the granularity at which agents can control game play. A frame skip value of k allows the agent to repeat a selected action k number of times. The current state of the art architectures like Deep QNetwork (DQN, [Mnih et al., 2015]) and Dueling Network Architectures (DuDQN, [Wang et al., 2015]) consist of a framework with a static frame skip rate, where the action output from the network is repeated for a fixed number of frames regardless of the current state. In this paper, we propose a new architecture, Dynamic Frame skip Deep Q-Network (DFDQN) which makes the frame skip rate a dynamic learnable parameter. This allows us to choose the number of times an action is to be repeated based on the current state. We show empirically that such a setting improves the performance on relatively harder games like Seaquest.", "title": "" }, { "docid": "bd4a803ab3fe729b77b5becfbcc83443", "text": "Recent work has shown impressive success in transferring painterly style to images. These approaches, however, fall short of photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. In this paper we propose an approach that takes as input a stylized image and makes it more photorealistic. It relies on the Screened Poisson Equation, maintaining the fidelity of the stylized image while constraining the gradients to those of the original input image. Our method is fast, simple, fully automatic and shows positive progress in making a stylized image photorealistic. 
Our results exhibit finer details and are less prone to artifacts than the state-of-the-art.", "title": "" }, { "docid": "06ae65d560af6e99cdc96495d32379d1", "text": "Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain-signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation task (RSVP). For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as outperform traditional within-subject calibration techniques when limited data is available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step-forward in the overall goal of achieving a practical user-independent BCI system.", "title": "" }, { "docid": "0e5fc650834d883e291c2cf4ace91d35", "text": "The majority of practitioners express software requirements using natural text notations such as user stories. Despite the readability of text, it is hard for people to build an accurate mental image of the most relevant entities and relationships. Even converting requirements to conceptual models is not sufficient: as the number of requirements and concepts grows, obtaining a holistic view of the requirements becomes increasingly difficult and, eventually, practically impossible. In this paper, we introduce and experiment with a novel, automated method for visualizing requirements—by showing the concepts the text references and their relationships—at different levels of granularity. We build on two pillars: (i) clustering techniques for grouping elements into coherent sets so that a simplified overview of the concepts can be created, and (ii) state-of-the-art, corpus-based semantic relatedness algorithms between words to measure the extent to which two concepts are related. We build a proof-of-concept tool and evaluate our approach by applying it to requirements from four real-world data sets.", "title": "" }, { "docid": "83c5a4f830ca5ae92038828f45fdbe79", "text": "This study aimed to compare the effects of repeated restraint stress alone and the combination with clomipramine treatment on parameters of oxidative stress in cerebral cortex, striatum and hippocampus of male rats. Animals were divided into control and repeated restraint stress, and subdivided into treated or not with clomipramine. After 40 days of stress and 27 days of clomipramine treatment with 30 mg/kg, the repeated restraint stress alone reduced levels of Na+, K+-ATPase in all tissues studied. The combination of repeated restraint stress and clomipramine increased the lipid peroxidation, free radicals and CAT activity as well as decreased levels of NP-SH in the tissues studied. 
However, Na+, K+-ATPase level decreased in striatum and cerebral cortex and the SOD activity increased in hippocampus and striatum. Results indicated that clomipramine may have deleterious effects on the central nervous system especially when associated with repeated restraint stress and chronically administered in non therapeutic levels.", "title": "" }, { "docid": "d4cdea26217e90002a3c4522124872a2", "text": "Recently, several methods for single image super-resolution(SISR) based on deep neural networks have obtained high performance with regard to reconstruction accuracy and computational performance. This paper details the methodology and results of the New Trends in Image Restoration and Enhancement (NTIRE) challenge. The task of this challenge is to restore rich details (high frequencies) in a high resolution image for a single low resolution input image based on a set of prior examples with low and corresponding high resolution images. The challenge has two tracks. We present a super-resolution (SR) method, which uses three losses assigned with different weights to be regarded as optimization target. Meanwhile, the residual blocks are also used for obtaining significant improvement in the evaluation. The final model consists of 9 weight layers with four residual blocks and reconstructs the low resolution image with three color channels simultaneously, which shows better performance on these two tracks and benchmark datasets.", "title": "" }, { "docid": "c0025b54f12b3f813d2b51549320821f", "text": "BACKGROUND\nDespite the pervasive use of smartphones among university students, there is still a dearth of research examining the association between smartphone use and psychological well-being among this population. The current study addresses this research gap by investigating the relationship between smartphone use and psychological well-being among university students in Thailand.\n\n\nMETHODS\nThis cross-sectional study was conducted from January to March 2018 among university students aged 18-24 years from the largest university in Chiang Mai, Thailand. The primary outcome was psychological well-being, and was assessed using the Flourishing Scale. Smartphone use, the primary independent variable, was measured by five items which had been adapted from the eight-item Young Diagnostic Questionnaire for Internet Addiction. All scores above the median value were defined as being indicative of excessive smartphone use.\n\n\nRESULTS\nOut of the 800 respondents, 405 (50.6%) were women. In all, 366 (45.8%) students were categorized as being excessive users of smartphones. Students with excessive use of smartphones had lower scores the psychological well-being than those who did not use smartphone excessively (B = -1.60; P < 0.001). Female students had scores for psychological well-being that were, on average, 1.24 points higher than the scores of male students (P < 0.001).\n\n\nCONCLUSION\nThis study provides some of the first insights into the negative association between excessive smartphone use and the psychological well-being of university students. Strategies designed to promote healthy smartphone use could positively impact the psychological well-being of students.", "title": "" }, { "docid": "4952d426d0f2aed1daea234595dcd901", "text": "Clustering analysis is a primary method for data mining. Density clustering has such advantages as: its clusters are easy to understand and it does not limit itself to shapes of clusters. 
But existing density-based algorithms have trouble in finding out all the meaningful clusters for datasets with varied densities. This paper introduces a new algorithm called VDBSCAN for the purpose of varied-density datasets analysis. The basic idea of VDBSCAN is that, before adopting traditional DBSCAN algorithm, some methods are used to select several values of parameter Eps for different densities according to a k-dist plot. With different values of Eps, it is possible to find out clusters with varied densities simultaneity. For each value of Eps, DBSCAN algorithm is adopted in order to make sure that all the clusters with respect to corresponding density are clustered. And for the next process, the points that have been clustered are ignored, which avoids marking both denser areas and sparser ones as one cluster. Finally, a synthetic database with 2-dimension data is used for demonstration, and experiments show that VDBSCAN is efficient in successfully clustering uneven datasets.", "title": "" }, { "docid": "8eccba18f7729696c93ec603dd3adf82", "text": "According to a study released this July by Juniper Research, more than half the world's largest companies are now researching blockchain technologies with the goal of integrating them into their products. Projects are already under way that will disrupt the management of health care records, property titles, supply chains, and even our online identities. But before we remount the entire digital ecosystem on blockchain technology, it would be wise to take stock of what makes the approach unique and what costs are associated with it. Blockchain technology is, in essence, a novel way to manage data. As such, it competes with the data-management systems we already have. Relational databases, which orient information in updatable tables of columns and rows, are the technical foundation of many services we use today. Decades of market exposure and well-funded research by companies like Oracle Corp. have expanded the functionality and hardened the security of relational databases. However, they suffer from one major constraint: They put the task of storing and updating entries in the hands of one or a few entities, whom you have to trust won't mess with the data or get hacked.", "title": "" }, { "docid": "529b961ab285ff7f59276d680737e5fb", "text": "Clinical, electrophysiological and histological findings in four patients accidentally poisoned with the organophosphorus insecticide Dipterex are reported. Three to five weeks after insecticide ingestion signs of a distal sensorimotor (preponderantly motor) neuropathy occurred. The patients complained of paraesthesia in the lower limbs, and two of them of very disagreeable pricking sensation in the soles of the feet, responsive to carbamazepine. They showed distal weakness mainly of the legs, footdrop , difficult gait and muscle hypotonia. Ankle jerk was abolished while other tendon reflexes persisted. Two months or even later after poisoning, knee jerks in all the patients were very brisk and more and less accompanied by other pyramidal signs (patellar clonus, abolishment of abdominal cutaneous reflexes, Babinski's sign). Clinical, electrophysiological and nerve biopsy data revealed a \"dying-back\" neuropathy in our patients. 
Distal muscle fatigue was confirmed by failure of neuromuscular transmission on repetitive nerve stimulation.", "title": "" }, { "docid": "4c627f29b8006b81f4a2415004775cf9", "text": "Autonomous learning has been a promising direction in control and robotics for more than a decade since data-driven learning allows to reduce the amount of engineering knowledge, which is otherwise required. However, autonomous reinforcement learning (RL) approaches typically require many interactions with the system to learn controllers, which is a practical limitation in real systems, such as robots, where many interactions can be impractical and time consuming. To address this problem, current learning approaches typically require task-specific knowledge in form of expert demonstrations, realistic simulators, pre-shaped policies, or specific knowledge about the underlying dynamics. In this paper, we follow a different approach and speed up learning by extracting more information from data. In particular, we learn a probabilistic, non-parametric Gaussian process transition model of the system. By explicitly incorporating model uncertainty into long-term planning and controller learning our approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the art RL our model-based policy search method achieves an unprecedented speed of learning. We demonstrate its applicability to autonomous learning in real robot and control tasks.", "title": "" }, { "docid": "3ad25dabe3b740a91b939a344143ea9e", "text": "Recently, much attention in research and practice has been devoted to the topic of IT consumerization, referring to the adoption of private consumer IT in the workplace. However, research lacks an analysis of possible antecedents of the trend on an individual level. To close this gap, we derive a theoretical model for IT consumerization behavior based on the theory of planned behavior and perform a quantitative analysis. Our investigation shows that it is foremost determined by normative pressures, specifically the behavior of friends, co-workers and direct supervisors. In addition, behavioral beliefs and control beliefs were found to affect the intention to use non-corporate IT. With respect to the former, we found expected performance improvements and an increase in ease of use to be two of the key determinants. As for the latter, especially monetary costs and installation knowledge were correlated with IT consumerization intention.", "title": "" }, { "docid": "f971c7374e75fc82896db4b8a4a8a999", "text": "Body image disturbance and body dysmorphic disorder (BDD) have been researched from a variety of psychological approaches. Psychological inflexibility, or avoidance of one's own cognitive and affective states at a cost to personal values, may be a useful construct to understand these problems. In an effort to clarify the role of psychological inflexibility in body image disturbance and BDD, a measure was created based on the principles of Acceptance and Commitment Therapy (ACT). The scale was developed by generating new items to represent the construct and revising items from an existing scale measuring aspects of body image psychological inflexibility. The study was conducted with an ethnically diverse undergraduate population using three samples during the validation process. Participants completed multiple assessments to determine the validity of the measure and were interviewed for BDD. 
The 16-item scale has internal consistency (α = 0.93), a single factor solution, convergent validity, and test re-test reliability (r = 0.90). Data demonstrate a relationship between psychological inflexibility and body image disturbance indicating empirical support for an ACT conceptualization of body image problems and the use of this measure for body image disturbance and BDD.", "title": "" }, { "docid": "f3d934a354b44c79dfafb6bbb79b7f7c", "text": "The large number of rear end collisions due to driver inattention has been identified as a major automotive safety issue. Even a short advance warning can significantly reduce the number and severity of the collisions. This paper describes a vision based forward collision warning (FCW) system for highway safety. The algorithm described in this paper computes time to contact (TTC) and possible collision course directly from the size and position of the vehicles in the image - which are the natural measurements for a vision based system - without having to compute a 3D representation of the scene. The use of a single low cost image sensor results in an affordable system which is simple to install. The system has been implemented on real-time hardware and has been test driven on highways. Collision avoidance tests have also been performed on test tracks.", "title": "" }, { "docid": "c617b320a93e90e59cb4ed28025ce0cb", "text": "Recently, with the increased demand for low-power displays regarding portable devices, the RGBG PenTile display is popularly utilized. However, unlike the traditional RGB-stripe display with its three color channels, each pixel of the RGBG PenTile display comprises only two color channels, thereby causing the color leakage image distortion. To cope with this problem, most of the conventional methods employ a preprocessing filter for subpixel rendering; however, these filters cannot remove the color leakage completely, and they also result in the blurring artifacts. In this paper, a novel approach to obtain the preprocessing filter is presented. We formulate a filter design method as a minimum mean square error problem and derive an optimal preprocessing filter that is based on the human visual system (HVS) as follows; first, two perceived images indicating how human recognizes the images on the RGB and RGBG displays are generated and then the difference between the two perceived images is minimized to derive the optimal filter. In addition, in order to prevent the blurring artifact, the proposed filter is applied to the previously detected color-leakage region only. Experimental results demonstrate that the proposed method outperforms the conventional methods in terms of both the subjective and objective image quality.", "title": "" }, { "docid": "dd2e81d24584fe0684266217b732d881", "text": "In order to understand the role of titanium isopropoxide (TIPT) catalyst on insulation rejuvenation for water tree aged cables, dielectric properties and micro structure changes are investigated for the rejuvenated cables. Needle-shape defects are made inside cross-linked polyethylene (XLPE) cable samples to form water tree in the XLPE layer. The water tree aged samples are injected by the liquid with phenylmethyldimethoxy silane (PMDMS) catalyzed by TIPT for rejuvenation, and the breakdown voltage of the rejuvenated samples is significantly higher than that of the new samples. By the observation of scanning electronic microscope (SEM), the nano-TiO2 particles are observed inside the breakdown channels of the rejuvenated samples. 
Accordingly, the insulation performance of rejuvenated samples is significantly enhanced by the nano-TiO2 particles. Through analyzing the products of hydrolysis from TIPT, the nano-scale TiO2 particles are observed, and their micro-morphology is consistent with that observed inside the breakdown channels. According to the observation, the insulation enhancement mechanism is described. Therefore, the dielectric property of the rejuvenated cables is improved due to the nano-TiO2 produced by the hydrolysis from TIPT.", "title": "" } ]
scidocsrr
8bf6f0424abd8840820068131cc26a25
Uncovering Four Strategies to Approach Master Data Management
[ { "docid": "3105a48f0b8e45857e8d48e26b258e04", "text": "Dominated by the behavioral science approach for a long time, information systems research increasingly acknowledges design science as a complementary approach. While primarily information systems instantiations, but also constructs and models have been discussed quite comprehensively, the design of methods is addressed rarely. But methods appear to be of utmost importance particularly for organizational engineering. This paper justifies method construction as a core approach to organizational engineering. Based on a discussion of fundamental scientific positions in general and approaches to information systems research in particular, appropriate conceptualizations of 'method' and 'method construction' are presented. These conceptualizations are then discussed regarding their capability of supporting organizational engineering. Our analysis is located on a meta level: Method construction is conceptualized and integrated from a large number of references. Method instantiations or method engineering approaches however are only referenced and not described in detail.", "title": "" } ]
[ { "docid": "a89c0a16d161ef41603583567f85a118", "text": "360° Video services with resolutions of UHD and beyond for Virtual Reality head mounted displays are a challenging task due to limits of video decoders in constrained end devices. Adaptivity to the current user viewport is a promising approach but incurs significant encoding overhead when encoding per user or set of viewports. A more efficient way to achieve viewport adaptive streaming is to facilitate motion-constrained HEVC tiles. Original content resolution within the user viewport is preserved while content currently not presented to the user is delivered in lower resolution. A lightweight aggregation of varying resolution tiles into a single HEVC bitstream can be carried out on-the-fly and allows usage of a single decoder instance on the end device.", "title": "" }, { "docid": "ef6040561aaae594f825a6cabd4aa259", "text": "This study investigated the extent of young adults’ (N = 393; 17–30 years old) experience of cyberbullying, from the perspectives of cyberbullies and cyber-victims using an online questionnaire survey. The overall prevalence rate shows cyberbullying is still present after the schooling years. No significant gender differences were noted, however females outnumbered males as cyberbullies and cyber-victims. Overall no significant differences were noted for age, but younger participants were found to engage more in cyberbullying activities (i.e. victims and perpetrators) than the older participants. Significant differences were noted for Internet frequency with those spending 2–5 h online daily reported being more victimized and engage in cyberbullying than those who spend less than an hour daily. Internet frequency was also found to significantly predict cyber-victimization and cyberbullying, indicating that as the time spent on Internet increases, so does the chances to be bullied and to bully someone. Finally, a positive significant association was observed between cyber-victims and cyberbullies indicating that there is a tendency for cyber-victims to become cyberbullies, and vice versa. Overall it can be concluded that cyberbullying incidences are still taking place, even though they are not as rampant as observed among the younger users. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ad81d9367dd0ff4cdd7ac8dd1da69e06", "text": "Destruction of the midbrain dopamine (DA) system in Parkinsonian man and experimental animals leads to deficits in initiation of behavior, motor performance, and cognitive mechanisms. We have investigated the extracellular impulse activity of single midbrain DA neurons in unlesioned monkeys performing in a controlled behavioral task that was designed to paradigmatically test behavioral reactivity. Animals were trained to execute natural forelimb reaching movements for food reward in response to a trigger stimulus. Presumptive DA neurons were histologically located in the pars compacta of substantia nigra and in neighboring areas A8 and A10. They spontaneously discharged polyphasic impulses of relatively long duration (1.4-3.6 ms) and at low frequencies (0.5-8.5/s). Systemic injections of low doses of the DA autoreceptor agonist apomorphine (0.05-0.2 mg/kg) depressed the activity of virtually all thus tested DA neurons. In following established criteria, these characteristics strongly suggest the DAergic nature of the recorded neurons. 
The majority of midbrain DA neurons (70 of 128) responded to the behavioral trigger stimulus of the task with a short burst of impulses. Latencies ranged from 39 to 105 ms (median 65 ms) for onset and from 65 to 165 ms (median 95 ms) for peak of responses. Responses occurred before arm movement and at the time of or before onset of electromyographic (EMG) activity in prime mover muscles. Responses were time-locked to the stimulus and not to the onset of movement or EMG. Responses remained present in most neurons but were reduced when vision of the behavioral trigger stimulus was prevented while maintaining the associated acoustic signals. In another variation of the task, most neurons also responded to a stimulus that was physically identical to the behavioral trigger but to which the animal made no movement. The activity of a few DA neurons (11 of 128) was reduced following presentation of the behavioral trigger stimulus, with latencies comparable to those of activations. The activity of many DA neurons was increased (40 of 128) or reduced (22 of 128) during execution of the forelimb reaching movement. These changes were of a slow and moderate nature, and were minor compared with responses to the behavioral trigger stimulus. About half of movement-related neurons also responded to the behavioral trigger. The activity of a few DA neurons was increased (11 to 128) or reduced (1 to 128) when the food reward reached the mouth. These changes did not occur with spontaneous mouth movements. About half of these neurons also responded to the behavioral trigger.(ABSTRACT TRUNCATED AT 400 WORDS)", "title": "" }, { "docid": "b6f9d5015fddbf92ab44ae6ce2f7d613", "text": "Emojis are small images that are commonly included in social media text messages. The combination of visual and textual content in the same message builds up a modern way of communication, that automatic systems are not used to deal with. In this paper we extend recent advances in emoji prediction by putting forward a multimodal approach that is able to predict emojis in Instagram posts. Instagram posts are composed of pictures together with texts which sometimes include emojis. We show that these emojis can be predicted by using the text, but also using the picture. Our main finding is that incorporating the two synergistic modalities, in a combined model, improves accuracy in an emoji prediction task. This result demonstrates that these two modalities (text and images) encode different information on the use of emojis and therefore can complement each other.", "title": "" }, { "docid": "af0097bec55577049b08f2bc9e65dd4d", "text": "The recent surge in using social media has created a massive amount of unstructured textual complaints about products and services. However, discovering and quantifying potential product defects from large amounts of unstructured text is a nontrivial task. In this paper, we develop a probabilistic defect model (PDM) that identifies the most critical product issues and corresponding product attributes, simultaneously. We facilitate domain-oriented key attributes (e.g., product model, year of production, defective components, symptoms, etc.) of a product to identify and acquire integral information of defect. We conduct comprehensive evaluations including quantitative evaluations and qualitative evaluations to ensure the quality of discovered information. 
Experimental results demonstrate that our proposed model outperforms existing unsupervised method (K-Means Clustering), and could find more valuable information. Our research has significant managerial implications for mangers, manufacturers, and policy makers. [Category: Data and Text Mining]", "title": "" }, { "docid": "28cbdb82603c720efba6880034344b94", "text": "An experiment is reported which tests Fazey & Hardy's (1988) catastrophe model of anxiety and performance. Eight experienced basketball players were required to perform a set shooting task, under conditions of high and low cognitive anxiety. On each of these occasions, physiological arousal was manipulated by means of physical work in such a way that subjects were tested with physiological arousal increasing and decreasing. Curve-fitting procedures followed by non-parametric tests of significance confirmed (p less than .002) Fazey & Hardy's hysteresis hypothesis: namely, that the polynomial curves for the increasing vs. decreasing arousal conditions would be horizontally displaced relative to each other in the high cognitive anxiety condition, but superimposed on top of one another in the low cognitive anxiety condition. Other non-parametric procedures showed that subjects' maximum performances were higher, their minimum performances lower, and their critical decrements in performance greater in the high cognitive anxiety condition than in the low cognitive anxiety condition. These results were taken as strong support for Fazey & Hardy's catastrophe model of anxiety and performance. The implications of the model for current theorizing on the anxiety-performance relationship are also discussed.", "title": "" }, { "docid": "aa55e655c7fa8c86d189d03c01d5db87", "text": "Best practice reference models like COBIT, ITIL, and CMMI offer methodical support for the various tasks of IT management and IT governance. Observations reveal that the ways of using these models as well as the motivations and further aspects of their application differ significantly. Rather the models are used in individual ways due to individual interpretations. From an academic point of view we can state, that how these models are actually used as well as the motivations using them is not well understood. We develop a framework in order to structure different dimensions and modes of reference model application in practice. The development is based on expert interviews and a literature review. Hence we use design oriented and qualitative research methods to develop an artifact, a ‘framework of reference model application’. This framework development is the first step in a larger research program which combines different methods of research. The first goal is to deepen insight and improve understanding. In future research, the framework will be used to survey and analyze reference model application. The authors assume that “typical” application patterns exist beyond individual dimensions of application. The framework developed provides an opportunity of a systematically collection of data thereon. Furthermore, the so far limited knowledge of reference model application complicates their implementation as well as their use. Thus, detailed knowledge of different application patterns is required for effective support of enterprises using reference models. 
We assume that the deeper understanding of different patterns will support method development for implementation and use.", "title": "" }, { "docid": "2dd4a6736fcbd3bbb5b126f3ffcdda10", "text": "Recent research leverages results from the continuous-armed bandit literature to create a reinforcement-learning algorithm for continuous state and action spaces. Initially proposed in a theoretical setting, we provide the first examination of the empirical properties of the algorithm. Through experimentation, we demonstrate the effectiveness of this planning method when coupled with exploration and model learning and show that, in addition to its formal guarantees, the approach is very competitive with other continuous-action reinforcement", "title": "" }, { "docid": "5e24b62458331cf88e9e606ae0b381b6", "text": "People are often aware of their mistakes, and report levels of confidence in their choices that correlate with objective performance. These metacognitive assessments of decision quality are important for the guidance of behavior, particularly when external feedback is absent or sporadic. However, a computational framework that accounts for both confidence and error detection is lacking. In addition, accounts of dissociations between performance and metacognition have often relied on ad hoc assumptions, precluding a unified account of intact and impaired self-evaluation. Here we present a general Bayesian framework in which self-evaluation is cast as a \"second-order\" inference on a coupled but distinct decision system, computationally equivalent to inferring the performance of another actor. Second-order computation may ensue whenever there is a separation between internal states supporting decisions and confidence estimates over space and/or time. We contrast second-order computation against simpler first-order models in which the same internal state supports both decisions and confidence estimates. Through simulations we show that second-order computation provides a unified account of different types of self-evaluation often considered in separate literatures, such as confidence and error detection, and generates novel predictions about the contribution of one's own actions to metacognitive judgments. In addition, the model provides insight into why subjects' metacognition may sometimes be better or worse than task performance. We suggest that second-order computation may underpin self-evaluative judgments across a range of domains. (PsycINFO Database Record", "title": "" }, { "docid": "e48d917c045ce3825e7b2c0fafb96701", "text": "Multivariate pattern analysis (MVPA) has become an important tool for identifying brain representations of psychological processes and clinical outcomes using fMRI and related methods. Such methods can be used to predict or 'decode' psychological states in individual subjects. Single-subject MVPA approaches, however, are limited by the amount and quality of individual-subject data. In spite of higher spatial resolution, predictive accuracy from single-subject data often does not exceed what can be accomplished using coarser, group-level maps, because single-subject patterns are trained on limited amounts of often-noisy data. Here, we present a method that combines population-level priors, in the form of biomarker patterns developed on prior samples, with single-subject MVPA maps to improve single-subject prediction. 
Theoretical results and simulations motivate a weighting based on the relative variances of biomarker-based prediction-based on population-level predictive maps from prior groups-and individual-subject, cross-validated prediction. Empirical results predicting pain using brain activity on a trial-by-trial basis (single-trial prediction) across 6 studies (N=180 participants) confirm the theoretical predictions. Regularization based on a population-level biomarker-in this case, the Neurologic Pain Signature (NPS)-improved single-subject prediction accuracy compared with idiographic maps based on the individuals' data alone. The regularization scheme that we propose, which we term group-regularized individual prediction (GRIP), can be applied broadly to within-person MVPA-based prediction. We also show how GRIP can be used to evaluate data quality and provide benchmarks for the appropriateness of population-level maps like the NPS for a given individual or study.", "title": "" }, { "docid": "cec9f586803ffc8dc5868f6950967a1f", "text": "This report aims to summarize the field of technological forecasting (TF), its techniques and applications by considering the following questions: • What are the purposes of TF? • Which techniques are used for TF? • What are the strengths and weaknesses of these techniques / how do we evaluate their quality? • Do we need different TF techniques for different purposes/technologies? We also present a brief analysis of how TF is used in practice. We analyze how corporate decisions, such as investing millions of dollars to a new technology like solar energy, are being made and we explore if funding allocation decisions are based on “objective, repeatable, and quantifiable” decision parameters. Throughout the analysis, we compare the bibliometric and semantic-enabled approach of the MIT/MIST Collaborative research project “Technological Forecasting using Data Mining and Semantics” (TFDMS) with the existing studies / practices of TF and where TFDMS fits in and how it will contribute to the general TF field.", "title": "" }, { "docid": "d46e9d196efd25c7f0bd8dc35f4c9d6d", "text": "Cyber-physical systems (CPSs) are deemed as the key enablers of next generation applications. Needless to say, the design, verification and validation of cyber-physical systems reaches unprecedented levels of complexity, specially due to their sensibility to safety issues. Under this perspective, leveraging architectural descriptions to reason on a CPS seems to be the obvious way to manage its inherent complexity.\n A body of knowledge on architecting CPSs has been proposed in the past years. Still, the trends of research on architecting CPS is unclear. In order to shade some light on the state-of-the art in architecting CPS, this paper presents a preliminary study on the challenges, goals, and solutions reported so far in architecting CPSs.", "title": "" }, { "docid": "fadabf5ba39d455ca59cc9dc0b37f79b", "text": "We propose a speech enhancement algorithm based on single- and multi-microphone processing techniques. The core of the algorithm estimates a time-frequency mask which represents the target speech and use masking-based beamforming to enhance corrupted speech. Specifically, in single-microphone processing, the received signals of a microphone array are treated as individual signals and we estimate a mask for the signal of each microphone using a deep neural network (DNN). 
With these masks, in multi-microphone processing, we calculate a spatial covariance matrix of noise and steering vector for beamforming. In addition, we propose a masking-based post-filter to further suppress the noise in the output of beamforming. Then, the enhanced speech is sent back to DNN for mask re-estimation. When these steps are iterated for a few times, we obtain the final enhanced speech. The proposed algorithm is evaluated as a frontend for automatic speech recognition (ASR) and achieves a 5.05% average word error rate (WER) on the real environment test set of CHiME-3, outperforming the current best algorithm by 13.34%.", "title": "" }, { "docid": "16fc6497979fd2a3cde2f133792be32e", "text": "Craniofacial duplication (diprosopus) is a rare form of conjoined twins. A case of monocephalus diprosopus with anencephaly, cervicothoracolumbar rachischisis, and duplication of the respiratory tract and upper gastrointestinal tract is reported. The cardiovascular system remained single but the heart showed transposition of the great vessels. We present this case due to its rarity, and compare our pathologic findings with those already reported.", "title": "" }, { "docid": "3ba011d181a4644c8667b139c63f50ff", "text": "Recent studies have suggested that positron emission tomography (PET) imaging with 68Ga-labelled DOTA-somatostatin analogues (SST) like octreotide and octreotate is useful in diagnosing neuroendocrine tumours (NETs) and has superior value over both CT and planar and single photon emission computed tomography (SPECT) somatostatin receptor scintigraphy (SRS). The aim of the present study was to evaluate the role of 68Ga-DOTA-1-NaI3-octreotide (68Ga-DOTANOC) in patients with SST receptor-expressing tumours and to compare the results of 68Ga-DOTA-D-Phe1-Tyr3-octreotate (68Ga-DOTATATE) in the same patient population. Twenty SRS were included in the study. Patients’ age (n = 20) ranged from 25 to 75 years (mean 55.4 ± 12.7 years). There were eight patients with well-differentiated neuroendocrine tumour (WDNET) grade1, eight patients with WDNET grade 2, one patient with poorly differentiated neuroendocrine carcinoma (PDNEC) grade 3 and one patient with mixed adenoneuroendocrine tumour (MANEC). All patients had two consecutive PET studies with 68Ga-DOTATATE and 68Ga-DOTANOC. All images were evaluated visually and maximum standardized uptake values (SUVmax) were also calculated for quantitative evaluation. On visual evaluation both tracers produced equally excellent image quality and similar body distribution. The physiological uptake sites of pituitary and salivary glands showed higher uptake in 68Ga-DOTATATE images. Liver and spleen uptake values were evaluated as equal. Both 68Ga-DOTATATE and 68Ga-DOTANOC were negative in 6 (30 %) patients and positive in 14 (70 %) patients. In 68Ga-DOTANOC images only 116 of 130 (89 %) lesions could be defined and 14 lesions were missed because of lack of any uptake. SUVmax values of lesions were significantly higher on 68Ga-DOTATATE images. Our study demonstrated that the images obtained by 68Ga-DOTATATE and 68Ga-DOTANOC have comparable diagnostic accuracy. However, 68Ga-DOTATATE seems to have a higher lesion uptake and may have a potential advantage.", "title": "" }, { "docid": "7ea38ebf37a54aa676d7edad07d43fa5", "text": "Quadruped robot has many advantages over wheeled mobile robots, but it is a discrete system in which joints of each leg has to operate in particular fashion to get the desired locomotion. 
So, dynamics plays an important role in the operation and control of quadruped robot. Proper conception of the dynamic formulation is must for the complex system like quadruped robot. Here, an attempt is made to generate three dimensional model of a quadruped using the bond graph technique. Bond graph is an efficient tool for system modeling from the physical model itself and various control strategies can be developed very efficiently. A quadruped robot configuration used for analysis is two links legged robot in which upper link is rigid and lower link is compliant. In a lower link, piston and piston rod is sliding inside the cylinder and movement is restricted by the internal hydraulic pressure of the cylinder which will generate compliance in the leg. Simulation of the various gait performed by the quadruped is carried out, which proves the versatility of the three dimensional model generated. The generated model can be used in research of various aspects pertaining to quadruped, the same thing is demonstrated by doing performance evaluation between compliant and rigid legged robot and also between two different gaits like trot gait and amble gait. Keywords—Quadruped robot, Bond graph, Dynamic model", "title": "" }, { "docid": "6c8c21e7cc5a9cc88fa558d7917a81b2", "text": "Recent engineering experiences with the Missile Defense Agency (MDA) Ballistic Missile Defense System (BMDS) highlight the need to analyze the BMDS System of Systems (SoS) including the numerous potential interactions between independently developed elements of the system. The term “interstitials” is used to define the domain of interfaces, interoperability, and integration between constituent systems in an SoS. The authors feel that this domain, at an SoS level, has received insufficient attention within systems engineering literature. The BMDS represents a challenging SoS case study as many of its initial elements were assembled from existing programs of record. The elements tend to perform as designed but their performance measures may not be consistent with the higher level SoS requirements. One of the BMDS challenges is interoperability, to focus the independent elements to interact in a number of ways, either subtle or overt, for a predictable and sustainable national capability. New capabilities desired by national leadership may involve modifications to kill chains, Command and Control (C2) constructs, improved coordination, and performance. These capabilities must be realized through modifications to programs of record and integration across elements of the system that have their own independent programmatic momentum. A challenge of SoS Engineering is to objectively evaluate competing solutions and assess the technical viability of tradeoff options. This paper will present a multifaceted technical approach for integrating a complex, adaptive SoS to achieve a functional capability. Architectural frameworks will be explored, a mathematical technique utilizing graph theory will be introduced, adjuncts to more traditional modeling and simulation techniques such as agent based modeling will be explored, and, finally, newly developed technical and managerial metrics to describe design maturity will be introduced. A theater BMDS construct will be used as a representative set of elements together with the *Author to whom all correspondence should be addressed (e-mail: DLGR_NSWC_G25@navy.mil; DLGR_NSWC_K@Navy.mil; DLGR_NSWC_W@navy.mil; DLGR_NSWC_W@Navy.mil). 
†Commanding Officer, 6149 Welsh Road, Suite 203, Dahlgren, VA 22448-5130", "title": "" }, { "docid": "5e5ffa7890dd2e16cff9dbc9592f162e", "text": "Spin-transfer torque magnetic memory (STT-MRAM) is currently under intense academic and industrial development, since it features non-volatility, high write and read speed and high endurance. In this work, we show that when used in a non-conventional regime, it can additionally act as a stochastic memristive device, appropriate to implement a “synaptic” function. We introduce basic concepts relating to spin-transfer torque magnetic tunnel junction (STT-MTJ, the STT-MRAM cell) behavior and its possible use to implement learning-capable synapses. Three programming regimes (low, intermediate and high current) are identified and compared. System-level simulations on a task of vehicle counting highlight the potential of the technology for learning systems. Monte Carlo simulations show its robustness to device variations. The simulations also allow comparing system operation when the different programming regimes of STT-MTJs are used. In comparison to the high and low current regimes, the intermediate current regime allows minimization of energy consumption, while retaining a high robustness to device variations. These results open the way for unexplored applications of STT-MTJs in robust, low power, cognitive-type systems.", "title": "" }, { "docid": "1c78424b85b5ffd29e04e34639548bc8", "text": "Datasets in the LOD cloud are far from being static in their nature and how they are exposed. As resources are added and new links are set, applications consuming the data should be able to deal with these changes. In this paper we investigate how LOD datasets change and what sensible measures there are to accommodate dataset dynamics. We compare our findings with traditional, document-centric studies concerning the “freshness” of the document collections and propose metrics for LOD datasets.", "title": "" }, { "docid": "9736331d674470adbe534503ef452cca", "text": "In this paper we present our system for human-in-theloop video object segmentation. The backbone of our system is a method for one-shot video object segmentation [3]. While fast, this method requires an accurate pixel-level segmentation of one (or several) frames as input. As manually annotating such a segmentation is impractical, we propose a deep interactive image segmentation method, that can accurately segment objects with only a handful of clicks. On the GrabCut dataset, our method obtains 90% IOU with just 3.8 clicks on average, setting the new state of the art. Furthermore, as our method iteratively refines an initial segmentation, it can effectively correct frames where the video object segmentation fails, thus allowing users to quickly obtain high quality results even on challenging sequences. Finally, we investigate usage patterns and give insights in how many steps users take to annotate frames, what kind of corrections they provide, etc., thus giving important insights for further improving interactive video segmentation.", "title": "" } ]
scidocsrr
bf68ffd14a35ef6ba78621f8f6d93fb8
Packing Steiner trees
[ { "docid": "48842e5bf95700acf2b1bb18771aeb00", "text": "We present a simple and natural greedy algorithm for the metric uncapacitated facility location problem achieving an approximation guarantee of 1.61. We use this algorithm to find better approximation algorithms for the capacitated facility location problem with soft capacities and for a common generalization of the k-median and facility location problems. We also prove a lower bound of 1+2/e on the approximability of the k-median problem. At the end, we present a discussion about the techniques we have used in the analysis of our algorithm, including a computer-aided method for proving bounds on the approximation factor.", "title": "" } ]
[ { "docid": "9c0d65ee42ccfaa291b576568bad59e0", "text": "BACKGROUND\nThe WHO International Classification of Diseases, 11th version (ICD-11), has proposed two related diagnoses following exposure to traumatic events; Posttraumatic Stress Disorder (PTSD) and Complex PTSD (CPTSD). We set out to explore whether the newly developed ICD-11 Trauma Questionnaire (ICD-TQ) can distinguish between classes of individuals according to the PTSD and CPTSD symptom profiles as per ICD-11 proposals based on latent class analysis. We also hypothesized that the CPTSD class would report more frequent and a greater number of different types of childhood trauma as well as higher levels of functional impairment. Methods Participants in this study were a sample of individuals who were referred for psychological therapy to a National Health Service (NHS) trauma centre in Scotland (N=193). Participants completed the ICD-TQ as well as measures of life events and functioning.\n\n\nRESULTS\nOverall, results indicate that using the newly developed ICD-TQ, two subgroups of treatment-seeking individuals could be empirically distinguished based on different patterns of symptom endorsement; a small group high in PTSD symptoms only and a larger group high in CPTSD symptoms. In addition, CPTSD was more strongly associated with more frequent and a greater accumulation of different types of childhood traumatic experiences and poorer functional impairment.\n\n\nLIMITATIONS\nSample predominantly consisted of people who had experienced childhood psychological trauma or been multiply traumatised in childhood and adulthood.\n\n\nCONCLUSIONS\nCPTSD is highly prevalent in treatment seeking populations who have been multiply traumatised in childhood and adulthood and appropriate interventions should now be developed to aid recovery from this debilitating condition.", "title": "" }, { "docid": "99b25b7187aa4e3ea85a6ce60173c7f8", "text": "Modern advanced analytics applications make use of machine learning techniques and contain multiple steps of domain-specific and general-purpose processing with high resource requirements. We present KeystoneML, a system that captures and optimizes the end-to-end large-scale machine learning applications for high-throughput training in a distributed environment with a high-level API. This approach offers increased ease of use and higher performance over existing systems for large scale learning. We demonstrate the effectiveness of KeystoneML in achieving high quality statistical accuracy and scalable training using real world datasets in several domains.", "title": "" }, { "docid": "fc3beed303b26fb7f58327b34153751d", "text": "Ingestion of ethylene glycol may be an important contributor in patients with metabolic acidosis of unknown cause and subsequent renal failure. Expeditious diagnosis and treatment will limit metabolic toxicity and decrease morbidity and mortality. Ethylene glycol poisoning should be suspected in an intoxicated patient with anion gap acidosis, hypocalcemia, urinary crystals, and nontoxic blood alcohol concentration. Fomepizole is a newer agent with a specific indication for the treatment of ethylene glycol poisoning. Metabolic acidosis is resolved within three hours of initiating therapy. Initiation of fomepizole therapy before the serum creatinine concentration rises can minimize renal impairment. 
Compared with traditional ethanol treatment, advantages of fomepizole include lack of depression of the central nervous system and hypoglycemia, and easier maintenance of effective plasma levels.", "title": "" }, { "docid": "59daeea2c602a1b1d64bae95185f9505", "text": "Traumatic brain injury (TBI) triggers endoplasmic reticulum (ER) stress and impairs autophagic clearance of damaged organelles and toxic macromolecules. In this study, we investigated the effects of the post-TBI administration of docosahexaenoic acid (DHA) on improving hippocampal autophagy flux and cognitive functions of rats. TBI was induced by cortical contusion injury in Sprague–Dawley rats, which received DHA (16 mg/kg in DMSO, intraperitoneal administration) or vehicle DMSO (1 ml/kg) with an initial dose within 15 min after the injury, followed by a daily dose for 3 or 7 days. First, RT-qPCR reveals that TBI induced a significant elevation in expression of autophagy-related genes in the hippocampus, including SQSTM1/p62 (sequestosome 1), lysosomal-associated membrane proteins 1 and 2 (Lamp1 and Lamp2), and cathepsin D (Ctsd). Upregulation of the corresponding autophagy-related proteins was detected by immunoblotting and immunostaining. In contrast, the DHA-treated rats did not exhibit the TBI-induced autophagy biogenesis and showed restored CTSD protein expression and activity. T2-weighted images and diffusion tensor imaging (DTI) of ex vivo brains showed that DHA reduced both gray matter and white matter damages in cortical and hippocampal tissues. DHA-treated animals performed better than the vehicle control group on the Morris water maze test. Taken together, these findings suggest that TBI triggers sustained stimulation of autophagy biogenesis, autophagy flux, and lysosomal functions in the hippocampus. Swift post-injury DHA administration restores hippocampal lysosomal biogenesis and function, demonstrating its therapeutic potential.", "title": "" }, { "docid": "d6477bab69274263bc208d19d9ec3ec2", "text": "Software APIs often contain too many methods and parameters for developers to memorize or navigate effectively. Instead, developers resort to finding answers through online search engines and systems such as Stack Overflow. However, the process of finding and integrating a working solution is often very time-consuming. Though code search engines have increased in quality, there remain significant language- and workflow-gaps in meeting end-user needs. Novice and intermediate programmers often lack the language to query, and the expertise in transferring found code to their task. To address this problem, we present CodeMend, a system to support finding and integration of code. CodeMend leverages a neural embedding model to jointly model natural language and code as mined from large Web and code datasets. We also demonstrate a novel, mixed-initiative, interface to support query and integration steps. Through CodeMend, end-users describe their goal in natural language. The system makes salient the relevant API functions, the lines in the end-user's program that should be changed, as well as proposing the actual change. 
We demonstrate the utility and accuracy of CodeMend through lab and simulation studies.", "title": "" }, { "docid": "a4e1f420dfc3b1b30a58ec3e60288761", "text": "Despite recent advances in uncovering the quantitative features of stationary human activity patterns, many applications, from pandemic prediction to emergency response, require an understanding of how these patterns change when the population encounters unfamiliar conditions. To explore societal response to external perturbations we identified real-time changes in communication and mobility patterns in the vicinity of eight emergencies, such as bomb attacks and earthquakes, comparing these with eight non-emergencies, like concerts and sporting events. We find that communication spikes accompanying emergencies are both spatially and temporally localized, but information about emergencies spreads globally, resulting in communication avalanches that engage in a significant manner the social network of eyewitnesses. These results offer a quantitative view of behavioral changes in human activity under extreme conditions, with potential long-term impact on emergency detection and response.", "title": "" }, { "docid": "bd47faa5acc45c9dca97ad1b5de09de6", "text": "We present a differentiable framework capable of learning a wide variety of compositions of simple policies that we call skills. By recursively composing skills with themselves, we can create hierarchies that display complex behavior. Skill networks are trained to generate skill-state embeddings that are provided as inputs to a trainable composition function, which in turn outputs a policy for the overall task. Our experiments on an environment consisting of multiple collect and evade tasks show that this architecture is able to quickly build complex skills from simpler ones. Furthermore, the learned composition function displays some transfer to unseen combinations of skills, allowing for zero-shot generalizations.", "title": "" }, { "docid": "78007b3276e795d76b692b40c4808c51", "text": "The construct of trait emotional intelligence (trait EI or trait emotional self-efficacy) provides a comprehensive operationalization of emotion-related self-perceptions and dispositions. In the first part of the present study (N=274, 92 males), we performed two joint factor analyses to determine the location of trait EI in Eysenckian and Big Five factor space. The results showed that trait EI is a compound personality construct located at the lower levels of the two taxonomies. In the second part of the study, we performed six two-step hierarchical regressions to investigate the incremental validity of trait EI in predicting, over and above the Giant Three and Big Five personality dimensions, six distinct criteria (life satisfaction, rumination, two adaptive and two maladaptive coping styles). Trait EI incrementally predicted four criteria over the Giant Three and five criteria over the Big Five. The discussion addresses common questions about the operationalization of emotional intelligence as a personality trait.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. 
JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "b49e8f14c2c592e8abfed0e64f66bb5e", "text": "Loan portfolio problems have historically been the major cause of bank losses because of inherent risk of possible loan losses (credit risk). The study of Bank Loan Fraud Detection and IT-Based Combat Strategies in Nigeria which focused on analyzing the loan assessment system was carried out purposely to overcome the challenges of high incidence of NonPerforming Loan (NPL) that are currently being experienced as a result of lack of good decision making mechanisms in disbursing loans. NPL has led to failures of some banks in the past, contributed to shareholders losing their investment in the banks and inaccessibility of bank loans to the public. Information Technology (IT) is a critical component in creating value in banking industries. It provides decision makers with an efficient means to store, calculate, and report information about risk, profitability, collateral analysis, and precedent conditions for loan. This results in a quicker response for client and efficient JIBC August 2011, Vol. 16, No.2 2 identification of appropriate risk controls to enable the financial institution realize a profit. In this paper we discussed the values of various applications of information technology in mitigating the problems of loan fraud in Nigeria financial Institutions.", "title": "" }, { "docid": "bd3620816c83fae9b4a5c871927f2b73", "text": "Quantifying behavior is crucial for many applications in neuroscience. Videography provides easy methods for the observation and recording of animal behavior in diverse settings, yet extracting particular aspects of a behavior for further analysis can be highly time consuming. In motor control studies, humans or other animals are often marked with reflective markers to assist with computer-based tracking, but markers are intrusive, and the number and location of the markers must be determined a priori. Here we present an efficient method for markerless pose estimation based on transfer learning with deep neural networks that achieves excellent results with minimal training data. We demonstrate the versatility of this framework by tracking various body parts in multiple species across a broad collection of behaviors. Remarkably, even when only a small number of frames are labeled (~200), the algorithm achieves excellent tracking performance on test frames that is comparable to human accuracy. Using a deep learning approach to track user-defined body parts during various behaviors across multiple species, the authors show that their toolbox, called DeepLabCut, can achieve human accuracy with only a few hundred frames of training data.", "title": "" }, { "docid": "bb0be0730200ae47d9b87d3c6a915008", "text": "Human ESC-derived mesenchymal stem cell (MSC)-conditioned medium (CM) was previously shown to mediate cardioprotection during myocardial ischemia/reperfusion injury through large complexes of 50-100 nm. Here we show that these MSCs secreted 50- to 100-nm particles. These particles could be visualized by electron microscopy and were shown to be phospholipid vesicles consisting of cholesterol, sphingomyelin, and phosphatidylcholine. 
They contained coimmunoprecipitating exosome-associated proteins, e.g., CD81, CD9, and Alix. These particles were purified as a homogeneous population of particles with a hydrodynamic radius of 55-65 nm by size-exclusion fractionation on a HPLC. Together these observations indicated that these particles are exosomes. These purified exosomes reduced infarct size in a mouse model of myocardial ischemia/reperfusion injury. Therefore, MSC mediated its cardioprotective paracrine effect by secreting exosomes. This novel role of exosomes highlights a new perspective into intercellular mediation of tissue injury and repair, and engenders novel approaches to the development of biologics for tissue repair.", "title": "" }, { "docid": "fc0470776583df8b25114abc8709b045", "text": "Certified Registered Nurse Anesthetists (CRNAs) have been providing anesthesia care in the United States (US) for nearly 150 years. Historically, anesthesia care for surgical patients was mainly provided by trained nurses under the supervision of surgeons until the establishment of anesthesiology as a medical specialty in the US. Currently, all 50 US states utilize CRNAs to perform various kinds of anesthesia care, either under the medical supervision of anesthesiologists in most states, or independently without medical supervision in 16 states; the latter has become an on-going source of conflict between anesthesiologists and CRNAs. Understanding the history and current conditions of anesthesia practice in the US is crucial for countries in which the shortage of anesthesia care providers has become a national issue.", "title": "" }, { "docid": "47d7ba349d6b1d2f1024e8eed003b40b", "text": "Although motion blur and rolling shutter deformations are closely coupled artifacts in images taken with CMOS image sensors, the two phenomena have so far mostly been treated separately, with deblurring algorithms being unable to handle rolling shutter wobble, and rolling shutter algorithms being incapable of dealing with motion blur. We propose an approach that delivers sharp and undistorted output given a single rolling shutter motion blurred image. The key to achieving this is a global modeling of the camera motion trajectory, which enables each scanline of the image to be deblurred with the corresponding motion segment. We show the results of the proposed framework through experiments on synthetic and real data.", "title": "" }, { "docid": "a0172830d69b0a386aa291235e5837a0", "text": "There is an increasing need to bring machine learning to a wide diversity of hardware devices. Current frameworks rely on vendor-specific operator libraries and optimize for a narrow range of server-class GPUs. Deploying workloads to new platforms – such as mobile phones, embedded devices, and accelerators (e.g., FPGAs, ASICs) – requires significant manual effort. We propose TVM, a compiler that exposes graph-level and operator-level optimizations to provide performance portability to deep learning workloads across diverse hardware back-ends. TVM solves optimization challenges specific to deep learning, such as high-level operator fusion, mapping to arbitrary hardware primitives, and memory latency hiding. It also automates optimization of low-level programs to hardware characteristics by employing a novel, learning-based cost modeling method for rapid exploration of code optimizations. 
Experimental results show that TVM delivers performance across hardware back-ends that are competitive with state-ofthe-art, hand-tuned libraries for low-power CPU, mobile GPU, and server-class GPUs. We also demonstrate TVM’s ability to target new accelerator back-ends, such as the FPGA-based generic deep learning accelerator. The system is open sourced and in production use inside several major companies.", "title": "" }, { "docid": "64e5cad1b64f1412b406adddc98cd421", "text": "We examine the influence of venture capital on patented inventions in the United States across twenty industries over three decades. We address concerns about causality in several ways, including exploiting a 1979 policy shift that spurred venture capital fundraising. We find that increases in venture capital activity in an industry are associated with significantly higher patenting rates. While the ratio of venture capital to R&D averaged less than 3% from 1983–1992, our estimates suggest that venture capital may have accounted for 8% of industrial innovations in that period.", "title": "" }, { "docid": "1abef5c69eab484db382cdc2a2a1a73f", "text": "Conventional methods of 3D object generative modeling learn volumetric predictions using deep networks with 3D convolutional operations, which are direct analogies to classical 2D ones. However, these methods are computationally wasteful in attempt to predict 3D shapes, where information is rich only on the surfaces. In this paper, we propose a novel 3D generative modeling framework to efficiently generate object shapes in the form of dense point clouds. We use 2D convolutional operations to predict the 3D structure from multiple viewpoints and jointly apply geometric reasoning with 2D projection optimization. We introduce the pseudo-renderer, a differentiable module to approximate the true rendering operation, to synthesize novel depth maps for optimization. Experimental results for single-image 3D object reconstruction tasks show that we outperforms state-of-the-art methods in terms of shape similarity and prediction density.", "title": "" }, { "docid": "9d110d71fe2d5801627cf7a530e8f769", "text": "The approach called Topological Functioning Modeling for Model Driven Architecture (TFM4MDA) uses Topological Functioning Model (TFM) as a formal problem domain model. TFM is used as a computation independent model (CIM) within Model Driven Architecture (MDA). Following the recommendations of MDA a CIM must be transformed to a platform independent model (PIM). The object of this research is the construction of a UML class diagram on PIM level in conformity with the TFM. Nowadays this transformation is executed manually. Manual creation of models is time-consuming; also a probability exists, that a user (e.g., system architect) will make a mistake during the execution. Time investment and risk of making mistakes increase costs and reduce efficiency of TFM4MDA approach. That is why automation of this process is useful. The paper presents an algorithm for the transformation. The algorithm is written in pseudocode and can be implemented as a tool, thus improving the TFM4MDA approach.", "title": "" }, { "docid": "b84f84961c655ea98920513bf3074241", "text": "This study took place in Sakarya Anatolian High School, Profession High School and Vocational High School for Industry (SAPHPHVHfI) where a flexible and nonroutine organising style was tried to be realized. 
The management style was initiated on a study group at first, but then it helped the group to come out as natural team spontaneously. The main purpose of the study is to make an evaluation on five teams within the school where team (based) management has been experienced in accordance with Belbin (1981)’s team roles theory [9]. The study group of the research consists of 28 people. The data was obtained from observations, interviews and the answers given to the questions in Belbin Team Roles Self Perception Inventory (BTRSPI). Some of the findings of the study are; (1) There was no paralellism between the team and functional roles of the members of the mentioned five team, (2) The team roles were distributed equaly balanced but it was also found that most of the roles were played by the members who were less inclined to play it, (3) The there were very few members who played plant role within the teams and there were nearly no one who were inclined to play leader role.", "title": "" }, { "docid": "8cb73e631ab6957bb9866ead9670441b", "text": "This paper explores a robust μ-synthesis control scheme for structural resonance vibration suppression of high-speed rotor systems supported by active magnetic bearings (AMBs) in the magnetically suspended double-gimbal control moment gyro (MSDGCMG). The derivation of a nominal linearized model about an operating point was presented. Sine sweep test was conducted on each component of AMB control system to obtain parameter variations and high-frequency unmodeled dynamics, including the structural resonance modes. A fictitious uncertainty block was introduced to represent the performance requirements for the augmented system. Finally, D-K iteration procedure was employed to solve the robust μ-controller. Rotor run-up experiments on the originally developed MSDGCMG prototype show that the designed μ-controller has a good performance for vibration rejection of structural resonance mode with the excitation of coupling torques. Further investigations indicate that the proposed method can also ensure the robust stability and performance of high-speed rotor system subject to the reaction of a large gyro torque.", "title": "" } ]
scidocsrr
4e2533d113e746a4f45e0ea9722afdec
Using Blockchain and smart contracts for secure data provenance management
[ { "docid": "779c0081af334a597f6ee6942d7e7240", "text": "We document our experiences in teaching smart contract programming to undergraduate students at the University of Maryland, the first pedagogical attempt of its kind. Since smart contracts deal directly with the movement of valuable currency units between contratual parties, security of a contract program is of paramount importance. Our lab exposed numerous common pitfalls in designing safe and secure smart contracts. We document several typical classes of mistakes students made, suggest ways to fix/avoid them, and advocate best practices for programming smart contracts. Finally, our pedagogical efforts have also resulted in online open course materials for programming smart contracts, which may be of independent interest to the community.", "title": "" }, { "docid": "f74c69da9a5ccbca363dbd79d3132ae9", "text": "Cloud data provenance is metadata that records the history of the creation and operations performed on a cloud data object. Secure data provenance is crucial for data accountability, forensics and privacy. In this paper, we propose a decentralized and trusted cloud data provenance architecture using blockchain technology. Blockchain-based data provenance can provide tamper-proof records, enable the transparency of data accountability in the cloud, and help to enhance the privacy and availability of the provenance data. We make use of the cloud storage scenario and choose the cloud file as a data unit to detect user operations for collecting provenance data. We design and implement ProvChain, an architecture to collect and verify cloud data provenance, by embedding the provenance data into blockchain transactions. ProvChain operates mainly in three phases: (1) provenance data collection, (2) provenance data storage, and (3) provenance data validation. Results from performance evaluation demonstrate that ProvChain provides security features including tamper-proof provenance, user privacy and reliability with low overhead for the cloud storage applications.", "title": "" } ]
[ { "docid": "dee76f07eb39e33e59608a2544215c0a", "text": "We ask, and answer, the question of what’s computable by Turing machines equipped with time travel into the past: that is, closed timelike curves or CTCs (with no bound on their size). We focus on a model for CTCs due to Deutsch, which imposes a probabilistic consistency condition to avoid grandfather paradoxes. Our main result is that computers with CTCs can solve exactly the problems that are Turing-reducible to the halting problem, and that this is true whether we consider classical or quantum computers. Previous work, by Aaronson and Watrous, studied CTC computers with a polynomial size restriction, and showed that they solve exactly the problems in PSPACE, again in both the classical and quantum cases. Compared to the complexity setting, the main novelty of the computability setting is that not all CTCs have fixed-points, even probabilistically. Despite this, we show that the CTCs that do have fixed-points suffice to solve the halting problem, by considering fixed-point distributions involving infinite geometric series. The tricky part is to show that even quantum computers with CTCs can be simulated using a Halt oracle. For that, we need the Riesz representation theorem from functional analysis, among other tools. We also study an alternative model of CTCs, due to Lloyd et al., which uses postselection to “simulate” a consistency condition, and which yields BPPpath in the classical case or PP in the quantum case when subject to a polynomial size restriction. With no size limit, we show that postselected CTCs yield only the computable languages if we impose a certain finiteness condition, or all languages nonadaptively reducible to the halting problem if we don’t.", "title": "" }, { "docid": "aada9722cb54130151657a84417d14a1", "text": "Classical theories of sensory processing view the brain as a passive, stimulus-driven device. By contrast, more recent approaches emphasize the constructive nature of perception, viewing it as an active and highly selective process. Indeed, there is ample evidence that the processing of stimuli is controlled by top–down influences that strongly shape the intrinsic dynamics of thalamocortical networks and constantly create predictions about forthcoming sensory events. We discuss recent experiments indicating that such predictions might be embodied in the temporal structure of both stimulus-evoked and ongoing activity, and that synchronous oscillations are particularly important in this process. Coherence among subthreshold membrane potential fluctuations could be exploited to express selective functional relationships during states of expectancy or attention, and these dynamic patterns could allow the grouping and selection of distributed neuronal responses for further processing.", "title": "" }, { "docid": "a4368fed8852c1b92a50e49b18b1c8a5", "text": "This paper reports on the analysis, design and characterization of a 30 GHz fully differential variable gain amplifier for ultra-wideband radar systems. The circuit consists of a variable gain differential stage, which is fed by two cascaded emitter followers. Capacitive degeneration and inductive peaking are used to enhance bandwidth. The maximum differential gain is 11.5 dB with plusmn1.5 dB gain flatness in the desired frequency range. The amplifier gain can be regulated from 0 dB up to 11.5 dB. The circuit exhibits an output 1 dB compression point of 12 dBm. The measured differential output voltage swing is 1.23 Vpp. 
The 0.75 mm2 broadband amplifier consumes 560 mW at a supply voltage of plusmn3.3 V. It is manufactured in a low-cost 0.25 mum SiGe BiCMOS technology with a cut-off frequency of 75 GHz. The experimental results agree very well with the simulated response. A figure of merit has been proposed for comparing the amplifier performance to previously reported works.", "title": "" }, { "docid": "bb8d59a0aabc0995f42bd153bfb8f67b", "text": "Abnormal release of Ca from sarcoplasmic reticulum (SR) via the cardiac ryanodine receptor (RyR2) may contribute to contractile dysfunction and arrhythmogenesis in heart failure (HF). We previously demonstrated decreased Ca transient amplitude and SR Ca load associated with increased Na/Ca exchanger expression and enhanced diastolic SR Ca leak in an arrhythmogenic rabbit model of nonischemic HF. Here we assessed expression and phosphorylation status of key Ca handling proteins and measured SR Ca leak in control and HF rabbit myocytes. With HF, expression of RyR2 and FK-506 binding protein 12.6 (FKBP12.6) were reduced, whereas inositol trisphosphate receptor (type 2) and Ca/calmodulin-dependent protein kinase II (CaMKII) expression were increased 50% to 100%. The RyR2 complex included more CaMKII (which was more activated) but less calmodulin, FKBP12.6, and phosphatases 1 and 2A. The RyR2 was more highly phosphorylated by both protein kinase A (PKA) and CaMKII. Total phospholamban phosphorylation was unaltered, although it was reduced at the PKA site and increased at the CaMKII site. SR Ca leak in intact HF myocytes (which is higher than in control) was reduced by inhibition of CaMKII but was unaltered by PKA inhibition. CaMKII inhibition also increased SR Ca content in HF myocytes. Our results suggest that CaMKII-dependent phosphorylation of RyR2 is involved in enhanced SR diastolic Ca leak and reduced SR Ca load in HF, and may thus contribute to arrhythmias and contractile dysfunction in HF.", "title": "" }, { "docid": "9dec25eadfc6835512487abb6ff061ba", "text": "We consider the problem of how to enable a live video streaming service to vehicles in motion. In such applications, the video source can be a typical video server or vehicles with appropriate capability, while the video receivers are vehicles that are driving on the road. An infrastructure-based approach relies on strategically deployed base stations and video servers to forward video data to nearby vehicles. While this approach can provide a streaming video service to certain vehicles, it suffers from high base station deployment and maintenance cost. In this paper, we propose V3, an architecture to provide a live video streaming service to driving vehicles through vehicle-to-vehicle (V2V) networks. We argue that this solution is practical with the advance of wireless ad-hoc network techniques. With ample engine power, powerful computing capability and considerable data storage that a vehicle can provide, it is reasonable to support data-intensive video streaming service. On the other hand, V2V video streaming can be challenging because: 1) the V2V network may be persistently partitioned, and 2) the video sources are mobile and transient. V3 addresses these challenges by incorporating a novel signaling mechanism to continuously trigger video sources to send video data back to receivers. It also adopts a store-carry-and-forward approach to transmit video data in a partitioned network environment. 
Several algorithms are proposed to balance the video transmission delay and bandwidth overheads. Simulation experiments demonstrate the feasibility of supporting vehicle-to-vehicle live video streaming with acceptable performance.", "title": "" }, { "docid": "4cd7f19d0413f9bab1a2cda5a5b7a9a4", "text": "Web-based learning plays a vital role in the modern education system, where different technologies are being emerged to enhance this E-learning process. Therefore virtual and online laboratories are gaining popularity due to its easy implementation and accessibility worldwide. These types of virtual labs are useful where the setup of the actual laboratory is complicated due to several factors such as high machinery or hardware cost. This paper presents a very efficient method of building a model using JavaScript Web Graphics Library with HTML5 enabled and having controllable features inbuilt. This type of program is free from any web browser plug-ins or application and also server independent. Proprietary software has always been a bottleneck in the development of such platforms. This approach rules out this issue and can be easily applied. Here the framework has been discussed and neatly elaborated with an example of a simplified robot configuration.", "title": "" }, { "docid": "8f9b22630f9bc0b86b8e51776d47de6e", "text": "HTTP is becoming the most preferred channel for command and control (C&C) communication of botnets. One of the main reasons is that it is very easy to hide the C&C traffic in the massive amount of browser generated Web traffic. However, detecting these HTTP-based C&C packets which constitute only a minuscule portion of the overall everyday HTTP traffic is a formidable task. In this paper, we present an anomaly detection based approach to detect HTTP-based C&C traffic using statistical features based on client generated HTTP request packets and DNS server generated response packets. We use three different unsupervised anomaly detection techniques to isolate suspicious communications that have a high probability of being part of a botnet's C&C communication. Results indicate that our method can achieve more than 90% detection rate while maintaining a reasonably low false positive rate.", "title": "" }, { "docid": "d34e4224d30a367e0254ad4ba09425a7", "text": "In this chapter, the intuitive link between balanced, healthy, and supportive psychosocial work environments and a variety of vitally important patient, nurse, and organizational outcomes is discussed with reference to a number of clearly defined and well-researched concepts. Among the essential concepts that ground the rest of the book is the notion of a bundle of factors that provide a context for nurses’ work and are known collectively as the practice environment. Landmark studies that focused specifically on nurses’ experiences of their work environments in exemplary hospitals examined so-called Magnet hospitals, leading to a framework that describes the practice environment and its linkage with professional wellbeing, occupational stress, and quality of practice and productivity. Many ideas and models have obvious connections to the notion of practice environment such as Job Demand–Control–Support model, worklife dimensions and burnout, concepts related to burnout such as compassion fatigue, and work engagement as a mirror image concept of burnout, as well as notions of empowerment and authentic leadership.
These concepts have been chosen for discussion here based on critical masses of evidence pointing to their usefulness in healthcare management and specifically in the management of nursing services. Together all of these concepts and supporting research and scholarship speak to a common point: intentional leadership approaches, grounded in a comprehensive understanding of nurses’ psychosocial experiences of their work, are essential to nurses’ abilities to respond to complex patients’ needs in rapidly changing healthcare contexts and socioeconomic conditions.", "title": "" }, { "docid": "51a1991c8dc3f09962bcaf0a997266eb", "text": "Modern prosthetists have a wide selection of prosthetic knees to fulfill many individual specifications. The names \"friction,\" \"safety,\" \"lock,\" \"hydraulic,\" etc. quickly recall particular classes of single axis knees. For these single axis knees, the name (friction, safety, etc.) simply states a unique feature which defines the major mechanical advantage of that class of knees. Polycentric knees, however, may present the prosthetist with confusion. This confusion results from the fact that the term \"polycentric\" does not define any specific function. Secondly, these knees require more than a simple knowledge of mechanics to fully understand their functions. This paper will examine one category of polycentric knees which are known as \"four bar linkages.\" Simple methods for evaluating these knees will be presented. These evaluating methods will enable the prosthetist to determine the major mechanical or cosmetic advantage of most four bar designs. The prosthetist will also learn comparative methods of evaluating the efficiency of a particular four bar design in attaining its specific mechanical or cosmetic goals. This skill is extremely important since each four bar design is unique in its operation. Specifically, each four bar knee has been designed to enhance individual characteristics such as safety, cosmesis, energy conservation and/or swing phase motion.", "title": "" }, { "docid": "2c92d42311f9708b7cb40f34551315e0", "text": "This work characterizes electromagnetic excitation forces in interior permanent-magnet (IPM) brushless direct current (BLDC) motors and investigates their effects on noise and vibration. First, the electromagnetic excitations are classified into three sources: 1) so-called cogging torque, for which we propose an efficient technique of computation that takes into account saturation effects as a function of rotor position; 2) ripples of mutual and reluctance torque, for which we develop an equation to characterize the combination of space harmonics of inductances and flux linkages related to permanent magnets and time harmonics of current; and 3) fluctuation of attractive forces in the radial direction between the stator and rotor, for which we analyze contributions of electric currents as well as permanent magnets by the finite-element method. Then, the paper reports on an experimental investigation of influences of structural dynamic characteristics such as natural frequencies and mode shapes, as well as electromagnetic excitation forces, on noise and vibration in an IPM motor used in washing machines.", "title": "" }, { "docid": "2671e14771d3834f64d838f57c55a3a5", "text": "The Internet of Things, or the IoT, is an emerging, disruptive technology that enables physical devices to communicate across disparate networks.
IP has been the de facto standard for seamless interconnectivity in the traditional Internet; and piggybacking on the success of IP, 6LoWPAN has been the first standardized technology to realize it for networks of resource-constrained devices. In the recent past Bluetooth Low Energy (BLE) a.k.a Bluetooth Smart a subset of the Bluetooth v4.0 and the latest v4.2 stack, has surfaced as an appealing alternative, with many competing advantages over available low-power communication technologies in the IoT space such as IEEE 802.15.4. However, BLE is a closed standard and lacks open hardware and firmware support, something that hinders innovation and development in this field. In this article, we aim to overcome some of the constraints in BLE’s core building blocks by making three contributions: first, we present the design of a new open hardware platform for BLE; second, we provide a Contiki O.S. port for the new platform; and third, we identify research challenges and opportunities in 6LoWPAN-connected Bluetooth Smart. We believe that the knowledge and insights will facilitate IoT innovations based on this promising technology. © 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "b48494c3b60bef946a6d668e158b0bb1", "text": "This biannual review covers the time period from January 2002 to January 2004 and is written in continuation of previous reviews (A1, A2). An electronic search in SciFinder and MedLine resulted in 532 hits. Since the number of citations in this review is limited, a stringent selection had to be made. Priority was given to fiberoptic sensors (FOS) for defined chemical, environmental, and biochemical significance and to new schemes and materials. The review does not include (a) FOS that obviously have been rediscovered; (b) FOS for nonchemical species such as temperature, current and voltage, stress, strain, and displacement, for structural integrity (e.g., of constructions), liquid level, and radiation; and (c) FOS for monitoring purely technical processes such as injection molding, extrusion, or oil drilling, even though these are important applications of optical fiber technology. Fiber optics serve analytical sciences in several ways. First, they enable optical spectroscopy to be performed on sites inaccessible to conventional spectroscopy, over large distances, or even on several spots along the fiber. Second, fiber optics, in being waveguides, enable less common methods of interrogation, in particular evanescent wave spectroscopy. Fibers are available now with transmissions over a wide spectral range. Major fields of applications are in medical and chemical analysis, molecular biotechnology, marine and environmental analysis, industrial production monitoring and bioprocess control, and the automotive industry. Note: In this article, sensing refers to a continuous process, while probing refers to single-shot testing. FOS are based on either direct or indirect (indicator-based) sensing schemes. In the first, the intrinsic optical properties of the analyte are measured, for example its refractive index, absorption, or emission. In the second, the color or fluorescence of an immobilized indicator, label, or any other optically detectable bioprobe is monitored. Active current areas of research include advanced methods of interrogation such as time-resolved or spatially resolved spectroscopy, evanescent wave and laser-assisted spectroscopy, surface plasmon resonance, and multidimensional data acquisition. 
In recent years, fiber bundles also have been employed for purposes of imaging, for biosensor arrays (along with encoding), or as arrays of nonspecific sensors whose individual signals may be processed via artificial neural networks. This review is divided into sections on books and reviews (A), specific sensors for gases and vapors (B), ions and salinity (C), miscellaneous inorganic and organic chemical species (D), and biosensors (E), followed by sections on application-oriented sensor types (F), new sensing schemes (G), and new sensor materials (H), respectively.", "title": "" }, { "docid": "8231e10912b42e0f8ac90392e6e0efbb", "text": "Zobrist Hashing: An Efficient Work Distribution Method for Parallel Best-First Search Yuu Jinnai, Alex Fukunaga VIS: Text and Vision Oral Presentations 1326 SentiCap: Generating Image Descriptions with Sentiments Alexander Patrick Mathews, Lexing Xie, Xuming He 1950 Reading Scene Text in Deep Convolutional Sequences Pan He, Weilin Huang, Yu Qiao, Chen Change Loy, Xiaoou Tang 1247 Creating Images by Learning Image Semantics Using Vector Space Models Derrall Heath, Dan Ventura Poster Spotlight Talks 655 Towards Domain Adaptive Vehicle Detection in Satellite Image by Supervised SuperResolution Transfer Liujuan Cao, Rongrong Ji, Cheng Wang, Jonathan Li 499 Transductive Zero-Shot Recognition via Shared Model Space Learning Yuchen Guo, Guiguang Ding, Xiaoming Jin, Jianmin Wang 1255 Exploiting View-Specific Appearance Similarities Across Classes for Zero-shot Pose Prediction: A Metric Learning Approach Alina Kuznetsova, Sung Ju Hwang, Bodo Rosenhahn, Leonid Sigal NLP: Topic Flow Oral Presentations 744 Topical Analysis of Interactions Between News and Social Media Ting Hua, Yue Ning, Feng Chen, Chang-Tien Lu, Naren Ramakrishnan 1561 Tracking Idea Flows between Social Groups Yangxin Zhong, Shixia Liu, Xiting Wang, Jiannan Xiao, Yangqiu Song 1201 Modeling Evolving Relationships Between Characters in Literary Novels Snigdha Chaturvedi, Shashank Srivastava, Hal Daume III, Chris Dyer Poster Spotlight Talks 405 Identifying Search", "title": "" }, { "docid": "741a897b87cc76d68f5400974eee6b32", "text": "Numerous techniques exist to augment the security functionality of Commercial Off-The-Shelf (COTS) applications and operating systems, making them more suitable for use in mission-critical systems. Although individually useful, as a group these techniques present difficulties to system developers because they are not based on a common framework which might simplify integration and promote portability and reuse. This paper presents techniques for developing Generic Software Wrappers – protected, non-bypassable kernel-resident software extensions for augmenting security without modification of COTS source. We describe the key elements of our work: our high-level Wrapper Definition Language (WDL), and our framework for configuring, activating, and managing wrappers. We also discuss code reuse, automatic management of extensions, a framework for system-building through composition, platform-independence, and our experiences with our Solaris and FreeBSD prototypes.", "title": "" }, { "docid": "1be971362e43b07184e04ab249f79ec6", "text": "Purpose – The purpose of this study is to develop a framework for evaluating business-IT alignment. Specifically, the authors emphasize internal business-IT alignment between business and IS groups, which is a typical setting in recent boundary-less, networked business environments.
Design/methodology/approach – Based on the previous studies, a socio-technical approach was developed to explain how the functional integration in the business-IT alignment process could be accomplished in collaborative environments. The study investigates the relationship among social alignment, technical alignment, IS effectiveness, and business performance. Findings – The results indicated that alignment between business and IS groups increased IS effectiveness and business performance. Business-IT alignment resulting from socio-technical arrangements in firms’ infrastructure has positive impacts on business performance. Research limitations/implications – This study is limited by control issues in terms of the impact of the confounding variables on business performance. Future studies need to validate the research model across industries. The study results imply that business-IT alignment is a multidimensional concept that includes social and technical activities explaining the way people and information technology institutionalize business value. Originality/value – By establishing a socio-technical framework of business-IT alignment, this study proposes a conceptual framework for business-IT alignment that accounts for not only improved technical performance, but also improved human performance as well. This study emphasizes the importance of addressing internal socio-technical collaboration in modern business environments.", "title": "" }, { "docid": "b3f423e513c543ecc9fe7003ff9880ea", "text": "Increasing attention has been paid to air quality monitoring with a rapid development in industry and transportation applications in the modern society. However, the existing air quality monitoring systems cannot provide satisfactory spatial and temporal resolutions of the air quality information with low costs in real time. In this paper, we propose a new method to implement the air quality monitoring system based on state-of-the-art Internet-of-Things (IoT) techniques. In this system, portable sensors collect the air quality information timely, which is transmitted through a low power wide area network. All air quality data are processed and analyzed in the IoT cloud. The completed air quality monitoring system, including both hardware and software, is developed and deployed successfully in urban environments. Experimental results show that the proposed system is reliable in sensing the air quality, which helps reveal the change patterns of air quality to some extent.", "title": "" }, { "docid": "016112b04486159e02c7e356b0ff63b9", "text": "The contribution of this paper is two-fold. First, we present indexing by Latent Dirichlet Allocation (LDI), an automatic document indexing method with a probabilistic concept search. The probability distributions in LDI utilizes those in Latent Dirichlet Allocation (LDA), which is a generative topic model that has been previously used in applications for document indexing tasks. However, those ad hoc applications, or their variants with smoothing techniques as prompted by previous studies in LDA-based language modeling, would result in unsatisfactory performance as the terms in documents may not properly reflect concept space. To improve the performances, we introduce a new definition of document probability vectors in the context of LDA and present a novel scheme for automatic document indexing based on it. Second, we propose an ensemble model (EnM) for document indexing.
The EnM combines basis indexing models by assigning different weights and tries to uncover the optimal weights with which the mean average precision (MAP) is maximized. To solve the optimization problem, we propose three algorithms, EnM.B, EnM.CD and EnM.PCD. EnM.B is derived based on the boosting method, EnM.CD the coordinate descent method, and EnM.PCD the parallel property of the EnM.CD. The results of our computational experiment on a benchmark data set indicate that both the proposed approaches are viable options in the document indexing tasks. © 2013 Published by Elsevier Ltd.", "title": "" }, { "docid": "f8836ddc384c799d9264b8ea43f9685a", "text": "Pattern matching has proved an extremely powerful and durable notion in functional programming. This paper contributes a new programming notation for type theory which elaborates the notion in various ways. First, as is by now quite well-known in the type theory community, definition by pattern matching becomes a more discriminating tool in the presence of dependent types, since it refines the explanation of types as well as values. This becomes all the more true in the presence of the rich class of datatypes known as inductive families (Dybjer, 1991). Secondly, as proposed by Peyton Jones (1997) for Haskell, and independently rediscovered by us, subsidiary case analyses on the results of intermediate computations, which commonly take place on the right-hand side of definitions by pattern matching, should rather be handled on the left. In simply-typed languages, this subsumes the trivial case of Boolean guards; in our setting it becomes yet more powerful. Thirdly, elementary pattern matching decompositions have a well-defined interface given by a dependent type; they correspond to the statement of an induction principle for the datatype. More general, user-definable decompositions may be defined which also have types of the same general form. Elementary pattern matching may therefore be recast in abstract form, with a semantics given by translation. Such abstract decompositions of data generalize Wadler’s (1987) notion of ‘view’. The programmer wishing to introduce a new view of a type T, and exploit it directly in pattern matching, may do so via a standard programming idiom. The type theorist, looking through the Curry–Howard lens, may see this as proving a theorem, one which establishes the validity of a new induction principle for T. We develop enough syntax and semantics to account for this high-level style of programming in dependent type theory. We close with the development of a typechecker for the simply-typed lambda calculus, which furnishes a view of raw terms as either being well-typed, or containing an error. The implementation of this view is ipso facto a proof that typechecking is decidable.", "title": "" }, { "docid": "3dfee4e741b5610571dbc2734c427350", "text": "Anomaly detection in crowd scene is very important because of more concern with people safety in public place. This paper presents an approach to automatically detect abnormal behavior in crowd scene. For this purpose, instead of tracking every person, KLT corners are extracted as feature points to represent moving objects and tracked by optical flow technique to generate motion vectors, which are used to describe motion. We divide whole frame into small blocks, and motion pattern in each block is encoded by the distribution of motion vectors in it.
Similar motion patterns are clustered into pattern model in an unsupervised way, and we classify motion pattern into normal or abnormal group according to the deviation between motion pattern and trained model. The results on abnormal events detection in real video demonstrate the effectiveness of the approach.", "title": "" }, { "docid": "971e39e4b99695f249ec1d367b5044f0", "text": "Research on curiosity has undergone 2 waves of intense activity. The 1st, in the 1960s, focused mainly on curiosity's psychological underpinnings. The 2nd, in the 1970s and 1980s, was characterized by attempts to measure curiosity and assess its dimensionality. This article reviews these contributions with a concentration on the 1st wave. It is argued that theoretical accounts of curiosity proposed during the 1st period fell short in 2 areas: They did not offer an adequate explanation for why people voluntarily seek out curiosity, and they failed to delineate situational determinants of curiosity. Furthermore, these accounts did not draw attention to, and thus did not explain, certain salient characteristics of curiosity: its intensity, transience, association with impulsivity, and tendency to disappoint when satisfied. A new account of curiosity is offered that attempts to address these shortcomings. The new account interprets curiosity as a form of cognitively induced deprivation that arises from the perception of a gap in knowledge or understanding.", "title": "" } ]
scidocsrr
17a38af29cef5fdb4432d14196046bcd
Research on leadership in a cross-cultural context: Making progress, and raising new questions
[ { "docid": "95c14d030cfaca90cf8e97213c77595a", "text": "Add list of 170 authors and their institutions here. * The authors are indebted to Markus Hauser, University of Zurich, for his thoughtful comments and suggestions relevant to this monograph. 2 ABSTRACT GLOBE is both a research program and a social entity. The GLOBE social entity is a network of 170 social scientists and management scholars from 61 cultures throughout the world, working in a coordinated long-term effort to examine the interrelationships between societal culture, organizational culture and practices, and organizational leadership. The meta-goal of the Global Leadership and Organizational Effectiveness (GLOBE) Research Program is to develop an empirically based theory to describe, understand, and predict the impact of cultural variables on leadership and organizational processes and the effectiveness of these processes. This monograph presents a description of the GLOBE research program and some initial empirical findings resulting from GLOBE research. A central question in this part of the research concerns the extent to which specific leadership attributes and behaviors are universally endorsed as contributing to effective leadership and the extent to which the endorsement of leader attributes and behaviors is culturally contingent. We identified six global leadership dimensions of culturally endorsed implicit theories of leadership (CLTs). Preliminary evidence indicates that these dimensions are significantly correlated with isomorphic dimensions of societal and organizational culture. These findings are consistent with the hypothesis that selected cultural differences strongly influence important ways in which people think about leaders and norms concerning the status, influence, and privileges granted to leaders. The hypothesis that charismatic/value-based leadership would be universally endorsed is strongly supported. Team-oriented leadership is strongly correlated with charismatic/value-based leadership, and also universally endorsed. Humane and participative leadership dimensions are nearly universally endorsed. The endorsement of the remaining global leadership dimensions-self-protective and autonomous leadership vary by culture. 3 We identified 21 specific leader attributes and behaviors that are universally viewed as contributing to leadership effectiveness. Eleven of the specific leader characteristics composing the global charismatic/value-based leadership dimension were among these 21 attributes. Eight specific leader characteristics were universally viewed as impediments to leader effectiveness. We also identified 35 specific leader characteristics that are viewed as contributors in some cultures and impediments in other cultures. We present these, as well as other findings, in more detail in this monograph. A particular strength of the GLOBE research design is the combination of quantitative and qualitative data. Elimination of common method and common source variance is also …", "title": "" }, { "docid": "a5ff7c80c36f354889e3f48e94052195", "text": "A meta-analysis examined emotion recognition within and across cultures. Emotions were universally recognized at better-than-chance levels. Accuracy was higher when emotions were both expressed and recognized by members of the same national, ethnic, or regional group, suggesting an in-group advantage. 
This advantage was smaller for cultural groups with greater exposure to one another, measured in terms of living in the same nation, physical proximity, and telephone communication. Majority group members were poorer at judging minority group members than the reverse. Cross-cultural accuracy was lower in studies that used a balanced research design, and higher in studies that used imitation rather than posed or spontaneous emotional expressions. Attributes of study design appeared not to moderate the size of the in-group advantage.", "title": "" } ]
[ { "docid": "ce305309d82e2d2a3177852c0bb08105", "text": "BACKGROUND\nEmpathizing is a specific component of social cognition. Empathizing is also specifically impaired in autism spectrum condition (ASC). These are two dimensions, measurable using the Empathy Quotient (EQ) and the Autism Spectrum Quotient (AQ). ASC also involves strong systemizing, a dimension measured using the Systemizing Quotient (SQ). The present study examined the relationship between the EQ, AQ and SQ. The EQ and SQ have been used previously to test for sex differences in 5 'brain types' (Types S, E, B and extremes of Type S or E). Finally, people with ASC have been conceptualized as an extreme of the male brain.\n\n\nMETHOD\nWe revised the SQ to avoid a traditionalist bias, thus producing the SQ-Revised (SQ-R). AQ and EQ were not modified. All 3 were administered online.\n\n\nSAMPLE\nStudents (723 males, 1038 females) were compared to a group of adults with ASC group (69 males, 56 females).\n\n\nAIMS\n(1) To report scores from the SQ-R. (2) To test for SQ-R differences among students in the sciences vs. humanities. (3) To test if AQ can be predicted from EQ and SQ-R scores. (4) To test for sex differences on each of these in a typical sample, and for the absence of a sex difference in a sample with ASC if both males and females with ASC are hyper-masculinized. (5) To report percentages of males, females and people with an ASC who show each brain type.\n\n\nRESULTS\nAQ score was successfully predicted from EQ and SQ-R scores. In the typical group, males scored significantly higher than females on the AQ and SQ-R, and lower on the EQ. The ASC group scored higher than sex-matched controls on the SQ-R, and showed no sex differences on any of the 3 measures. More than twice as many typical males as females were Type S, and more than twice as many typical females as males were Type E. The majority of adults with ASC were Extreme Type S, compared to 5% of typical males and 0.9% of typical females. The EQ had a weak negative correlation with the SQ-R.\n\n\nDISCUSSION\nEmpathizing is largely but not completely independent of systemizing. The weak but significant negative correlation may indicate a trade-off between them. ASC involves impaired empathizing alongside intact or superior systemizing. Future work should investigate the biological basis of these dimensions, and the small trade-off between them.", "title": "" }, { "docid": "697580dda38c9847e9ad7c6a14ad6cd0", "text": "Background: This paper describes an analysis that was conducted on newly collected repository with 92 versions of 38 proprietary, open-source and academic projects. A preliminary study perfomed before showed the need for a further in-depth analysis in order to identify project clusters.\n Aims: The goal of this research is to perform clustering on software projects in order to identify groups of software projects with similar characteristic from the defect prediction point of view. One defect prediction model should work well for all projects that belong to such group. The existence of those groups was investigated with statistical tests and by comparing the mean value of prediction efficiency.\n Method: Hierarchical and k-means clustering, as well as Kohonen's neural network was used to find groups of similar projects. The obtained clusters were investigated with the discriminant analysis. For each of the identified group a statistical analysis has been conducted in order to distinguish whether this group really exists. 
Two defect prediction models were created for each of the identified groups. The first one was based on the projects that belong to a given group, and the second one - on all the projects. Then, both models were applied to all versions of projects from the investigated group. If the predictions from the model based on projects that belong to the identified group are significantly better than the all-projects model (the mean values were compared and statistical tests were used), we conclude that the group really exists.\n Results: Six different clusters were identified and the existence of two of them was statistically proven: 1) cluster proprietary B -- T=19, p=0.035, r=0.40; 2) cluster proprietary/open - t(17)=3.18, p=0.05, r=0.59. The obtained effect sizes (r) represent large effects according to Cohen's benchmark, which is a substantial finding.\n Conclusions: The two identified clusters were described and compared with results obtained by other researchers. The results of this work makes next step towards defining formal methods of reuse defect prediction models by identifying groups of projects within which the same defect prediction model may be used. Furthermore, a method of clustering was suggested and applied.", "title": "" }, { "docid": "6a602e4f48c0eb66161bce46d53f0409", "text": "In this paper, we propose three metrics for detecting botnets through analyzing their behavior. Our social infrastructure (i.e., the Internet) is currently experiencing the danger of bots' malicious activities as the scale of botnets increases. Although it is imperative to detect botnet to help protect computers from attacks, effective metrics for botnet detection have not been adequately researched. In this work we measure enormous amounts of traffic passing through the Asian Internet Interconnection Initiatives (AIII) infrastructure. To validate the effectiveness of our proposed metrics, we analyze measured traffic in three experiments. The experimental results reveal that our metrics are applicable for detecting botnets, but further research is needed to refine their performance", "title": "" }, { "docid": "bedbcad4da43f0e3110ec6fbfd8b72c7", "text": "Energy conservation is one of the serious problems faced by WSN as the sensor nodes have limited battery power and are expected to perform data aggregation and actuation functions in addition to sensing data. Literature has plenty of solutions proposed to reduce energy consumption and usage. With the recent upcoming technology of introducing network programmability that centralizes network management tasks using software defined architecture (SDN), network trafficking is a prominent domain for applicability of SDN. Inherent traffic issues in WSN like data forwarding, aggregation of the data, path break and energy consumption can be efficiently handled by SDN, which provides a platform in which the data plane and the control plane are separated. By integrating SDN in WSN, the sensor nodes perform only forwarding and don't take any routing decision, due to which energy usage will be reduced. We propose a general framework for a software-defined wireless sensor network where the controller will be implemented at the base station, centre nodes in the cluster acts as switches and communication between the switch and the controller is via OpenFlow protocol. 
We realize the energy saving in the proposed architecture with the results obtained using NS2 and mininet emulator environments.", "title": "" }, { "docid": "2e389715d9beb1bc7c9ab06131abc67a", "text": "Digital forensic science is very much still in its infancy, but is becoming increasingly invaluable to investigators. A popular area for research is seeking a standard methodology to make the digital forensic process accurate, robust, and efficient. The first digital forensic process model proposed contains four steps: Acquisition, Identification, Evaluation and Admission. Since then, numerous process models have been proposed to explain the steps of identifying, acquiring, analysing, storage, and reporting on the evidence obtained from various digital devices. In recent years, an increasing number of more sophisticated process models have been proposed. These models attempt to speed up the entire investigative process or solve various of problems commonly encountered in the forensic investigation. In the last decade, cloud computing has emerged as a disruptive technological concept, and most leading enterprises such as IBM, Amazon, Google, and Microsoft have set up their own cloud-based services. In the field of digital forensic investigation, moving to a cloud-based evidence processing model would be extremely beneficial and preliminary attempts have been made in its implementation. Moving towards a Digital Forensics as a Service model would not only expedite the investigative process, but can also result in significant cost savings – freeing up digital forensic experts and law enforcement personnel to progress their caseload. This paper aims to evaluate the applicability of existing digital forensic process models and analyse how each of these might apply to a cloudbased evidence processing paradigm.", "title": "" }, { "docid": "b13a03598044db36ecf4634317071b34", "text": "Space Religion Encryption Sport Science space god encryption player science satellite atheism device hall theory april exist technology defensive scientific sequence atheist protect team universe launch moral americans average experiment president existence chip career observation station marriage use league evidence radar system privacy play exist training parent industry bob god committee murder enforcement year mistake", "title": "" }, { "docid": "2c667b86fffdcb69e35a21795fc0e3bd", "text": "We compiled details of over 8000 assessments of protected area management effectiveness across the world and developed a method for analyzing results across diverse assessment methodologies and indicators. Data was compiled and analyzed for over 4000 of these sites. Management of these protected areas varied from weak to effective, with about 40% showing major deficiencies. About 14% of the surveyed areas showed significant deficiencies across many management effectiveness indicators and hence lacked basic requirements to operate effectively. Strongest management factors recorded on average related to establishment of protected areas (legal establishment, design, legislation and boundary marking) and to effectiveness of governance; while the weakest aspects of management included community benefit programs, resourcing (funding reliability and adequacy, staff numbers and facility and equipment maintenance) and management effectiveness evaluation. Estimations of management outcomes, including both environmental values conservation and impact on communities, were positive. 
We conclude that in spite of inadequate funding and management process, there are indications that protected areas are contributing to biodiversity conservation and community well-being.", "title": "" }, { "docid": "733e379ecaab79ac328f55ccc2384b69", "text": "Introduction Since Beijing 1995, gender mainstreaming has heralded the beginning of a renewed effort to address what is seen as one of the roots of gender inequality: the genderedness of systems, procedures and organizations. In the definition of the Council of Europe, gender mainstreaming is the (re)organisation, improvement, development and evaluation of policy processes, so that a gender equality perspective is incorporated in all policies at all levels and at all stages, by the actors normally involved in policymaking. All member states and some candidate states of the European Union have started to implement gender mainstreaming. The 1997 Treaty of Amsterdam places equality between women and men among the explicit tasks of the European Union and obliges the EU to promote gender equality in all its tasks and activities. The Gender Mainstreaming approach that has been legitimated by this Treaty is backed by legislation and by positive action in favour of women (or the “under-represented sex”). Gender equality policies have not only been part and parcel of modernising action in the European Union, but can be expected to continue to be so (Rossili 2000). With regard to gender inequality, the EU has both a formal EU problem definition at the present time, and a formalised set of EU strategies. Problems in the implementation of gender equality policies abound, at both national and EU level. To give just one example, it took the Netherlands – usually very supportive of the EU –14 years to implement article 119 on Equal Pay (Van der Vleuten 2001). Moreover, it has been documented that overall EU action has run counter to its goal of gender equality. Overall EU action has weakened women’s social rights more seriously than men’s (Rossili 2000). The introduction of Gender Mainstreaming, the incorporation of gender and women’s concerns in all regular policymaking is meant to address precisely this problem of a contradiction between specific gender policies and regular EU policies. Yet, in the case of the Structural Funds, for instance, Gender Mainstreaming has been used to further reduce existing funds and incentives for gender equality (Rossili 2000). Against this backdrop, this paper will present an approach at studying divergences in policy frames around gender equality as one of the elements connected to implementation problems: the MAGEEQ project.", "title": "" }, { "docid": "c4d204b8ceda86e9d8e4ca56214f0ba3", "text": "This article may be used for research, teaching and private study purposes. Any substantial or systematic reproduction, redistribution , reselling , loan or sub-licensing, systematic supply or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. 
The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.", "title": "" }, { "docid": "65b986cbfe1c3668b0cdea4321e4921e", "text": "Once a bug in software is reported, developers have to determine which source files are related to the bug. This process is referred to as bug localization, and an automatic way of bug localization is important to improve developers' productivity. This paper proposes an approach called DrewBL to efficiently localize faulty files for a given bug report using a natural language processing tool, word2vec. In DrewBL, we first build a vector space model named semantic-VSM which represents a distributed representation of words in the bug report and source code files and next compute the relevance between them by feeding the constructed model to word2vec. We also present an approach called CombBL to further improve the accuracy of bug localization which employs not only the proposed DrewBL but also existing bug localization techniques, such as BugLocator based on textual similarity and Bugspots based on bug-fixing history, in a combinational manner. This paper gives our early experimental results to show the effectiveness and efficiency of the proposed approaches using two open source projects.", "title": "" }, { "docid": "749cfda68d5d7f09c0861dc723563db9", "text": "BACKGROUND\nOnline social networking use has been integrated into adolescents' daily life and the intensity of online social networking use may have important consequences on adolescents' well-being. However, there are few validated instruments to measure social networking use intensity. The present study aims to develop the Social Networking Activity Intensity Scale (SNAIS) and validate it among junior middle school students in China.\n\n\nMETHODS\nA total of 910 students who were social networking users were recruited from two junior middle schools in Guangzhou, and 114 students were retested after two weeks to examine the test-retest reliability. The psychometrics of the SNAIS were estimated using appropriate statistical methods.\n\n\nRESULTS\nTwo factors, Social Function Use Intensity (SFUI) and Entertainment Function Use Intensity (EFUI), were clearly identified by both exploratory and confirmatory factor analyses. No ceiling or floor effects were observed for the SNAIS and its two subscales. The SNAIS and its two subscales exhibited acceptable reliability (Cronbach's alpha = 0.89, 0.90 and 0.60, and test-retest Intra-class Correlation Coefficient = 0.85, 0.87 and 0.67 for Overall scale, SFUI and EFUI subscale, respectively, p<0.001). As expected, the SNAIS and its subscale scores were correlated significantly with emotional connection to social networking, social networking addiction, Internet addiction, and characteristics related to social networking use.\n\n\nCONCLUSIONS\nThe SNAIS is an easily self-administered scale with good psychometric properties. It would facilitate more research in this field worldwide and specifically in the Chinese population.", "title": "" }, { "docid": "fdb9da0c4b6225c69de16411c79ac9dc", "text": "Phylogenetic analyses reveal the evolutionary derivation of species. A phylogenetic tree can be inferred from multiple sequence alignments of proteins or genes. 
The alignment of whole genome sequences of higher eukaryotes is a computational intensive and ambitious task as is the computation of phylogenetic trees based on these alignments. To overcome these limitations, we here used an alignment-free method to compare genomes of the Brassicales clade. For each nucleotide sequence a Chaos Game Representation (CGR) can be computed, which represents each nucleotide of the sequence as a point in a square defined by the four nucleotides as vertices. Each CGR is therefore a unique fingerprint of the underlying sequence. If the CGRs are divided by grid lines each grid square denotes the occurrence of oligonucleotides of a specific length in the sequence (Frequency Chaos Game Representation, FCGR). Here, we used distance measures between FCGRs to infer phylogenetic trees of Brassicales species. Three types of data were analyzed because of their different characteristics: (A) Whole genome assemblies as far as available for species belonging to the Malvidae taxon. (B) EST data of species of the Brassicales clade. (C) Mitochondrial genomes of the Rosids branch, a supergroup of the Malvidae. The trees reconstructed based on the Euclidean distance method are in general agreement with single gene trees. The Fitch-Margoliash and Neighbor joining algorithms resulted in similar to identical trees. Here, for the first time we have applied the bootstrap re-sampling concept to trees based on FCGRs to determine the support of the branchings. FCGRs have the advantage that they are fast to calculate, and can be used as additional information to alignment based data and morphological characteristics to improve the phylogenetic classification of species in ambiguous cases.", "title": "" }, { "docid": "2a4360b7031aa9c191a81b1b14307db9", "text": "Wireless body area network (BAN) is a promising technology for real-time monitoring of physiological signals to support medical applications. In order to ensure the trustworthy and reliable gathering of patient's critical health information, it is essential to provide node authentication service in a BAN, which prevents an attacker from impersonation and false data/command injection. Although quite fundamental, the authentication in BAN still remains a challenging issue. On one hand, traditional authentication solutions depend on prior trust among nodes whose establishment would require either key pre-distribution or non-intuitive participation by inexperienced users, while they are vulnerable to key compromise. On the other hand, most existing non-cryptographic authentication schemes require advanced hardware capabilities or significant modifications to the system software, which are impractical for BANs.\n In this paper, for the first time, we propose a lightweight body area network authentication scheme (BANA) that does not depend on prior-trust among the nodes and can be efficiently realized on commercial off-the-shelf low-end sensor devices. This is achieved by exploiting physical layer characteristics unique to a BAN, namely, the distinct received signal strength (RSS) variation behaviors between an on-body communication channel and an off-body channel. Our main finding is that the latter is more unpredictable over time, especially under various body motion scenarios. This unique channel characteristic naturally arises from the multi-path environment surrounding a BAN, and cannot be easily forged by attackers. We then adopt clustering analysis to differentiate the signals from an attacker and a legitimate node. 
The effectiveness of BANA is validated through extensive real-world experiments under various scenarios. It is shown that BANA can accurately identify multiple attackers with minimal amount of overhead.", "title": "" }, { "docid": "a94c719e74c9416e74f286990e1c0478", "text": "Hemiparetic gait is characterized by slow speed and poorly coordinated movements. Because the values of gait parameters vary with changes in speed, the slow speed that is typical of hemiparetic gait necessitates applying controls for the influence of speed when comparing hemiparetic and able-bodied persons. Gait kinetics and kinematics were measured in seven hemiparetic and seven able-bodied adults to compare their gait patterns at similar speeds and to assess the effectiveness of ankle-foot orthoses which were double-stopped in 5 degrees of dorsiflexion or 5 degrees of plantarflexion. Hemiparetic persons ambulating without the orthoses had a shorter step length, longer duration stance, and shorter duration swing than normal. They displayed greater than normal flexion of the affected hip during midstance, which, by putting the center of mass farther in front of the knee, may explain the increased knee extension moment due to vertical force. Affected hip adduction during single support was less in hemiparetic persons than in able-bodied persons, indicating a decreased lateral shift to the paretic side. During the swing phase, the affected limbs of hemiparetic persons were in less knee flexion and less dorsiflexion than normal, necessitating circumduction to achieve toe clearance. Ankle-foot orthoses increased walking speed to normalize heelstrike duration through use of an optimally adjusted plantarflexion stop. An improperly adjusted orthosis may produce an exaggerated knee flexion moment resulting in knee instability.", "title": "" }, { "docid": "f3799058b34bc96ffa7e77810b9b1b2f", "text": "English. In this paper, we propose a Deep Learning architecture for sequence labeling based on a state of the art model that exploits both wordand characterlevel representations through the combination of bidirectional LSTM, CNN and CRF. We evaluate the proposed method on three Natural Language Processing tasks for Italian: PoS-tagging of tweets, Named Entity Recognition and Super-Sense Tagging. Results show that the system is able to achieve state of the art performance in all the tasks and in some cases overcomes the best systems previously developed for the Italian. Italiano. In questo lavoro viene descritta un’architettura di Deep Learning per l’etichettatura di sequenze basata su un modello allo stato dell’arte che utilizza rappresentazioni sia a livello di carattere che di parola attraverso la combinazione di LSTM, CNN e CRF. Il metodo è stato valutato in tre task di elaborazione del linguaggio naturale per la lingua italiana: il PoS-tagging di tweet, il riconoscimento di entità e il Super-Sense Tagging. I risultati ottenuti dimostrano che il sistema è in grado di raggiungere prestazioni allo stato dell’arte in tutti i task e in alcuni casi riesce a superare i sistemi precedentemente sviluppati per la lingua italiana. 1 Background and Motivation Deep Learning (DL) gained a lot of attention in last years for its capacity to generalize models without the need of feature engineering and its ability to provide good performance. On the other hand good performance can be achieved by accurately designing the architecture used to perform the learning task. 
In Natural Language Processing (NLP) several DL architectures have been proposed to solve many tasks, ranging from speech recognition to parsing. Some typical NLP tasks can be solved as sequence labeling problem, such as part-of-speech (PoS) tagging and Named Entity Recognition (NER). Traditional high performance NLP methods for sequence labeling are linear statistical models, including Conditional Random Fields (CRF) and Hidden Markov Models (HMM) (Ratinov and Roth, 2009; Passos et al., 2014; Luo et al., 2015), which rely on hand-crafted features and task/language specific resources. However, developing such task/language specific resources has a cost, moreover it makes difficult to adapt the model to new tasks, new domains or new languages. In (Ma and Hovy, 2016), the authors propose a state of the art sequence labeling method based on a neural network architecture that benefits from both wordand character-level representations through the combination of bidirectional LSTM, CNN and CRF. The method is able to achieve state of the art performance in sequence labeling tasks for the English without the use of hand-crafted features. In this paper, we exploit the aforementioned architecture for solving three NLP tasks in Italian: PoS-tagging of tweets, NER and Super Sense Tagging (SST). Our research question is to prove the effectiveness of the DL architecture in a different language, in this case Italian, without using language specific features. The results of the evaluation prove that our approach is able to achieve state of the art performance and in some cases it is able to overcome the best systems developed for the Italian without the usage of specific language resources. The paper is structured as follows: Section 2 provides details about our methodology and summarizes the DL architecture proposed in (Ma and Hovy, 2016), while Section 3 shows the results of the evaluation. Final remarks are reported in Section 4.", "title": "" }, { "docid": "658f2d045fe005ee1a4016b2de0ae1b1", "text": "Given a partial description like “she opened the hood of the car,” humans can reason about the situation and anticipate what might come next (“then, she examined the engine”). In this paper, we introduce the task of grounded commonsense inference, unifying natural language inference and commonsense reasoning. We present Swag, a new dataset with 113k multiple choice questions about a rich spectrum of grounded situations. To address the recurring challenges of the annotation artifacts and human biases found in many existing datasets, we propose Adversarial Filtering (AF), a novel procedure that constructs a de-biased dataset by iteratively training an ensemble of stylistic classifiers, and using them to filter the data. To account for the aggressive adversarial filtering, we use state-of-theart language models to massively oversample a diverse set of potential counterfactuals. Empirical results demonstrate that while humans can solve the resulting inference problems with high accuracy (88%), various competitive models struggle on our task. We provide comprehensive analysis that indicates significant opportunities for future research.", "title": "" }, { "docid": "6539dddc2fe95b6d542d1654749af7eb", "text": "Botnets are the preeminent source of online crime and arguably the greatest threat to the Internet infrastructure. In this paper, we present ZombieCoin, a botnet command-and-control (C&C) mechanism that runs on the Bitcoin network. 
ZombieCoin offers considerable advantages over existing C&C techniques, most notably the fact that Bitcoin is designed to resist the very regulatory processes currently used to combat botnets. We believe this is a desirable avenue botmasters may explore in the near future and our work is intended as a first step towards devising effective countermeasures.", "title": "" }, { "docid": "f001f2933b3c96fe6954e086488776e0", "text": "Pd coated copper (PCC) wire and Au-Pd coated copper (APC) wire have been widely used in the field of LSI device. Recently, higher bond reliability at high temperature becomes increasingly important for on-vehicle devices. However, it has been reported that conventional PCC wire caused a bond failure at elevated temperatures. On the other hand, new-APC wire had higher reliability at higher temperature than conventional APC wire. New-APC wire has higher concentration of added element than conventional APC wire. In this paper, failure mechanism of conventional APC wire and improved mechanism of new-APC wire at high temperature were shown. New-APC wire is suitable for onvehicle devices.", "title": "" }, { "docid": "df152d3c4dd667b642415b14c25b4513", "text": "We propose a methodology for automatic synthesis of embedded control software that accounts for exogenous disturbances. The resulting system is guaranteed, by construction, to satisfy a given specification expressed in linear temporal logic. The embedded control software consists of three components: a goal generator, a trajectory planner, and a continuous controller. We demonstrate the effectiveness of the proposed technique through an example of an autonomous vehicle navigating an urban environment. This example also illustrates that the system is not only robust with respect to exogenous disturbances but also capable of handling violation of the environment assumptions.", "title": "" }, { "docid": "8ac205b5b2344b64e926a5e18e43322f", "text": "In 2015, Google's Deepmind announced an advancement in creating an autonomous agent based on deep reinforcement learning (DRL) that could beat a professional player in a series of 49 Atari games. However, the current manifestation of DRL is still immature, and has significant drawbacks. One of DRL's imperfections is its lack of \"exploration\" during the training process, especially when working with high-dimensional problems. In this paper, we propose a mixed strategy approach that mimics behaviors of human when interacting with environment, and create a \"thinking\" agent that allows for more efficient exploration in the DRL training process. The simulation results based on the Breakout game show that our scheme achieves a higher probability of obtaining a maximum score than does the baseline DRL algorithm, i.e., the asynchronous advantage actor-critic method. The proposed scheme therefore can be applied effectively to solving a complicated task in a real-world application.", "title": "" } ]
scidocsrr
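The exploration scheme summarized in the reinforcement-learning passage above can be sketched concretely. The snippet below is a minimal illustration only: the mixing probabilities, the heuristic rule, and every function name are assumptions made for the sake of the example, not code from the cited paper or from this dataset.

```python
import random

# Illustrative sketch only: a "mixed strategy" action selector that blends
# greedy exploitation, uniform random exploration, and a hand-crafted
# heuristic intended to mimic human-like probing of the environment.
# The probabilities and the heuristic below are hypothetical.

def heuristic_action(observation, n_actions):
    # Hypothetical human-like rule: pick an action deterministically from a
    # simple feature of the observation (here, the hash of its values).
    return hash(tuple(observation)) % n_actions

def select_action(q_values, observation, eps_random=0.05, eps_heuristic=0.15):
    """Pick an action for one training step.

    q_values: estimated action values for the current observation.
    eps_random / eps_heuristic: assumed mixing probabilities.
    """
    n_actions = len(q_values)
    r = random.random()
    if r < eps_random:                      # pure random exploration
        return random.randrange(n_actions)
    if r < eps_random + eps_heuristic:      # heuristic, human-like exploration
        return heuristic_action(observation, n_actions)
    # otherwise exploit the current value estimates
    return max(range(n_actions), key=lambda a: q_values[a])

if __name__ == "__main__":
    # Toy usage with made-up values for a 4-action environment.
    obs = (0.2, -1.0, 3.0)
    q = [0.1, 0.7, 0.3, 0.2]
    print(select_action(q, obs))
```

The design point is simply that a cheap heuristic can inject structured exploration that uniform random sampling does not provide.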
6e6e4e745cc064800422b133d2f550ab
Effective Crop Productivity and Nutrient Level Monitoring in Agriculture Soil Using IOT
[ { "docid": "b3068a1b1acb0782d2c2b1dac65042cf", "text": "Measurement of N (nitrogen), P (phosphorus) and K ( potassium) contents of soil is necessary to decide how much extra contents of these nutrients are to b e added in the soil to increase crop fertility. Thi s improves the quality of the soil which in turn yields a good qua lity crop. In the present work fiber optic based c olor sensor has been developed to determine N, P, and K values in t he soil sample. Here colorimetric measurement of aq ueous solution of soil has been carried out. The color se nsor is based on the principle of absorption of col or by solution. It helps in determining the N, P, K amounts as high, m edium, low, or none. The sensor probe along with p roper signal conditioning circuits is built to detect the defici ent component of the soil. It is useful in dispensi ng only required amount of fertilizers in the soil.", "title": "" }, { "docid": "e5c6ed3e71cb971b5766a18facbc76f3", "text": "The main objective of the present paper is to develop a smart wireless sensor network (WSN) for an agricultural environment. Monitoring agricultural environment for various factors such as temperature and humidity along with other factors can be of significance. The advanced development in wireless sensor networks can be used in monitoring various parameters in agriculture. Due to uneven natural distribution of rain water it is very difficult for farmers to monitor and control the distribution of water to agriculture field in the whole farm or as per the requirement of the crop. There is no ideal irrigation method for all weather conditions, soil structure and variety of crops cultures. Farmers suffer large financial losses because of wrong prediction of weather and incorrect irrigation methods. Sensors are the essential device for precision agricultural applications. In this paper we have detailed about how to utilize the sensors in crop field area and explained about Wireless Sensor Network (WSN), Zigbee network, Protocol stack, zigbee Applications and the results are given, when implemented the zigbee network experimentally in real time environment.", "title": "" }, { "docid": "8bcc223389b7cc2ce2ef4e872a029489", "text": "Issues concerning agriculture, countryside and farmers have been always hindering China’s development. The only solution to these three problems is agricultural modernization. However, China's agriculture is far from modernized. The introduction of cloud computing and internet of things into agricultural modernization will probably solve the problem. Based on major features of cloud computing and key techniques of internet of things, cloud computing, visualization and SOA technologies can build massive data involved in agricultural production. Internet of things and RFID technologies can help build plant factory and realize automatic control production of agriculture. Cloud computing is closely related to internet of things. A perfect combination of them can promote fast development of agricultural modernization, realize smart agriculture and effectively solve the issues concerning agriculture, countryside and farmers.", "title": "" } ]
[ { "docid": "70991373ae71f233b0facd2b5dd1a0d3", "text": "Information communications technology systems are facing an increasing number of cyber security threats, the majority of which are originated by insiders. As insiders reside behind the enterprise-level security defence mechanisms and often have privileged access to the network, detecting and preventing insider threats is a complex and challenging problem. In fact, many schemes and systems have been proposed to address insider threats from different perspectives, such as intent, type of threat, or available audit data source. This survey attempts to line up these works together with only three most common types of insider namely traitor, masquerader, and unintentional perpetrator, while reviewing the countermeasures from a data analytics perspective. Uniquely, this survey takes into account the early stage threats which may lead to a malicious insider rising up. When direct and indirect threats are put on the same page, all the relevant works can be categorised as host, network, or contextual data-based according to audit data source and each work is reviewed for its capability against insider threats, how the information is extracted from the engaged data sources, and what the decision-making algorithm is. The works are also compared and contrasted. Finally, some issues are raised based on the observations from the reviewed works and new research gaps and challenges identified.", "title": "" }, { "docid": "944dd53232522155103fc2d1578041dd", "text": "Bayesian optimization with Gaussian processes has become an increasingly popular tool in the machine learning community. It is efficient and can be used when very little is known about the objective function, making it popular in expensive black-box optimization scenarios. It uses Bayesian methods to sample the objective efficiently using an acquisition function which incorporates the model’s estimate of the objective and the uncertainty at any given point. However, there are several different parameterized acquisition functions in the literature, and it is often unclear which one to use. Instead of using a single acquisition function, we adopt a portfolio of acquisition functions governed by an online multi-armed bandit strategy. We propose several portfolio strategies, the best of which we call GP-Hedge, and show that this method outperforms the best individual acquisition function. We also provide a theoretical bound on the algorithm’s performance.", "title": "" }, { "docid": "6a5e82922bb1fa9c543adee821a06859", "text": "BACKGROUND\nThe most important factor which predisposes young people to suicide is depression, although protective factors such as self-esteem, emotional adaptation and social support may reduce the probability of suicidal ideation and suicide attempts. Several studies have indicated an elevated risk of suicide for health-related professions. 
Little is known, however, about the relationship between perceived emotional intelligence and suicide risk among nursing students.\n\n\nOBJECTIVES\nThe main goals were to determine the prevalence of suicide risk in a sample of nursing students, to examine the relationship between suicide risk and perceived emotional intelligence, depression, trait anxiety and self-esteem, and to identify any gender differences in relation to these variables.\n\n\nMETHOD\nCross-sectional study of nursing students (n=93) who completed self-report measures of perceived emotional intelligence (Trait Meta-Mood Scale, which evaluates three dimensions: emotional attention, clarity and repair), suicide risk (Plutchik Suicide Risk Scale), self-esteem (Rosenberg Self-esteem Scale), depression (Zung Self-Rating Depression Scale) and anxiety (Trait scale of the State-Trait Anxiety Inventory).\n\n\nRESULTS\nLinear regression analysis confirmed that depression and emotional attention are significant predictors of suicidal ideation. Moreover, suicide risk showed a significant negative association with self-esteem and with emotional clarity and repair. Gender differences were only observed in relation to depression, on which women scored significantly higher. Overall, 14% of the students were considered to present a substantial suicide risk.\n\n\nCONCLUSIONS\nThe findings suggest that interventions to prevent suicidal ideation among nursing students should include strategies to detect mood disorders (especially depression) and to improve emotional coping skills. In line with previous research the results indicate that high scores on emotional attention are linked to heightened emotional susceptibility and an increased risk of suicide. The identification and prevention of factors associated with suicidal behaviour in nursing students should be regarded as a priority.", "title": "" }, { "docid": "79cffed53f36d87b89577e96a2b2e713", "text": "Human pose estimation has made significant progress during the last years. However current datasets are limited in their coverage of the overall pose estimation challenges. Still these serve as the common sources to evaluate, train and compare different models on. In this paper we introduce a novel benchmark \"MPII Human Pose\" that makes a significant advance in terms of diversity and difficulty, a contribution that we feel is required for future developments in human body models. This comprehensive dataset was collected using an established taxonomy of over 800 human activities [1]. The collected images cover a wider variety of human activities than previous datasets including various recreational, occupational and householding activities, and capture people from a wider range of viewpoints. We provide a rich set of labels including positions of body joints, full 3D torso and head orientation, occlusion labels for joints and body parts, and activity labels. For each image we provide adjacent video frames to facilitate the use of motion information. Given these rich annotations we perform a detailed analysis of leading human pose estimation approaches and gaining insights for the success and failures of these methods.", "title": "" }, { "docid": "d6f7a0572a3fec40a2db9b6a233de233", "text": "Nowadays people use web pages and email to share secret information. To ensure the secure message transformation, we used cryptography in combination with Steganography to achieve the desire results. To improve security, the encrypted message is hiding in HTML, XML and XHTML. 
Our technique is implemented in two levels of randomness i.e. at the file level and content level and encrypted with AES to achieve the maximum security. In addition, the proposed technique is using Unicode languages to take a secret message and has better capacity than the existing methodologies as only two spaces are required to hide the one character. The results show our technique provides high hidden capacity and security than an existing algorithm.", "title": "" }, { "docid": "8c0d50acd23e4995c4717ef049708a1c", "text": "What do you do to start reading introduction to computing and programming in python a multimedia approach 2nd edition? Searching the book that you love to read first or find an interesting book that will make you want to read? Everybody has difference with their reason of reading a book. Actuary, reading habit must be from earlier. Many people may be love to read, but not a book. It's not fault. Someone will be bored to open the thick book with small words to read. In more, this is the real condition. So do happen probably with this introduction to computing and programming in python a multimedia approach 2nd edition.", "title": "" }, { "docid": "76b081d26dc339218652cd6d7e0dfe4c", "text": "Software developers working on change tasks commonly experience a broad range of emotions, ranging from happiness all the way to frustration and anger. Research, primarily in psychology, has shown that for certain kinds of tasks, emotions correlate with progress and that biometric measures, such as electro-dermal activity and electroencephalography data, might be used to distinguish between emotions. In our research, we are building on this work and investigate developers' emotions, progress and the use of biometric measures to classify them in the context of software change tasks. We conducted a lab study with 17 participants working on two change tasks each. Participants were wearing three biometric sensors and had to periodically assess their emotions and progress. The results show that the wide range of emotions experienced by developers is correlated with their perceived progress on the change tasks. Our analysis also shows that we can build a classifier to distinguish between positive and negative emotions in 71.36% and between low and high progress in 67.70% of all cases. These results open up opportunities for improving a developer's productivity. For instance, one could use such a classifier for providing recommendations at opportune moments when a developer is stuck and making no progress.", "title": "" }, { "docid": "2f98ed3e1ddc2eee9b8e4309c125a925", "text": "With the rise of social networking sites (SNSs), individuals not only disclose personal information but also share private information concerning others online. While shared information is co-constructed by self and others, personal and collective privacy boundaries become blurred. Thus there is an increasing concern over information privacy beyond the individual perspective. However, limited research has empirically examined if individuals are concerned about privacy loss not only of their own but their social ties’; nor is there an established instrument for measuring the collective aspect of individuals’ privacy concerns. In order to address this gap in existing literature, we propose a conceptual framework of individuals’ collective privacy concerns in the context of SNSs. 
Drawing on the Communication Privacy Management (CPM) theory (Petronio, 2002), we suggest three dimensions of collective privacy concerns, namely, collective information access, control and diffusion. This is followed by the development and empirical validation of a preliminary scale of SNS collective privacy concerns (SNSCPC). Structural model analyses confirm the three-dimensional conceptualization of SNSCPC and reveal antecedents of SNS users’ concerns over violations of the collective privacy boundaries. This paper serves as a starting point for theorizing privacy as a collective notion and for understanding online information disclosure as a result of social interaction and group influence.", "title": "" }, { "docid": "643abdb4e164e15d95185b40d7f57ff4", "text": "Some actions are freer than others, and the difference is palpably important in terms of inner process, subjective perception, and social consequences. Psychology can study the difference between freer and less free actions without making dubious metaphysical commitments. Human evolution seems to have created a relatively new, more complex form of action control that corresponds to popular notions of free will. It is marked by self-control and rational choice, both of which are highly adaptive, especially for functioning within culture. The processes that create these forms of free will may be biologically costly and therefore are only used occasionally, so that people are likely to remain only incompletely self-disciplined, virtuous, and rational.", "title": "" }, { "docid": "2271347e3b04eb5a73466aecbac4e849", "text": "[1] Robin Jia, Percy Liang. “Adversarial examples for evaluating reading comprehension systems.” In EMNLP 2017. [2] Caiming Xiong, Victor Zhong, Richard Socher. “DCN+ Mixed objective and deep residual coattention for question answering.” In ICLR 2018. [3] Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes. “Reading wikipedia to answer open-domain questions.” In ACL 2017. Check out more of our work at https://einstein.ai/research Method", "title": "" }, { "docid": "8ea1c8609b2c9e52574bed84236e77fa", "text": "We address the problem of person detection and tracking in crowded video scenes. While the detection of individual objects has been improved significantly over the recent years, crowd scenes remain particularly challenging for the detection and tracking tasks due to heavy occlusions, high person densities and significant variation in people's appearance. To address these challenges, we propose to leverage information on the global structure of the scene and to resolve all detections jointly. In particular, we explore constraints imposed by the crowd density and formulate person detection as the optimization of a joint energy function combining crowd density estimation and the localization of individual people. We demonstrate how the optimization of such an energy function significantly improves person detection and tracking in crowds. We validate our approach on a challenging video dataset of crowded scenes.", "title": "" }, { "docid": "c9b6f91a7b69890db88b929140f674ec", "text": "Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. 
To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.", "title": "" }, { "docid": "759bb2448f1d34d3742fec38f273135e", "text": "Although below-knee prostheses have been commercially available for some time, today's devices are completely passive, and consequently, their mechanical properties remain fixed with walking speed and terrain. A lack of understanding of the ankle-foot biomechanics and the dynamic interaction between an amputee and a prosthesis is one of the main obstacles in the development of a biomimetic ankle-foot prosthesis. In this paper, we present a novel ankle-foot emulator system for the study of human walking biomechanics. The emulator system is comprised of a high performance, force-controllable, robotic ankle-foot worn by an amputee interfaced to a mobile computing unit secured around his waist. We show that the system is capable of mimicking normal ankle-foot walking behaviour. An initial pilot study supports the hypothesis that the emulator may provide a more natural gait than a conventional passive prosthesis", "title": "" }, { "docid": "180a840a22191da6e9a99af3d41ab288", "text": "The hippocampal CA3 region is classically viewed as a homogeneous autoassociative network critical for associative memory and pattern completion. However, recent evidence has demonstrated a striking heterogeneity along the transverse, or proximodistal, axis of CA3 in spatial encoding and memory. Here we report the presence of striking proximodistal gradients in intrinsic membrane properties and synaptic connectivity for dorsal CA3. A decreasing gradient of mossy fiber synaptic strength along the proximodistal axis is mirrored by an increasing gradient of direct synaptic excitation from entorhinal cortex. Furthermore, we uncovered a nonuniform pattern of reactivation of fear memory traces, with the most robust reactivation during memory retrieval occurring in mid-CA3 (CA3b), the region showing the strongest net recurrent excitation. Our results suggest that heterogeneity in both intrinsic properties and synaptic connectivity may contribute to the distinct spatial encoding and behavioral role of CA3 subregions along the proximodistal axis.", "title": "" }, { "docid": "10baebc8e9a0071cbe73d66ccaec3a50", "text": "In this paper, the switched-capacitor concept is extended to the voltage-doubler discontinuous conduction mode SEPIC rectifier. As a result, a set of single-phase hybrid SEPIC power factor correction rectifiers able to provide lower voltage stress on the semiconductors and/or higher static gain, which can be easily increased with additional switched-capacitor cells, is proposed. 
Hence, these rectifiers could be employed in applications that require higher output voltage. In addition, the converters provide a high power factor and a reduced total harmonic distortion in the input current. The topology employs a three-state switch, and three different implementations are described, two being bridgeless versions, which can provide gains in relation to efficiency. The structures and the topological states, a theoretical analysis in steady state, a dynamic model for control, and a design example are reported herein. Furthermore, a prototype with specifications of 1000-W output power, 220-V input voltage, 800-V output voltage, and 50-kHz switching frequency was designed in order to verify the theoretical analysis.", "title": "" }, { "docid": "2eb6a9d5c964c01ed48fdc9350c76f6f", "text": "The deployment of distributed energy resources, combined with a more proactive demand side, is inducing a new paradigm in power system operation and electricity markets. Within a consumer-centric market framework, peer-to-peer approaches have gained substantial interest. Peer-to-peer markets rely on multi-bilateral direct negotiation among all players to match supply and demand, and with product differentiation. These markets can yield a complete mapping of exchanges onto the grid, hence allowing to rethink our approach to sharing costs related to usage of common infrastructure and services. We propose here to attribute such costs in a number of alternative ways that reflects different views on usage of the grid and on cost allocation, i.e., uniformly and based on the electrical distance between players. Since attribution mechanisms are defined in an exogenous manner and made transparent they eventually affect the trades of the market participants and related grid usage. The interest of our approach is illustrated on a test case using the IEEE 39 bus test system, underlying the impact of attribution mechanisms on trades and grid usage.", "title": "" }, { "docid": "6f1d7e2faff928c80898bfbf05ac0669", "text": "This study examined level of engagement with Disney Princess media/products as it relates to gender-stereotypical behavior, body esteem (i.e. body image), and prosocial behavior during early childhood. Participants consisted of 198 children (Mage  = 58 months), who were tested at two time points (approximately 1 year apart). Data consisted of parent and teacher reports, and child observations in a toy preference task. Longitudinal results revealed that Disney Princess engagement was associated with more female gender-stereotypical behavior 1 year later, even after controlling for initial levels of gender-stereotypical behavior. Parental mediation strengthened associations between princess engagement and adherence to female gender-stereotypical behavior for both girls and boys, and for body esteem and prosocial behavior for boys only.", "title": "" }, { "docid": "bf241075beac4fedfb0ad9f8551c652d", "text": "This paper discloses a new very broadband compact transition between double-ridge waveguide and coaxial line. The transition includes an original waveguide to coaxial mode converter and modified impedance transformer. Very good performance is predicted theoretically and confirmed experimentally over a 3:1 bandwidth.", "title": "" }, { "docid": "863d5bae0f02e39d071e3b7fdd71bf04", "text": "UNLABELLED\nAfter successful cancer pain initiatives, efforts have been recently made to liberalize the use of opioids for the treatment of chronic nonmalignant pain. 
However, the goals for this treatment and its place among other available treatments are still unclear. Cancer pain treatment is aimed at patient comfort and is validated by objective disease severity. For chronic nonmalignant pain, however, comfort alone is not an adequate treatment goal, and pain is not usually proportional to objective disease severity. Therefore, confusion about treatment goals and doubts about the reality of nonmalignant pain entangle therapeutic efforts. We present a case history to demonstrate that this lack of proportionality fosters fears about malingering, exaggeration, and psychogenic pain among providers. Doubt concerning the reality of patients' unrelieved chronic nonmalignant pain has allowed concerns about addiction to dominate discussions of treatment. We propose alternate patient-centered principles to guide efforts to relieve chronic nonmalignant pain, including accept all patient pain reports as valid but negotiate treatment goals early in care, avoid harming patients, and incorporate chronic opioids as one part of the treatment plan if they improve the patient's overall health-related quality of life. Although an outright ban on opioid use in chronic nonmalignant pain is no longer ethically acceptable, ensuring that opioids provide overall benefit to patients requires significant time and skill. Patients with chronic nonmalignant pain should be assessed and treated for concurrent psychiatric disorders, but those with disorders are entitled to equivalent efforts at pain relief. The essential question is not whether chronic nonmalignant pain is real or proportional to objective disease severity, but how it should be managed so that the patient's overall quality of life is optimized.\n\n\nPERSPECTIVE\nThe management of chronic nonmalignant pain is moving from specialty settings into primary care. Primary care providers need an ethical framework within which to adopt the principles of palliative care to this population.", "title": "" }, { "docid": "17cbead431425018818b649b1b69b527", "text": "In this letter, a flexible memory simulator - NVMain 2.0, is introduced to help the community for modeling not only commodity DRAMs but also emerging memory technologies, such as die-stacked DRAM caches, non-volatile memories (e.g., STT-RAM, PCRAM, and ReRAM) including multi-level cells (MLC), and hybrid non-volatile plus DRAM memory systems. Compared to existing memory simulators, NVMain 2.0 features a flexible user interface with compelling simulation speed and the capability of providing sub-array-level parallelism, fine-grained refresh, MLC and data encoder modeling, and distributed energy profiling.", "title": "" } ]
scidocsrr
9ebd8d3fd285b5f797dab5c1aca6ed97
Understanding Search-Engine Optimization
[ { "docid": "e135ec51f4406f42625c6610ca926b7b", "text": "Search engines became a de facto place to start information acquisition on the Web. Though due to web spam phenomenon, search results are not always as good as desired. Moreover, spam evolves that makes the problem of providing high quality search even more challenging. Over the last decade research on adversarial information retrieval has gained a lot of interest both from academia and industry. In this paper we present a systematic review of web spam detection techniques with the focus on algorithms and underlying principles. We categorize all existing algorithms into three categories based on the type of information they use: content-based methods, link-based methods, and methods based on non-traditional data such as user behaviour, clicks, HTTP sessions. In turn, we perform a subcategorization of link-based category into five groups based on ideas and principles used: labels propagation, link pruning and reweighting, labels refinement, graph regularization, and featurebased. We also define the concept of web spam numerically and provide a brief survey on various spam forms. Finally, we summarize the observations and underlying principles applied for web spam detection.", "title": "" } ]
[ { "docid": "d8042183e064ffba69b54246b17b9ff4", "text": "Offshore software development is a new trend in the information technology (IT) outsourcing field, fueled by the globalization of IT and the improvement of telecommunication facilities. Countries such as India, Ireland, and Israel have established a significant presence in this market. In this article, we discuss how software processes affect offshore development projects. We use data from projects in India, and focus on three measures of project performance: effort, elapsed time, and software rework.", "title": "" }, { "docid": "4ccc1e8988941745f296db4f548dd11d", "text": "During recent decades increasing interest has been shown in the development of bioelectronic sensors based on ion sensitive field effect transistors (ISFETs). Many ISFET-based pH sensors have been commercialized and attempts have also been made to commercialize ISFET based bioelectronic sensors for applications in the fields of medical, environmental, food safety, military and biotechnology areas. The growing interest for development of these sensors is due to the fact that they are manufactured by means of semiconductor technology which has already entered into multifunctional “More than Moore” regime that is increasingly multidisciplinary in nature. Technology involved has, therefore, innovative potential that may result in the appearance of new sensor and device technologies in future. The basic theoretical principles of ISFET usage in bioanalytical practice, the operation principle of ISFET, its modeling and a brief introduction of ISFET technology are considered in this review.", "title": "" }, { "docid": "161e66a9e10df9c31b5920788ad8e791", "text": "Our goal is to develop a compositional real-time scheduling framework so that global (system-level) timing properties can be established by composing independently (specified and) analyzed local (component-level) timing properties. The two essential problems in developing such a framework are: (1) to abstract the collective real-time requirements of a component as a single real-time requirement and (2) to compose the component demand abstraction results into the system-level real-time requirement. In our earlier work, we addressed the problems using the Liu and Layland periodic model. In this paper, we address the problems using another well-known model, a bounded-delay resource partition model, as a solution model to the problems. To extend our framework to this model, we develop an exact feasibility condition for a set of bounded-delay tasks over a bounded-delay resource partition. In addition, we present simulation results to evaluate the overheads that the component demand abstraction results incur in terms of utilization increase. We also present utilization bound results on a bounded-delay resource model.", "title": "" }, { "docid": "e2b1c4da96ea677fd50aa44abc86d119", "text": "The technology of automatic document summarization is maturing and may provide a solution to the information overload problem. Nowadays, document summarization plays an important role in information retrieval. With a large volume of documents, presenting the user with a summary of each document greatly facilitates the task of finding the desired documents. 
Document summarization is a process of automatically creating a compressed version of a given document that provides useful information to users, and multi-document summarization aims to produce a summary delivering the majority of information content from a set of documents about an explicit or implicit main topic. In our study we focus on sentence-based extractive document summarization. We propose a generic document summarization method which is based on sentence clustering. The proposed approach continues the sentence-clustering-based extractive summarization methods proposed in Alguliev [Alguliev, R. M., Aliguliyev, R. M., Bagirov, A. M. (2005). Global optimization in the summarization of text documents. Automatic Control and Computer Sciences 39, 42–47], Aliguliyev [Aliguliyev, R. M. (2006). A novel partitioning-based clustering method and generic document summarization. In Proceedings of the 2006 IEEE/WIC/ACM international conference on web intelligence and intelligent agent technology (WI–IAT 2006 Workshops) (WI–IATW’06), 18–22 December (pp. 626–629) Hong Kong, China], Alguliev and Alyguliev [Alguliev, R. M., Alyguliev, R. M. (2007). Summarization of text-based documents with a determination of latent topical sections and information-rich sentences. Automatic Control and Computer Sciences 41, 132–140], and Aliguliyev [Aliguliyev, R. M. (2007). Automatic document summarization by sentence extraction. Journal of Computational Technologies 12, 5–15.]. The purpose of the present paper is to show that the summarization result depends not only on the optimized function but also on the similarity measure. The experimental results on open benchmark datasets from DUC01 and DUC02 show that our proposed approach can improve the performance compared to state-of-the-art summarization approaches. 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d0e977ab137cd004420bda28bd0b11be", "text": "This study investigates the roles of cohesion and coherence in evaluations of essay quality. Cohesion generally has a facilitative effect on text comprehension and is assumed to be related to essay coherence. By contrast, recent studies of essay writing have demonstrated that computational indices of cohesion are not predictive of evaluations of writing quality. This study investigates expert ratings of individual text features, including coherence, in order to examine their relation to evaluations of holistic essay quality. The results suggest that coherence is an important attribute of overall essay quality, but that expert raters evaluate coherence based on the absence of cohesive cues in the essays rather than their presence. This finding has important implications for text understanding and the role of coherence in writing quality.", "title": "" }, { "docid": "93c2ed30659e6b9c2020866cd3670705", "text": "Longitudinal melanonychia (LM) is a pigmented longitudinal band of the nail unit, which results from pigment deposition, generally melanin, in the nail plate. Such a lesion is frequently observed in specific ethnic groups, such as Asians and African Americans, typically affecting multiple nails. When LM involves a single nail plate, it may be the sign of a benign lesion within the matrix, such as a melanocytic nevus, simple lentigo, or nail matrix melanocyte activation. However, the possibility of melanoma must be considered.
Nail melanoma in children is exceptionally rare and only 2 cases have been reported in fairskinned Caucasian individuals.", "title": "" }, { "docid": "cc99e806503b158aa8a41753adecd50c", "text": "Semantic Mutation Testing (SMT) is a technique that aims to capture errors caused by possible misunderstandings of the semantics of a description language. It is intended to target a class of errors which is different from those captured by traditional Mutation Testing (MT). This paper describes our experiences in the development of an SMT tool for the C programming language: SMT-C. In addition to implementing the essential requirements of SMT (generating semantic mutants and running SMT analysis) we also aimed to achieve the following goals: weak MT/SMT for C, good portability between different configurations, seamless integration into test routines of programming with C and an easy to use front-end.", "title": "" }, { "docid": "5b0e088e2bddd0535bc9d2dfbfeb0298", "text": "We had previously shown that regularization principles lead to approximation schemes that are equivalent to networks with one layer of hidden units, called regularization networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known radial basis functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends radial basis functions (RBF) to hyper basis functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of projection pursuit regression, and several types of neural networks. We propose to use the term generalized regularization networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In summary, different multilayer networks with one hidden layer, which we collectively call generalized regularization networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are (1) radial basis functions that can be generalized to hyper basis functions, (2) some tensor product splines, and (3) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions, and several perceptron-like neural networks with one hidden layer.", "title": "" }, { "docid": "091eedcd69373f99419a745f2215e345", "text": "Society is increasingly reliant upon complex and interconnected cyber systems to conduct daily life activities. From personal finance to managing defense capabilities to controlling a vast web of aircraft traffic, digitized information systems and software packages have become integrated at virtually all levels of individual and collective activity. 
While such integration has been met with immense increases in efficiency of service delivery, it has also been subject to a diverse body of threats from nefarious hackers, groups, and even state government bodies. Such cyber threats have shifted over time to affect various cyber functionalities, such as with Direct Denial of Service (DDoS), data theft, changes to data code, infection via computer virus, and many others.", "title": "" }, { "docid": "2df35b05a40a646ba6f826503955601a", "text": "This paper describes a new prototype system for detecting the demeanor of patients in emergency situations using the Intel RealSense camera system [1]. It describes how machine learning, a support vector machine (SVM) and the RealSense facial detection system can be used to track patient demeanour for pain monitoring. In a lab setting, the application has been trained to detect four different intensities of pain and provide demeanour information about the patient's eyes, mouth, and agitation state. Its utility as a basis for evaluating the condition of patients in situations using video, machine learning and 5G technology is discussed.", "title": "" }, { "docid": "7749b46bc899b3d876d63d8f3d0981ea", "text": "This paper details the control and guidance architecture for the T-wing tail-sitter unmanned air vehicle, (UAV). The T-wing is a vertical take off and landing (VTOL) UAV that is capable of both wing-born horizontal flight and propeller born vertical mode flight including hover and descent. During low-speed vertical flight the T-wing uses propeller wash over its aerodynamic surfaces to effect control. At the lowest level, the vehicle uses a mixture of classical and LQR controllers for angular rate and translational velocity control. These low-level controllers are directed by a series of proportional guidance controllers for the vertical, horizontal and transition flight modes that allow the vehicle to achieve autonomous waypoint navigation. The control design for the T-wing is complicated by the large differences in vehicle dynamics between vertical and horizontal flight; the difficulty of accurately predicting the low-speed vehicle aerodynamics; and the basic instability of the vertical flight mode. This paper considers the control design problem for the T-wing in light of these factors. In particular it focuses on the integration of all the different types and levels of controllers into a full flight-vehicle control system.", "title": "" }, { "docid": "4306bc8a6f1e1bab2ffeb175d7dfeb0f", "text": "This paper describes the design and evaluation of a method for developing a chat-oriented dialog system by utilizing real human-to-human conversation examples from movie scripts and Twitter conversations. The aim of the proposed method is to build a conversational agent that can interact with users in as natural a fashion as possible, while reducing the time requirement for database design and collection. A number of the challenging design issues we faced are described, including (1) constructing an appropriate dialog corpora from raw movie scripts and Twitter data, and (2) developing an multi domain chat-oriented dialog management system which can retrieve a proper system response based on the current user query. 
To build a dialog corpus, we propose a unit of conversation called a tri-turn (a trigram conversation turn), as well as extraction and semantic similarity analysis techniques to help ensure that the content extracted from raw movie/drama script files forms appropriate dialog-pair (query-response) examples. The constructed dialog corpora are then utilized in a data-driven dialog management system. Here, various approaches are investigated including example-based (EBDM) and response generation using phrase-based statistical machine translation (SMT). In particular, we use two EBDM approaches: syntactic-semantic similarity retrieval and TF-IDF based cosine similarity retrieval. Experiments are conducted to compare and contrast EBDM and SMT approaches in building a chat-oriented dialog system, and we investigate a combined method that addresses the advantages and disadvantages of both approaches. System performance was evaluated based on objective metrics (semantic similarity and cosine similarity) and human subjective evaluation from a small user study. Experimental results show that the proposed filtering approach effectively improves the performance. Furthermore, the results also show that by combining both EBDM and SMT approaches, we could overcome the shortcomings of each. key words: dialog corpora, response generation, example-based dialog modeling, semantic similarity, cosine similarity, machine translation", "title": "" }, { "docid": "e4e0e01b3af99dfd88ff03a1057b40d3", "text": "There is a tension between user and author control of narratives in multimedia systems and virtual environments. Reducing the interactivity gives the author more control over when and how users experience key events in a narrative, but may lead to less immersion and engagement. Allowing the user to freely explore the virtual space introduces the risk that important narrative events will never be experienced. One approach to striking a balance between user freedom and author control is adaptation of narrative event presentation (i.e. changing the time, location, or method of presentation of a particular event in order to better communicate with the user). In this paper, we describe the architecture of a system capable of dynamically supporting narrative event adaptation. We also report results from two studies comparing adapted narrative presentation with two other forms of unadapted presentation - events with author selected views (movie), and events with user selected views (traditional VE). An analysis of user performance and feedback offers support for the hypothesis that adaptation can improve comprehension of narrative events in virtual environments while maintaining a sense of user control.", "title": "" }, { "docid": "c29b91a5b580a620bb245519695a6cd9", "text": "It is commonly believed that datacenter networking software must sacrifice generality to attain high performance. The popularity of specialized distributed systems designed specifically for niche technologies such as RDMA, lossless networks, FPGAs, and programmable switches testifies to this belief. In this paper, we show that such specialization is unnecessary. eRPC is a new general-purpose remote procedure call (RPC) library that offers performance comparable to specialized systems, while running on commodity CPUs in traditional datacenter networks based on either lossy Ethernet or lossless fabrics. eRPC performs well in three key metrics: message rate for small messages; bandwidth for large messages; and scalability to a large number of nodes and CPU cores.
It handles packet loss, congestion, and background request execution. In microbenchmarks, one CPU core can handle up to 5 million small eRPC requests per second, or saturate a 40 Gbps link with large messages. We port a production-grade implementation of Raft state machine replication to eRPC without modifying the core Raft source code. We achieve 5.5 μs of replication latency on lossy Ethernet, which is faster or comparable to specialized replication systems that use programmable switches, FPGAs, or RDMA.", "title": "" }, { "docid": "4681e8f07225e305adfc66cd1b48deb8", "text": "Collaborative work among students, while an important topic of inquiry, needs further treatment as we still lack the knowledge regarding obstacles that students face, the strategies they apply, and the relations among personal and group aspects. This article presents a diary study of 54 master’s students conducting group projects across four semesters. A total of 332 diary entries were analysed using the C5 model of collaboration that incorporates elements of communication, contribution, coordination, cooperation and collaboration. Quantitative and qualitative analyses show how these elements relate to one another for students working on collaborative projects. It was found that face-to-face communication related positively with satisfaction and group dynamics, whereas online chat correlated positively with feedback and closing the gap. Managing scope was perceived to be the most common challenge. The findings suggest the varying affordances and drawbacks of different methods of communication, collaborative work styles and the strategies of group members.", "title": "" }, { "docid": "48dfee242d5daf501c72e14e6b05c3ba", "text": "One possible alternative to standard in vivo exposure may be virtual reality exposure. Virtual reality integrates real-time computer graphics, body tracking devices, visual displays, and other sensory input devices to immerse a participant in a computer-generated virtual environment. Virtual reality exposure (VRE) is potentially an efficient and cost-effective treatment of anxiety disorders. VRE therapy has been successful in reducing the fear of heights in the first known controlled study of virtual reality in the treatment of a psychological disorder. Outcome was assessed on measures of anxiety, avoidance, attitudes, and distress. Significant group differences were found on all measures such that the VRE group was significantly improved at posttreatment but the control group was unchanged. The efficacy of virtual reality exposure therapy was also supported for the fear of flying in a case study. The potential for virtual reality exposure treatment for these and other disorders is explored.", "title": "" }, { "docid": "b5ee9aa4463d313c9f22e085af4fe541", "text": "A comprehensive first visit with a gynecologist can lay the groundwork for positive health outcomes throughout a female adolescent's life. This visit gives the clinician the opportunity to gauge both the physical and psychosocial health and development of the adolescent patient. Physical screening should be combined with an assessment of the patient's environment and risk behaviors along with counseling on healthy behavior for both the patient and her parent or guardian.", "title": "" }, { "docid": "6214002bfc73399b233452fc1ac5f85e", "text": "Touchscreen-based mobile devices (TMDs) are one of the most popular and widespread kind of electronic device. 
Many manufacturers have published their own design principles as guidelines for developers. Each platform has specific constraints and recommendations for software development, especially in terms of the user interface. Four sets of design principles from iOS, Windows Phone, Android and Tizen OS have been mapped against a set of usability heuristics for TMDs. The mapping shows that the usability heuristics for TMDs cover almost every design pattern with the addition of two new dimensions: user experience and cognitive load. These new dimensions will be considered when updating the proposal of usability heuristics for TMDs.", "title": "" }, { "docid": "b856143940b19888422c0c8bf5a3b441", "text": "Most statistical machine translation systems use phrase-to-phrase translations to capture local context information, leading to better lexical choice and more reliable local reordering. The quality of the phrase alignment is crucial to the quality of the resulting translations. Here, we propose a new phrase alignment method, not based on the Viterbi path of word alignment models. Phrase alignment is viewed as a sentence splitting task. For a given splitting of the source sentence (source phrase, left segment, right segment) find a splitting for the target sentence, which optimizes the overall sentence alignment probability. Experiments on different translation tasks show that this phrase alignment method leads to highly competitive translation results.", "title": "" }, { "docid": "f25aef35500ed74e5ef41d5e45d2e2df", "text": "With recommender systems, users receive items recommended on the basis of their profile. New users experience the cold start problem: as their profile is very poor, the system performs very poorly. In this paper, classical new user cold start techniques are improved by exploiting the cold user data, i.e. the user data that is readily available (e.g. age, occupation, location, etc.), in order to automatically associate the new user with a better first profile. Relying on the existing α-community spaces model, a rule-based induction process is used and a recommendation process based on the "level of agreement" principle is defined. The experiments show that the quality of recommendations compares to that obtained after a classical new user technique, while the new user effort is smaller as no initial ratings are asked.", "title": "" } ]
scidocsrr
5ac7b32e29ba3d4e6ffe565e21edcdeb
Clinical Abbreviation Disambiguation Using Neural Word Embeddings
[ { "docid": "270e593aa89fb034d0de977fe6d618b2", "text": "According to the website AcronymFinder.com which is one of the world's largest and most comprehensive dictionaries of acronyms, an average of 37 new human-edited acronym definitions are added every day. There are 379,918 acronyms with 4,766,899 definitions on that site up to now, and each acronym has 12.5 definitions on average. It is a very important research topic to identify what exactly an acronym means in a given context for document comprehension as well as for document retrieval. In this paper, we propose two word embedding based models for acronym disambiguation. Word embedding is to represent words in a continuous and multidimensional vector space, so that it is easy to calculate the semantic similarity between words by calculating the vector distance. We evaluate the models on MSH Dataset and ScienceWISE Dataset, and both models outperform the state-of-art methods on accuracy. The experimental results show that word embedding helps to improve acronym disambiguation.", "title": "" }, { "docid": "062c970a14ac0715ccf96cee464a4fec", "text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.", "title": "" } ]
[ { "docid": "8fe78f684d75005477e3a4b1e6cf78d1", "text": "Yamazaki et al. [1] investigated the effect of prediabetes on subsequent pancreatic fat accumulation, based on the hypothesis that pancreatic fat was a manifestation of disturbed glucose metabolism. Prediabetes was defined as fasting plasma glucose of 100–125 mg/dl or hemoglobin A1c of 5.7–6.4%, and the change of pancreatic fat was evaluated by computed tomography (CT). A total of 198 nondiabetic participants were composed of 48 prediabetes and 150 non-prediabetes participants. By multiple linear regression analysis, baseline prediabetes was associated with future pancreatic fat accumulation with beta value (95% confidence interval) of 3.14 (1.25–5.03). In addition, body mass index (BMI) and impaired fasting glucose (IFG) were also risk factors of pancreatic fat accumulation. I have some queries on their study. First, the authors used prediabetes or IFG as an independent variable for the change of pancreatic fat accumulation, by adjusting several variables. As impaired glucose tolerance (IGT) value could not be used as an independent variable, the lack of IGT information for the definition of prediabetes should be specified by further study [2]. Second, BMI was selected as a significant independent variable for the change of pancreatic fat accumulation. I suppose that the amount of visceral fat at baseline by CT could also be used as an independent variable. Although liver fat did not become a predictor, visceral fat as another obesity indicator should be checked for the analysis [3]. Finally, the authors selected multiple linear regression analysis. I think that the authors could use prediabetes indictors at baseline as continuous variables. In addition, the change of prediabetes information can be used in combination with the change of pancreatic fat accumulation. Anyway, further studies are needed to know the causal association to confirm the hypothesis that pancreatic fat is a manifestation of disturbed glucose metabolism.", "title": "" }, { "docid": "11e438be8bc9a00f636f15d1ff4266e5", "text": "Cyber attacks are growing in frequency and severity. Over the past year alone we have witnessed massive data breaches that stole personal information of millions of people and wide-scale ransomware attacks that paralyzed critical infrastructure of several countries. Combating the rising cyber threat calls for a multi-pronged strategy, which includes predicting when these attacks will occur. The intuition driving our approach is this: during the planning and preparation stages, hackers leave digital traces of their activities on both the surface web and dark web in the form of discussions on platforms like hacker forums, social media, blogs and the like. These data provide predictive signals that allow anticipating cyber attacks. In this paper, we describe machine learning techniques based on deep neural networks and autoregressive time series models that leverage external signals from publicly available Web sources to forecast cyber attacks. Performance of our framework across ground truth data over real-world forecasting tasks shows that our methods yield a significant lift or increase of F1 for the top signals on predicted cyber attacks. 
Our results suggest that, when deployed, our system will be able to provide an effective line of defense against various types of targeted cyber attacks.", "title": "" }, { "docid": "cc46973ff9bbaf540f3e8facbd44de68", "text": "Molecular \"fingerprints\" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph (atoms, bonds, distances, etc.), which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.", "title": "" }, { "docid": "9e2dc31edf639e1201c3a3d59f3381af", "text": "The AMBA-AHB Multilayer Bus matrix Self-Motivated Arbitration scheme proposed three methods for data transmitting from master to slave for on-chip communication. Multilayer advanced high-performance bus (ML-AHB) busmatrix employs slave-side arbitration. Slave-side arbitration is different from master-side arbitration in terms of request and grant signals since, in the former, the master merely starts a burst transaction and waits for the slave response to proceed to the next transfer. Therefore, in the former, the unit of arbitration can be a transaction or a transfer. However, the ML-AHB busmatrix of ARM offers only transfer-based fixed-priority and round-robin arbitration schemes. In this paper, we propose the design and implementation of a flexible arbiter for the ML-AHB busmatrix to support three priority policies (fixed priority, round robin, and dynamic priority) and three data multiplexing modes (transfer, transaction, and desired transfer length). In total, there are nine possible arbitration schemes. The proposed arbiter, which is self-motivated (SM), selects one of the nine possible arbitration schemes based upon the priority-level notifications and the desired transfer length from the masters so that arbitration leads to the maximum performance. Experimental results show that, although the area overhead of the proposed SM arbitration scheme is 9%–25% larger than those of the other arbitration schemes, our arbiter improves the throughput by 14%–62% compared to other schemes.", "title": "" }, { "docid": "0658cfdb376ad21f0d02ea77c5dac20c", "text": "Since 2004 the European Commission’s Joint Research Centre (JRC) has been analysing the online version of printed media in over twenty languages and has automatically recognised and compiled large amounts of named entities (persons and organisations) and their many name variants. The collected variants not only include standard spellings in various countries, languages and scripts, but also frequently found spelling mistakes or lesser used name forms, all occurring in real-life text (e.g. Benjamin/Binyamin/Bibi/Benyamín/Biniamin/Беньямин/نیماینب Netanyahu/Netanjahu/Nétanyahou/Netahny/Нетаньяху/وهاینتن). This entity name variant data, known as JRC-Names, has been available for public download since 2011.
In this article, we report on our efforts to render JRC-Names as Linked Data (LD), using the lexicon model for ontologies lemon. Besides adhering to Semantic Web standards, this new release goes beyond the initial one in that it includes titles found next to the names, as well as date ranges when the titles and the name variants were found. It also establishes links towards existing datasets, such as DBpedia and Talk-Of-Europe. As multilingual linguistic linked dataset, JRC-Names can help bridge the gap between structured data and natural languages, thus supporting large-scale data integration, e.g. cross-lingual mapping, and web-based content processing, e.g. entity linking. JRC-Names is publicly available through the dataset catalogue of the European Union’s Open Data Portal.", "title": "" }, { "docid": "dbc11b8d76eb527444ead3b2168aa2c2", "text": "In this work, we present a novel approach to ontology reasoning that is based on deep learning rather than logic-based formal reasoning. To this end, we introduce a new model for statistical relational learning that is built upon deep recursive neural networks, and give experimental evidence that it can easily compete with, or even outperform, existing logic-based reasoners on the task of ontology reasoning. More precisely, we compared our implemented system with one of the best logic-based ontology reasoners at present, RDFox, on a number of large standard benchmark datasets, and found that our system attained high reasoning quality, while being up to two orders of magnitude faster.", "title": "" }, { "docid": "553de71fcc3e4e6660015632eee751b1", "text": "Data governance is an emerging research area getting attention from information systems (IS) scholars and practitioners. In this paper I take a look at existing literature and current state-of-the-art in data governance. I found out that there is only a limited amount of existing scientific literature, but many practitioners are already treating data as a valuable corporate asset. The paper describes an action design research project that will be conducted in 2012-2016 and is expected to result in a generic data governance framework.", "title": "" }, { "docid": "0c34e8355f1635b3679159abd0a82806", "text": "Bar charts are an effective way to convey numeric information, but today's algorithms cannot parse them. Existing methods fail when faced with even minor variations in appearance. Here, we present DVQA, a dataset that tests many aspects of bar chart understanding in a question answering framework. Unlike visual question answering (VQA), DVQA requires processing words and answers that are unique to a particular bar chart. State-of-the-art VQA algorithms perform poorly on DVQA, and we propose two strong baselines that perform considerably better. Our work will enable algorithms to automatically extract numeric and semantic information from vast quantities of bar charts found in scientific publications, Internet articles, business reports, and many other areas.", "title": "" }, { "docid": "9049805c56c9b7fc212fdb4c7f85dfe1", "text": "Intentions (6) Do all the important errands", "title": "" }, { "docid": "76e6c05e41c4e6d3c70c8fedec5c323b", "text": "Commercial light field cameras provide spatial and angular information, but their limited resolution becomes an important problem in practical use. 
In this letter, we present a novel method for light field image super-resolution (SR) to simultaneously up-sample both the spatial and angular resolutions of a light field image via a deep convolutional neural network. We first augment the spatial resolution of each subaperture image by a spatial SR network, then novel views between super-resolved subaperture images are generated by three different angular SR networks according to the novel view locations. We improve both the efficiency of training and the quality of angular SR results by using weight sharing. In addition, we provide a new light field image dataset for training and validating the network. We train our whole network end-to-end, and show state-of-the-art performances on quantitative and qualitative evaluations.", "title": "" }, { "docid": "38fccb4ef1b53ccc8464beaf74db2b4b", "text": "The novel concept of total generalized variation of a function u is introduced and some of its essential properties are proved. Differently from the bounded variation semi-norm, the new concept involves higher order derivatives of u. Numerical examples illustrate the high quality of this functional as a regularization term for mathematical imaging problems. In particular this functional selectively regularizes on different regularity levels and does not lead to a staircasing effect.", "title": "" }, { "docid": "054443e445ec15d7a54215d3d201bb04", "text": "In this study, a survey of the scientific literature in the field of optimum and preferred human joint angles in automotive sitting posture was conducted by referring to thirty different sources published between 1940 and today. The strategy was to use only sources with numerical angle data in combination with keywords. The aim of the research was to detect commonly used joint angles in interior car design. The main analysis was on data measurement, usability and comparability of the different studies. In addition, the focus was on the reasons for the differently described results. It was found that there is still a lack of information in methodology and description of background. Due to these reasons published data is not always usable to design a modern ergonomic car environment. As a main result of our literature analysis we suggest undertaking further research in the field of biomechanics and ergonomics to work out scientific based and objectively determined \"optimum\" joint angles in automotive sitting position.", "title": "" }, { "docid": "c166a5ac33c4bf0ffe055578f016e72f", "text": "The anatomical location of imaging features is of crucial importance for accurate diagnosis in many medical tasks. Convolutional neural networks (CNN) have had huge successes in computer vision, but they lack the natural ability to incorporate the anatomical location in their decision making process, hindering success in some medical image analysis tasks. In this paper, to integrate the anatomical location information into the network, we propose several deep CNN architectures that consider multi-scale patches or take explicit location features while training. We apply and compare the proposed architectures for segmentation of white matter hyperintensities in brain MR images on a large dataset. As a result, we observe that the CNNs that incorporate location information substantially outperform a conventional segmentation method with handcrafted features as well as CNNs that do not integrate location information. 
On a test set of 50 scans, the best configuration of our networks obtained a Dice score of 0.792, compared to 0.805 for an independent human observer. Performance levels of the machine and the independent human observer were not statistically significantly different (p-value = 0.06).", "title": "" }, { "docid": "527e9ca65382700e13f27b20f1f923a8", "text": "In this work we introduce the new problem of finding time seriesdiscords. Time series discords are subsequences of longer time series that are maximally different to all the rest of the time series subsequences. They thus capture the sense of the most unusual subsequence within a time series. While discords have many uses for data mining, they are particularly attractive as anomaly detectors because they only require one intuitive parameter (the length of the subsequence) unlike most anomaly detection algorithms that typically require many parameters. While the brute force algorithm to discover time series discords is quadratic in the length of the time series, we show a simple algorithm that is three to four orders of magnitude faster than brute force, while guaranteed to produce identical results. We evaluate our work with a comprehensive set of experiments on diverse data sources including electrocardiograms, space telemetry, respiration physiology, anthropological and video datasets.", "title": "" }, { "docid": "4ad4cd6cc77dae0fea4f2cc05651cec4", "text": "BACKGROUND\nDementia is a clinical syndrome with a number of different causes which is characterised by deterioration in cognitive, behavioural, social and emotional functions. Pharmacological interventions are available but have limited effect to treat many of the syndrome's features. Less research has been directed towards non-pharmacological treatments. In this review, we examined the evidence for effects of music-based interventions as a treatment.\n\n\nOBJECTIVES\nTo assess the effects of music-based therapeutic interventions for people with dementia on emotional well-being including quality of life, mood disturbance or negative affect, behavioural problems, social behaviour, and cognition at the end of therapy and four or more weeks after the end of treatment.\n\n\nSEARCH METHODS\nWe searched ALOIS, the Specialized Register of the Cochrane Dementia and Cognitive Improvement Group (CDCIG) on 14 April 2010 using the terms: music therapy, music, singing, sing, auditory stimulation. Additional searches were also carried out on 3 July 2015 in the major healthcare databases MEDLINE, Embase, psycINFO, CINAHL and LILACS; and in trial registers and grey literature sources. On 12 April 2016, we searched the major databases for new studies for future evaluation.\n\n\nSELECTION CRITERIA\nWe included randomized controlled trials of music-based therapeutic interventions (at least five sessions) for people with dementia that measured any of our outcomes of interest. Control groups either received usual care or other activities.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo reviewers worked independently to screen the retrieved studies against the inclusion criteria and then to extract data and assess methodological quality of the included studies. If necessary, we contacted trial authors to ask for additional data, including relevant subscales, or for other missing information. We pooled data using random-effects models.\n\n\nMAIN RESULTS\nWe included 17 studies. Sixteen studies with a total of 620 participants contributed data to meta-analyses. 
Participants in the studies had dementia of varying degrees of severity, but all were resident in institutions. Five studies delivered an individual music intervention; in the others, the intervention was delivered to groups of participants. Most interventions involved both active and receptive musical elements. The methodological quality of the studies varied. All were at high risk of performance bias and some were at high risk of detection or other bias. At the end of treatment, we found low-quality evidence that music-based therapeutic interventions may have little or no effect on emotional well-being and quality of life (standardized mean difference, SMD 0.32, 95% CI -0.08 to 0.71; 6 studies, 181 participants), overall behaviour problems (SMD -0.20, 95% CI -0.56 to 0.17; 6 studies, 209 participants) and cognition (SMD 0.21, 95% CI -0.04 to 0.45; 6 studies, 257 participants). We found moderate-quality evidence that they reduce depressive symptoms (SMD -0.28, 95% CI -0.48 to -0.07; 9 studies, 376 participants), but do not decrease agitation or aggression (SMD -0.08, 95% CI -0.29 to 0.14; 12 studies, 515 participants). The quality of the evidence on anxiety and social behaviour was very low, so effects were very uncertain. The evidence for all long-term outcomes was also of very low quality.\n\n\nAUTHORS' CONCLUSIONS\nProviding people with dementia with at least five sessions of a music-based therapeutic intervention probably reduces depressive symptoms but has little or no effect on agitation or aggression. There may also be little or no effect on emotional well-being or quality of life, overall behavioural problems and cognition. We are uncertain about effects on anxiety or social behaviour, and about any long-term effects. Future studies should employ larger sample sizes, and include all important outcomes, in particular 'positive' outcomes such as emotional well-being and social outcomes. Future studies should also examine the duration of effects in relation to the overall duration of treatment and the number of sessions.", "title": "" }, { "docid": "d8d894979e233b8fcf557b1b0f8050c0", "text": "Cannabinoids are a class of chemical compounds with a wide spectrum of pharmacological effects, mediated by two specific plasma membrane receptors (CB1 and CB2). Recently, CB1 and CB2 expression levels have been detected in human tumors, including those of brain. Cannabinoids-endocannabinoids exert anti-inflammatory, anti-proliferative, anti-invasive, anti-metastatic and pro-apoptotic effects in different cancer types, both in vitro and in vivo in animal models, after local or systemic administration. We present the available experimental and clinical data, to date, regarding the antitumor action of cannabinoids on the tumorigenesis of gliomas.", "title": "" }, { "docid": "e81cffe3f2f716520ede92d482ddab34", "text": "An active research trend is to exploit the consensus mechanism of cryptocurrencies to secure the execution of distributed applications. In particular, some recent works have proposed fair lotteries which work on Bitcoin. These protocols, however, require a deposit from each player which grows quadratically with the number of players. We propose a fair lottery on Bitcoin which only requires a constant deposit.", "title": "" }, { "docid": "a02fb872137fe7bc125af746ba814849", "text": "23% of the total global burden of disease is attributable to disorders in people aged 60 years and older. 
Although the proportion of the burden arising from older people (≥60 years) is highest in high-income regions, disability-adjusted life years (DALYs) per head are 40% higher in low-income and middle-income regions, accounted for by the increased burden per head of population arising from cardiovascular diseases, and sensory, respiratory, and infectious disorders. The leading contributors to disease burden in older people are cardiovascular diseases (30·3% of the total burden in people aged 60 years and older), malignant neoplasms (15·1%), chronic respiratory diseases (9·5%), musculoskeletal diseases (7·5%), and neurological and mental disorders (6·6%). A substantial and increased proportion of morbidity and mortality due to chronic disease occurs in older people. Primary prevention in adults aged younger than 60 years will improve health in successive cohorts of older people, but much of the potential to reduce disease burden will come from more effective primary, secondary, and tertiary prevention targeting older people. Obstacles include misplaced global health priorities, ageism, the poor preparedness of health systems to deliver age-appropriate care for chronic diseases, and the complexity of integrating care for complex multimorbidities. Although population ageing is driving the worldwide epidemic of chronic diseases, substantial untapped potential exists to modify the relation between chronological age and health. This objective is especially important for the most age-dependent disorders (ie, dementia, stroke, chronic obstructive pulmonary disease, and vision impairment), for which the burden of disease arises more from disability than from mortality, and for which long-term care costs outweigh health expenditure. The societal cost of these disorders is enormous.", "title": "" }, { "docid": "ff5d3f4ef4431c7144c12f5da563e347", "text": "Ankle inversion-eversion compliance is an important feature of conventional prosthetic feet, and control of inversion, or roll, in robotic prostheses could improve balance for people with amputation. We designed a tethered ankle-foot prosthesis with two independently-actuated toes that are coordinated to provide plantarflexion and inversion-eversion torques. This configuration allows a simple lightweight structure with a total mass of 0.72 kg. Strain gages on the toes measure torque with less than 2.7% RMS error, while compliance in the Bowden cable tether provides series elasticity. Benchtop tests demonstrated a 90% rise time of less than 33 ms and peak torques of 180 N·m in plantarflexion and ±30 N·m in inversion-eversion. The phase-limited closedloop torque bandwidth is 20 Hz with a 90 N·m amplitude chirp in plantarflexion, and 24 Hz with a 20 N·m amplitude chirp in inversion-eversion. The system has low sensitivity to toe position disturbances at frequencies of up to 18 Hz. Walking trials with five values of constant inversion-eversion torque demonstrated RMS torque tracking errors of less than 3.7% in plantarflexion and less than 5.9% in inversion-eversion. These properties make the platform suitable for haptic rendering of virtual devices in experiments with humans, which may reveal strategies for improving balance or allow controlled comparisons of conventional prosthesis features. A similar morphology may be effective for autonomous devices.", "title": "" } ]
scidocsrr
9bee5060999d2e5c57874f9a8df23f13
DataSpotting: offloading cellular traffic via managed device-to-device data transfer at data spots
[ { "docid": "6e53c13c4da3f985f85d56d2c9b037e6", "text": "Simulating human mobility is important in mobile networks because many mobile devices are either attached to or controlled by humans and it is very hard to deploy real mobile networks whose size is controllably scalable for performance evaluation. Lately various measurement studies of human walk traces have discovered several significant statistical patterns of human mobility. Namely these include truncated power-law distributions of flights, pause-times and inter-contact times, fractal way-points, and heterogeneously defined areas of individual mobility. Unfortunately, none of existing mobility models effectively captures all of these features. This paper presents a new mobility model called SLAW (Self-similar Least Action Walk) that can produce synthetic walk traces containing all these features. This is by far the first such model. Our performance study using using SLAW generated traces indicates that SLAW is effective in representing social contexts present among people sharing common interests or those in a single community such as university campus, companies and theme parks. The social contexts are typically common gathering places where most people visit during their daily lives such as student unions, dormitory, street malls and restaurants. SLAW expresses the mobility patterns involving these contexts by fractal waypoints and heavy-tail flights on top of the waypoints. We verify through simulation that SLAW brings out the unique performance features of various mobile network routing protocols.", "title": "" }, { "docid": "7f6edf82ddbe5b63ba5d36a7d8691dda", "text": "This paper identifies the possibility of using electronic compasses and accelerometers in mobile phones, as a simple and scalable method of localization without war-driving. The idea is not fundamentally different from ship or air navigation systems, known for centuries. Nonetheless, directly applying the idea to human-scale environments is non-trivial. Noisy phone sensors and complicated human movements present practical research challenges. We cope with these challenges by recording a person's walking patterns, and matching it against possible path signatures generated from a local electronic map. Electronic maps enable greater coverage, while eliminating the reliance on WiFi infrastructure and expensive war-driving. Measurements on Nokia phones and evaluation with real users confirm the anticipated benefits. Results show a location accuracy of less than 11m in regions where today's localization services are unsatisfactory or unavailable.", "title": "" } ]
[ { "docid": "9d06d7d2a152472cc1079cf17d7b6f63", "text": "Smart cities have rapidly become a hot topic within technology communities, and promise both improved delivery of services to end users and reduced environmental impact in an era of unprecedented urbanization. Both large hightech companies and grassroots citizen-led initiatives have begun exploring the potential of these technologies. Significant barriers remain to the successful rollout and deployment of business models outlined for smart city applications and services, however. Most of these barriers pertain to an ongoing battle between two main schools of thought for system architecture, ICT and telecommunications, proposed for data management and service creation. Both of these system architectures represent a certain type of value chain and the legacy perspective of the respective players that wish to enter the smart city arena. Smart cities services, however, utilize components of both the ICT industry and mobile telecommunications industries, and do not benefit from the current binary perspective of system architecture. The business models suggested for the development of smart cities require a longterm strategic view of system architecture evolution. This article discusses the architectural evolution required to ensure that the rollout and deployment of smart city technologies is smooth through acknowledging and integrating the strengths of both the system architectures proposed.", "title": "" }, { "docid": "e6e65cdee48c6b9606fa14904d176982", "text": "The use of prediction to eliminate or reduce the effects of system delays in Head-Mounted Display systems has been the subject of several recent papers. A variety of methods have been proposed but almost all the analysis has been empirical, making comparisons of results difficult and providing little direction to the designer of new systems. In this paper, we characterize the performance of two classes of head-motion predictors by analyzing them in the frequency domain. The first predictor is a polynomial extrapolation and the other is based on the Kalman filter. Our analysis shows that even with perfect, noise-free inputs, the error in predicted position grows rapidly with increasing prediction intervals and input signal frequencies. Given the spectra of the original head motion, this analysis estimates the spectra of the predicted motion, quantifying a predictor's performance on different systems and applications. Acceleration sensors are shown to be more useful to a predictor than velocity sensors. The methods described will enable designers to determine maximum acceptable system delay based on maximum tolerable error and the characteristics of user motions in the application. CR", "title": "" }, { "docid": "364b82bf3334cf7534088ad63743422e", "text": "Rigid origami is a class of origami whose entire surface remains rigid during folding except at crease lines. Rigid origami finds applications in manufacturing and packaging, such as map folding and solar panel packing. Advances in material science and robotics engineering also enable the realization of self-folding rigid origami and have fueled the interests in computational origami, in particular the issues of foldability, i.e., finding folding steps from a flat sheet of crease patterns to desired folded state. For example, recent computational methods allow rapid simulation of folding process of certain rigid origamis. However, these methods can fail even when the input crease pattern is extremely simple. 
This paper attempts to address this problem by modeling rigid origami as a kinematic system with closure constraints and solve the foldability problem through a randomized method. Our experimental results show that the proposed method successfully fold several types of rigid origamis that the existing methods fail to fold.", "title": "" }, { "docid": "9ac81079d4e957a87cfec465a4a69a7c", "text": "AIMS\nThe UK has one of the largest systems of immigration detention in Europe.. Those detained include asylum-seekers and foreign national prisoners, groups with a higher prevalence of mental health vulnerabilities compared with the general population. In light of little published research on the mental health status of detainees in immigration removal centres (IRCs), the primary aim of this study was to explore whether it was feasible to conduct psychiatric research in such a setting. A secondary aim was to compare the mental health of those seeking asylum with the rest of the detainees.\n\n\nMETHODS\nCross-sectional study with simple random sampling followed by opportunistic sampling. Exclusion criteria included inadequate knowledge of English and European Union nationality. Six validated tools were used to screen for mental health disorders including developmental disorders like Personality Disorder, Attention Deficit Hyperactivity Disorder (ADHD), Autistic Spectrum Disorder (ASD) and Intellectual Disability, as well as for needs assessment. These were the MINI v6, SAPAS, AQ-10, ASRS, LDSQ and CANFOR. Demographic data were obtained using a participant demographic sheet. Researchers were trained in the use of the screening battery and inter-rater reliability assessed by joint ratings.\n\n\nRESULTS\nA total of 101 subjects were interviewed. Overall response rate was 39%. The most prevalent screened mental disorder was depression (52.5%), followed by personality disorder (34.7%) and post-traumatic stress disorder (20.8%). 21.8% were at moderate to high suicidal risk. 14.9 and 13.9% screened positive for ASD and ADHD, respectively. The greatest unmet needs were in the areas of intimate relationships (76.2%), psychological distress (72.3%) and sexual expression (71.3%). Overall presence of mental disorder was comparable with levels found in prisons. The numbers in each group were too small to carry out any further analysis.\n\n\nCONCLUSION\nIt is feasible to undertake a psychiatric morbidity survey in an IRC. Limitations of the study include potential selection bias, use of screening tools, use of single-site study, high refusal rates, the lack of interpreters and lack of women and children in study sample. Future studies should involve the in-reach team to recruit participants and should be run by a steering group consisting of clinicians from the IRC as well as academics.", "title": "" }, { "docid": "2332c8193181b5ad31e9424ca37b0f5a", "text": "The ability to grasp ordinary and potentially never-seen objects is an important feature in both domestic and industrial robotics. For a system to accomplish this, it must autonomously identify grasping locations by using information from various sensors, such as Microsoft Kinect 3D camera. Despite numerous progress, significant work still remains to be done in this field. To this effect, we propose a dictionary learning and sparse representation (DLSR) framework for representing RGBD images from 3D sensors in the context of determining such good grasping locations. 
In contrast to previously proposed approaches that relied on sophisticated regularization or very large datasets, the derived perception system has a fast training phase and can work with small datasets. It is also theoretically founded for dealing with masked-out entries, which are common with 3D sensors. We contribute by presenting a comparative study of several DLSR approach combinations for recognizing and detecting grasp candidates on the standard Cornell dataset. Importantly, experimental results show a performance improvement of 1.69% in detection and 3.16% in recognition over current state-of-the-art convolutional neural network (CNN). Even though nowadays most popular vision-based approach is CNN, this suggests that DLSR is also a viable alternative with interesting advantages that CNN has not.", "title": "" }, { "docid": "b75847420d86f2dfd4d1e43b8f23d449", "text": "Since the inception of Deep Reinforcement Learning (DRL) algorithms, there has been a growing interest in both research and industrial communities in the promising potentials of this paradigm. The list of current and envisioned applications of deep RL ranges from autonomous navigation and robotics to control applications in the critical infrastructure, air traffic control, defense technologies, and cybersecurity. While the landscape of opportunities and the advantages of deep RL algorithms are justifiably vast, the security risks and issues in such algorithms remain largely unexplored. To facilitate and motivate further research on these critical challenges, this paper presents a foundational treatment of the security problem in DRL. We formulate the security requirements of DRL, and provide a high-level threat model through the classification and identification of vulnerabilities, attack vectors, and adversarial capabilities. Furthermore, we present a review of current literature on security of deep RL from both offensive and defensive perspectives. Lastly, we enumerate critical research venues and open problems in mitigation and prevention of intentional attacks against deep RL as a roadmap for further research in this area.", "title": "" }, { "docid": "dd0562e604e6db2c31132f1ffcd94d4f", "text": "a r t i c l e i n f o Keywords: Data quality Utility Cost–benefit analysis Data warehouse CRM Managing data resources at high quality is usually viewed as axiomatic. However, we suggest that, since the process of improving data quality should attempt to maximize economic benefits as well, high data quality is not necessarily economically-optimal. We demonstrate this argument by evaluating a microeconomic model that links the handling of data quality defects, such as outdated data and missing values, to economic outcomes: utility, cost, and net-benefit. The evaluation is set in the context of Customer Relationship Management (CRM) and uses large samples from a real-world data resource used for managing alumni relations. Within this context, our evaluation shows that all model parameters can be measured, and that all model-related assumptions are, largely, well supported. The evaluation confirms the assumption that the optimal quality level, in terms of maximizing net-benefits, is not necessarily the highest possible. Further, the evaluation process contributes some important insights for revising current data acquisition and maintenance policies. Maintaining data resources at a high quality level is a critical task in managing organizational information systems (IS). 
Data quality (DQ) significantly affects IS adoption and the success of data utilization [10,26]. Data quality management (DQM) has been examined from a variety of technical, functional, and organizational perspectives [22]. Achieving high quality is the primary objective of DQM efforts, and much research in DQM focuses on methodologies, tools and techniques for improving quality. Recent studies (e.g., [14,19]) have suggested that high DQ, although having clear merits, should not necessarily be the only objective to consider when assessing DQM alternatives, particularly in an IS that manages large datasets. As shown in these studies, maximizing economic benefits, based on the value gained from improving quality, and the costs involved in improving quality, may conflict with the target of achieving a high data quality level. Such findings inspire the need to link DQM decisions to economic outcomes and tradeoffs, with the goal of identifying more cost-effective DQM solutions. The quality of organizational data is rarely perfect as data, when captured and stored, may suffer from such defects as inaccuracies and missing values [22]. Its quality may further deteriorate as the real-world items that the data describes may change over time (e.g., a customer changing address, profession, and/or marital status). A plethora of studies have underscored the negative effect of low …", "title": "" }, { "docid": "ced57c0315603691bd2c185bcb83e6c5", "text": "There has been a good amount of progress in sentiment analysis over the past 10 years, including the proposal of new methods and the creation of benchmark datasets. In some papers, however, there is a tendency to compare models only on one or two datasets, either because of time restraints or because the model is tailored to a specific task. Accordingly, it is hard to understand how well a certain model generalizes across different tasks and datasets. In this paper, we contribute to this situation by comparing several models on six different benchmarks, which belong to different domains and additionally have different levels of granularity (binary, 3-class, 4-class and 5-class). We show that BiLSTMs perform well across datasets and that both LSTMs and Bi-LSTMs are particularly good at fine-grained sentiment tasks (i. e., with more than two classes). Incorporating sentiment information into word embeddings during training gives good results for datasets that are lexically similar to the training data. With our experiments, we contribute to a better understanding of the performance of different model architectures on different data sets. Consequently, we detect novel state-of-the-art results on the SenTube datasets.", "title": "" }, { "docid": "1d29d30089ffd9748c925a20f8a1216e", "text": "• Users may freely distribute the URL that is used to identify this publication. • Users may download and/or print one copy of the publication from the University of Birmingham research portal for the purpose of private study or non-commercial research. • User may use extracts from the document in line with the concept of ‘fair dealing’ under the Copyright, Designs and Patents Act 1988 (?) • Users may not further distribute the material nor use it for the purposes of commercial gain.", "title": "" }, { "docid": "7f84e215df3d908249bde3be7f2b3cab", "text": "With the emergence of ever-growing advanced vehicular applications, the challenges to meet the demands from both communication and computation are increasingly prominent. 
Without powerful communication and computational support, various vehicular applications and services will still stay in the concept phase and cannot be put into practice in the daily life. Thus, solving this problem is of great importance. The existing solutions, such as cellular networks, roadside units (RSUs), and mobile cloud computing, are far from perfect because they highly depend on and bear the cost of additional infrastructure deployment. Given tremendous number of vehicles in urban areas, putting these underutilized vehicular resources into use offers great opportunity and value. Therefore, we conceive the idea of utilizing vehicles as the infrastructures for communication and computation, named vehicular fog computing (VFC), which is an architecture that utilizes a collaborative multitude of end-user clients or near-user edge devices to carry out communication and computation, based on better utilization of individual communication and computational resources of each vehicle. By aggregating abundant resources of individual vehicles, the quality of services and applications can be enhanced greatly. In particular, by discussing four types of scenarios of moving and parked vehicles as the communication and computational infrastructures, we carry on a quantitative analysis of the capacities of VFC. We unveil an interesting relationship among the communication capability, connectivity, and mobility of vehicles, and we also find out the characteristics about the pattern of parking behavior, which benefits from the understanding of utilizing the vehicular resources. Finally, we discuss the challenges and open problems in implementing the proposed VFC system as the infrastructures. Our study provides insights for this novel promising paradigm, as well as research topics about vehicular information infrastructures.", "title": "" }, { "docid": "9da15e2851124d6ca1524ba28572f922", "text": "With the growth of mobile data application and the ultimate expectations of 5G technology, the need to expand the capacity of the wireless networks is inevitable. Massive MIMO technique is currently taking a major part of the ongoing research, and expected to be the key player in the new cellular technologies. This papers presents an overview of the major aspects related to massive MIMO design including, antenna array general design, configuration, and challenges, in addition to advanced beamforming techniques and channel modeling and estimation issues affecting the implementation of such systems.", "title": "" }, { "docid": "1cb39c8a2dd05a8b2241c9c795ca265f", "text": "An ever growing interest and wide adoption of Internet of Things (IoT) and Web technologies are unleashing a true potential of designing a broad range of high-quality consumer applications. Smart cities, smart buildings, and e-health are among various application domains which are currently benefiting and will continue to benefit from IoT and Web technologies in a foreseeable future. Similarly, semantic technologies have proven their effectiveness in various domains and a few among multiple challenges which semantic Web technologies are addressing are to (i) mitigate heterogeneity by providing semantic inter-operability, (ii) facilitate easy integration of data application, (iii) deduce and extract new knowledge to build applications providing smart solutions, and (iv) facilitate inter-operability among various data processes including representation, management and storage of data. 
In this tutorial, our focus will be on the combination of Web technologies, Semantic Web, and IoT technologies and we will present to our audience that how a merger of these technologies is leading towards an evolution from IoT to Web of Things (WoT) to Semantic Web of Things. This tutorial will introduce the basics of Internet of Things, Web of Things and Semantic Web and will demonstrate tools and techniques designed to enable the rapid development of semantics-based Web of Things applications. One key aspect of this tutorial is to familiarize its audience with the open source tools designed by different semantic Web, IoT and WoT based projects and provide the audience a rich hands-on experience to use these tools and build smart applications with minimal efforts. Thus, reducing the learning curve to its maximum. We will showcase real-world use case scenarios which are designed using semantically-enabled WoT frameworks (e.g. CityPulse, FIESTA-IoT and M3).", "title": "" }, { "docid": "c4c3a9572659543c5cd5d1bb50a13bee", "text": "Optic disc (OD) is a key structure in retinal images. It serves as an indicator to detect various diseases such as glaucoma and changes related to new vessel formation on the OD in diabetic retinopathy (DR) or retinal vein occlusion. OD is also essential to locate structures such as the macula and the main vascular arcade. Most existing methods for OD localization are rule-based, either exploiting the OD appearance properties or the spatial relationship between the OD and the main vascular arcade. The detection of OD abnormalities has been performed through the detection of lesions such as hemorrhaeges or through measuring cup to disc ratio. Thus these methods result in complex and inflexible image analysis algorithms limiting their applicability to large image sets obtained either in epidemiological studies or in screening for retinal or optic nerve diseases. In this paper, we propose an end-to-end supervised model for OD abnormality detection. The most informative features of the OD are learned directly from retinal images and are adapted to the dataset at hand. Our experimental results validated the effectiveness of this current approach and showed its potential application.", "title": "" }, { "docid": "fd897f886b24b2fc7d877954d5c004cd", "text": "In this paper, we developed a detailed mathematical model of dual action pneumatic actuators controlled with proportional spool valves. Effects of nonlinear flow through the valve, air compressibility in cylinder chambers, leakage between chambers, end of stroke inactive volume, and time delay and attenuation in the pneumatic lines were carefully considered. System identification, numerical simulation and model validation experiments were conducted for two types of air cylinders and different connecting tubes length, showing very good agreement. This mathematical model will be used in the development of high performance nonlinear force controllers, with applications in teleoperation, haptic interfaces, and robotics.", "title": "" }, { "docid": "3a4d51387f8fcb4add9c5662dcc08c41", "text": "Pulse transformer is always used to be the isolator between gate driver and power MOSFET. There are many topologies about the peripheral circuit. This paper proposes a new topology circuit that uses pulse transformer to transfer driving signal and driving power, energy storage capacitor to supply secondary side power and negative voltage. Without auxiliary power source, it can realize rapidly switch and off state with negative voltage. 
And a simulation model has been used to verify it. The simulation results prove that the new driver has a better anti-interference, faster switching speed, lower switching loss, and higher reliability than the current drive circuits.", "title": "" }, { "docid": "8a22f454a657768a3d5fd6e6ec743f5f", "text": "In recent years, deep learning techniques have been developed to improve the performance of program synthesis from input-output examples. Albeit its significant progress, the programs that can be synthesized by state-of-the-art approaches are still simple in terms of their complexity. In this work, we move a significant step forward along this direction by proposing a new class of challenging tasks in the domain of program synthesis from input-output examples: learning a context-free parser from pairs of input programs and their parse trees. We show that this class of tasks are much more challenging than previously studied tasks, and the test accuracy of existing approaches is almost 0%. We tackle the challenges by developing three novel techniques inspired by three novel observations, which reveal the key ingredients of using deep learning to synthesize a complex program. First, the use of a non-differentiable machine is the key to effectively restrict the search space. Thus our proposed approach learns a neural program operating a domain-specific non-differentiable machine. Second, recursion is the key to achieve generalizability. Thus, we bake-in the notion of recursion in the design of our non-differentiable machine. Third, reinforcement learning is the key to learn how to operate the non-differentiable machine, but it is also hard to train the model effectively with existing reinforcement learning algorithms from a cold boot. We develop a novel two-phase reinforcement learningbased search algorithm to overcome this issue. In our evaluation, we show that using our novel approach, neural parsing programs can be learned to achieve 100% test accuracy on test inputs that are 500× longer than the training samples.", "title": "" }, { "docid": "895f912a24f00984922c586880f77dee", "text": "Massive multiple-input multiple-output technology has been considered a breakthrough in wireless communication systems. It consists of equipping a base station with a large number of antennas to serve many active users in the same time-frequency block. Among its underlying advantages is the possibility to focus transmitted signal energy into very short-range areas, which will provide huge improvements in terms of system capacity. However, while this new concept renders many interesting benefits, it brings up new challenges that have called the attention of both industry and academia: channel state information acquisition, channel feedback, instantaneous reciprocity, statistical reciprocity, architectures, and hardware impairments, just to mention a few. This paper presents an overview of the basic concepts of massive multiple-input multiple-output, with a focus on the challenges and opportunities, based on contemporary research.", "title": "" }, { "docid": "4a0bbd8fad443294a8da61cb976a537c", "text": "The microservice architecture (MSA) is an emerging cloud software system, which provides fine-grained, self-contained service components (microservices) used in the construction of complex software systems. DevOps techniques are commonly used to automate the process of development and operation through continuous integration and continuous deployment. 
Monitoring software systems created by DevOps, makes it possible for MSA to obtain the feedback necessary to improve the system quickly and easily. Nonetheless, systematic, SDLC-driven methods (SDLC: software development life cycle) are lacking to facilitate the migration of software systems from a traditional monolithic architecture to MSA. Therefore, this paper proposes a migration process based on SDLC, including all of the methods and tools required during design, development, and implementation. The mobile application, EasyLearn, was used as an illustrative example to demonstrate the efficacy of the proposed migration process. We believe that this paper could provide valuable references for other development teams seeking to facilitate the migration of existing applications to MSA.", "title": "" }, { "docid": "0f24b6c36586505c1f4cc001e3ddff13", "text": "A novel model for asymmetric multiagent reinforcement learning is introduced in this paper. The model addresses the problem where the information states of the agents involved in the learning task are not equal; some agents (leaders) have information how their opponents (followers) will select their actions and based on this information leaders encourage followers to select actions that lead to improved payoffs for the leaders. This kind of configuration arises e.g. in semi-centralized multiagent systems with an external global utility associated to the system. We present a brief literature survey of multiagent reinforcement learning based on Markov games and then propose an asymmetric learning model that utilizes the theory of Markov games. Additionally, we construct a practical learning method based on the proposed learning model and study its convergence properties. Finally, we test our model with a simple example problem and a larger two-layer pricing application.", "title": "" } ]
scidocsrr
a51e3155a8fb6d3093bc43a57f7c6dcf
Analyzing and Detecting Opinion Spam on a Large-scale Dataset via Temporal and Spatial Patterns
[ { "docid": "381ce2a247bfef93c67a3c3937a29b5a", "text": "Product reviews are now widely used by individuals and organizations for decision making (Litvin et al., 2008; Jansen, 2010). And because of the profits at stake, people have been known to try to game the system by writing fake reviews to promote target products. As a result, the task of deceptive review detection has been gaining increasing attention. In this paper, we propose a generative LDA-based topic modeling approach for fake review detection. Our model can aptly detect the subtle differences between deceptive reviews and truthful ones and achieves about 95% accuracy on review spam datasets, outperforming existing baselines by a large margin.", "title": "" }, { "docid": "646097feed29f603724f7ec6b8bbeb8b", "text": "Online reviews provide valuable information about products and services to consumers. However, spammers are joining the community trying to mislead readers by writing fake reviews. Previous attempts for spammer detection used reviewers' behaviors, text similarity, linguistics features and rating patterns. Those studies are able to identify certain types of spammers, e.g., those who post many similar reviews about one target entity. However, in reality, there are other kinds of spammers who can manipulate their behaviors to act just like genuine reviewers, and thus cannot be detected by the available techniques. In this paper, we propose a novel concept of a heterogeneous review graph to capture the relationships among reviewers, reviews and stores that the reviewers have reviewed. We explore how interactions between nodes in this graph can reveal the cause of spam and propose an iterative model to identify suspicious reviewers. This is the first time such intricate relationships have been identified for review spam detection. We also develop an effective computation method to quantify the trustiness of reviewers, the honesty of reviews, and the reliability of stores. Different from existing approaches, we don't use review text information. Our model is thus complementary to existing approaches and able to find more difficult and subtle spamming activities, which are agreed upon by human judges after they evaluate our results.", "title": "" }, { "docid": "0cf81998c0720405e2197c62afa08ee7", "text": "User-generated online reviews can play a significant role in the success of retail products, hotels, restaurants, etc. However, review systems are often targeted by opinion spammers who seek to distort the perceived quality of a product by creating fraudulent reviews. We propose a fast and effective framework, FRAUDEAGLE, for spotting fraudsters and fake reviews in online review datasets. Our method has several advantages: (1) it exploits the network effect among reviewers and products, unlike the vast majority of existing methods that focus on review text or behavioral analysis, (2) it consists of two complementary steps; scoring users and reviews for fraud detection, and grouping for visualization and sensemaking, (3) it operates in a completely unsupervised fashion requiring no labeled data, while still incorporating side information if available, and (4) it is scalable to large datasets as its run time grows linearly with network size. We demonstrate the effectiveness of our framework on synthetic and real datasets; where FRAUDEAGLE successfully reveals fraud-bots in a large online app review database. Introduction The Web has greatly enhanced the way people perform certain activities (e.g. 
shopping), find information, and interact with others. Today many people read/write reviews on merchant sites, blogs, forums, and social media before/after they purchase products or services. Examples include restaurant reviews on Yelp, product reviews on Amazon, hotel reviews on TripAdvisor, and many others. Such user-generated content contains rich information about user experiences and opinions, which allow future potential customers to make better decisions about spending their money, and also help merchants improve their products, services, and marketing. Since online reviews can directly influence customer purchase decisions, they are crucial to the success of businesses. While positive reviews with high ratings can yield financial gains, negative reviews can damage reputation and cause monetary loss. This effect is magnified as the information spreads through the Web (Hitlin 2003; Mendoza, Poblete, and Castillo 2010). As a result, online review systems are attractive targets for opinion fraud. Opinion fraud involves reviewers (often paid) writing bogus reviews (Kost May 2012; Copyright c © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Streitfeld August 2011). These spam reviews come in two flavors: defaming-spam which untruthfully vilifies, or hypespam that deceitfully promotes the target product. The opinion fraud detection problem is to spot the fake reviews in online sites, given all the reviews on the site, and for each review, its text, its author, the product it was written for, timestamp of posting, and its star-rating. Typically no user profile information is available (or is self-declared and cannot be trusted), while more side information for products (e.g. price, brand), and for reviews (e.g. number of (helpful) feedbacks) could be available depending on the site. Detecting opinion fraud, as defined above, is a non-trivial and challenging problem. Fake reviews are often written by experienced professionals who are paid to write high quality, believable reviews. As a result, it is difficult for an average potential customer to differentiate bogus reviews from truthful ones, just by looking at individual reviews text(Ott et al. 2011). As such, manual labeling of reviews is hard and ground truth information is often unavailable, which makes training supervised models less attractive for this problem. Summary of previous work. Previous attempts at solving the problem use several heuristics, such as duplicated reviews (Jindal and Liu 2008), or acquire bogus reviews from non-experts (Ott et al. 2011), to generate pseudo-ground truth, or a reference dataset. This data is then used for learning classification models together with carefully engineered features. One downside of such techniques is that they do not generalize: one needs to collect new data and train a new model for review data from a different domain, e.g., hotel vs. restaurant reviews. Moreover feature selection becomes a tedious sub-problem, as datasets from different domains might exhibit different characteristics. Other feature-based proposals include (Lim et al. 2010; Mukherjee, Liu, and Glance 2012). A large body of work on fraud detection relies on review text information (Jindal and Liu 2008; Ott et al. 2011; Feng, Banerjee, and Choi 2012) or behavioral evidence (Lim et al. 2010; Xie et al. 2012; Feng et al. 2012), and ignore the connectivity structure of review data. 
On the other hand, the network of reviewers and products contains rich information that implicitly represents correlations among these entities. The review network is also invaluable for detecting teams of fraudsters that operate collaboratively on targeted products. Our contributions. In this work we propose an unsupervised", "title": "" } ]
[ { "docid": "3bd639feecf4194c73c3e20ae4ef8203", "text": "We present an optimized implementation of the Fan-Vercauteren variant of Brakerski’s scale-invariant homomorphic encryption scheme. Our algorithmic improvements focus on optimizing decryption and homomorphic multiplication in the Residue Number System (RNS), using the Chinese Remainder Theorem (CRT) to represent and manipulate the large coefficients in the ciphertext polynomials. In particular, we propose efficient procedures for scaling and CRT basis extension that do not require translating the numbers to standard (positional) representation. Compared to the previously proposed RNS design due to Bajard et al. [3], our procedures are simpler and faster, and introduce a lower amount of noise. We implement our optimizations in the PALISADE library and evaluate the runtime performance for the range of multiplicative depths from 1 to 100. For example, homomorphic multiplication for a depth-20 setting can be executed in 62 ms on a modern server system, which is already practical for some outsourced-computing applications. Our algorithmic improvements can also be applied to other scale-invariant homomorphic encryption schemes, such as YASHE.", "title": "" }, { "docid": "e2a54639dfa4d1a828be814982ceb0a1", "text": "Large-scale data analysis lies in the core of modern enterprises and scientific research. With the emergence of cloud computing, the use of an analytical query processing infrastructure (e.g., Amazon EC2) can be directly mapped to monetary value. MapReduce has been a popular framework in the context of cloud computing, designed to serve long running queries (jobs) which can be processed in batch mode. Taking into account that different jobs often perform similar work, there are many opportunities for sharing. In principle, sharing similar work reduces the overall amount of work, which can lead to reducing monetary charges incurred while utilizing the processing infrastructure. In this paper we propose a sharing framework tailored to MapReduce. Our framework, MRShare, transforms a batch of queries into a new batch that will be executed more efficiently, by merging jobs into groups and evaluating each group as a single query. Based on our cost model for MapReduce, we define an optimization problem and we provide a solution that derives the optimal grouping of queries. Experiments in our prototype, built on top of Hadoop, demonstrate the overall effectiveness of our approach and substantial savings.", "title": "" }, { "docid": "aea8ac7970162655d5616f5b3985430c", "text": "The growing use of convolutional neural networks (CNN) for a broad range of visual tasks, including tasks involving fine details, raises the problem of applying such networks to a large field of view, since the amount of computations increases significantly with the number of pixels. To deal effectively with this difficulty, we develop and compare methods of using CNNs for the task of small target localization in natural images, given a limited ”budget” of samples to form an image. Inspired in part by human vision, we develop and compare variable sampling schemes, with peak resolution at the center and decreasing resolution with eccentricity, applied iteratively by re-centering the image at the previous predicted target location. The results indicate that variable resolution models significantly outperform constant resolution models. 
Surprisingly, variable resolution models and in particular multi-channel models, outperform the optimal, ”budget-free” full-resolution model, using only 5% of the samples.", "title": "" }, { "docid": "74c6600ea1027349081c08c687119ee3", "text": "Segmentation of clitics has been shown to improve accuracy on a variety of Arabic NLP tasks. However, state-of-the-art Arabic word segmenters are either limited to formal Modern Standard Arabic, performing poorly on Arabic text featuring dialectal vocabulary and grammar, or rely on linguistic knowledge that is hand-tuned for each dialect. We extend an existing MSA segmenter with a simple domain adaptation technique and new features in order to segment informal and dialectal Arabic text. Experiments show that our system outperforms existing systems on broadcast news and Egyptian dialect, improving segmentation F1 score on a recently released Egyptian Arabic corpus to 92.09%, compared to 91.60% for another segmenter designed specifically for Egyptian Arabic.", "title": "" }, { "docid": "d1afaada6bf5927d9676cee61d3a1d49", "text": "t-Closeness is a privacy model recently defined for data anonymization. A data set is said to satisfy t-closeness if, for each group of records sharing a combination of key attributes, the distance between the distribution of a confidential attribute in the group and the distribution of the attribute in the entire data set is no more than a threshold t. Here, we define a privacy measure in terms of information theory, similar to t-closeness. Then, we use the tools of that theory to show that our privacy measure can be achieved by the postrandomization method (PRAM) for masking in the discrete case, and by a form of noise addition in the general case.", "title": "" }, { "docid": "bdbb97522eea6cb9f8e11f07c2e83282", "text": "Middle ear surgery is strongly influenced by anatomical and functional characteristics of the middle ear. The complex anatomy means a challenge for the otosurgeon who moves between preservation or improvement of highly important functions (hearing, balance, facial motion) and eradication of diseases. Of these, perforations of the tympanic membrane, chronic otitis media, tympanosclerosis and cholesteatoma are encountered most often in clinical practice. Modern techniques for reconstruction of the ossicular chain aim for best possible hearing improvement using delicate alloplastic titanium prostheses, but a number of prosthesis-unrelated factors work against this intent. Surgery is always individualized to the case and there is no one-fits-all strategy. Above all, both middle ear diseases and surgery can be associated with a number of complications; the most important ones being hearing deterioration or deafness, dizziness, facial palsy and life-threatening intracranial complications. To minimize risks, a solid knowledge of and respect for neurootologic structures is essential for an otosurgeon who must train him- or herself intensively on temporal bones before performing surgery on a patient.", "title": "" }, { "docid": "f4db297c70b1aba64ce3ed17b0837859", "text": "Despite the success of the automatic speech recognition framework in its own application field, its adaptation to the problem of acoustic event detection has resulted in limited success. In this paper, instead of treating the problem similar to the segmentation and classification tasks in speech recognition, we pose it as a regression task and propose an approach based on random forest regression. 
Furthermore, event localization in time can be efficiently handled as a joint problem. We first decompose the training audio signals into multiple interleaved superframes which are annotated with the corresponding event class labels and their displacements to the temporal onsets and offsets of the events. For a specific event category, a random-forest regression model is learned using the displacement information. Given an unseen superframe, the learned regressor will output the continuous estimates of the onset and offset locations of the events. To deal with multiple event categories, prior to the category-specific regression phase, a superframe-wise recognition phase is performed to reject the background superframes and to classify the event superframes into different event categories. While jointly posing event detection and localization as a regression problem is novel, the superior performance on two databases ITC-Irst and UPC-TALP demonstrates the efficiency and potential of the proposed approach.", "title": "" }, { "docid": "3e9f338da297c5173cf075fa15cd0a2e", "text": "Recent years have witnessed a surge of publications aimed at tracing temporal changes in lexical semantics using distributional methods, particularly prediction-based word embedding models. However, this vein of research lacks the cohesion, common terminology and shared practices of more established areas of natural language processing. In this paper, we survey the current state of academic research related to diachronic word embeddings and semantic shifts detection. We start with discussing the notion of semantic shifts, and then continue with an overview of the existing methods for tracing such time-related shifts with word embedding models. We propose several axes along which these methods can be compared, and outline the main challenges before this emerging subfield of NLP, as well as prospects and possible applications.", "title": "" }, { "docid": "eff4f126e50447f872109549d060fbc8", "text": "Many combinatorial problems are NP-complete for general graphs. However, when restricted to series–parallel graphs or partial k-trees, many of these problems can be solved in polynomial time, mostly in linear time. On the other hand, very few problems are known to be NP-complete for series–parallel graphs or partial k-trees. These include the subgraph isomorphism problem and the bandwidth problem. However, these problems are NP-complete even for trees. In this paper, we show that the edge-disjoint paths problem is NP-complete for series–parallel graphs and for partial 2-trees although the problem is trivial for trees and can be solved for outerplanar graphs in polynomial time. ? 2001 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "be09a9be6ef80f694c34546767300b41", "text": "Nipple-sparing mastectomy (NSM) is increasingly popular as a procedure for the treatment of breast cancer and as a prophylactic procedure for those at high risk of developing the disease. However, it remains a controversial option due to questions regarding its oncological safety and concerns regarding locoregional recurrence. This systematic review with a pooled analysis examines the current literature regarding NSM, including locoregional recurrence and complication rates. Systematic electronic searches were conducted using the PubMed database and the Ovid database for studies reporting the indications for NSM and the subsequent outcomes. 
Studies between January 1970 and January 2015 (inclusive) were analysed if they met the inclusion criteria. Pooled descriptive statistics were performed. Seventy-three studies that met the inclusion criteria were included in the analysis, yielding 12,358 procedures. After a mean follow up of 38 months (range, 7.4-156 months), the overall pooled locoregional recurrence rate was 2.38%, the overall complication rate was 22.3%, and the overall incidence of nipple necrosis, either partial or total, was 5.9%. Significant heterogeneity was found among the published studies and patient selection was affected by tumour characteristics. We concluded that NSM appears to be an oncologically safe option for appropriately selected patients, with low rates of locoregional recurrence. For NSM to be performed, tumours should be peripherally located, smaller than 5 cm in diameter, located more than 2 cm away from the nipple margin, and human epidermal growth factor 2-negative. A separate histopathological examination of the subareolar tissue and exclusion of malignancy at this site is essential for safe oncological practice. Long-term follow-up studies and prospective cohort studies are required in order to determine the best reconstructive methods.", "title": "" }, { "docid": "767b6a698ee56a4859c21f70f52b2b81", "text": "This article surveyed the main neuromarketing techniques used in the world and the practical results obtained. Specifically, the objectives are (1) to identify the main existing definitions of neuromarketing; (2) to identify the importance and the potential contributions of neuromarketing; (3) to demonstrate the advantages of neuromarketing as a marketing research tool compared to traditional research methods; (4) to identify the ethical issues involved with neuromarketing research; (5) to present the main neuromarketing techniques that are being used in the development of marketing research; (6) to present studies in which neuromarketing research techniques were used; and (7) to identify the main limitations of neuromarketing. The results obtained allow an understanding of the ways to develop, store, retrieve and use information about consumers, as well as ways to develop the field of neuromarketing. In addition to offering theoretical support for neuromarketing, this article discusses business cases, implementation and achievements.", "title": "" }, { "docid": "e93eaa695003cb409957e5c7ed19bf2a", "text": "Prominent research argues that consumers often use personal budgets to manage self-control problems. This paper analyzes the link between budgeting and self-control problems in consumption-saving decisions. It shows that the use of good-specific budgets depends on the combination of a demand for commitment and the demand for flexibility resulting from uncertainty about intratemporal trade-offs between goods. It explains the subtle mechanism which renders budgets useful commitments, their interaction with minimum-savings rules (another widely-studied form of commitment), and how budgeting depends on the intensity of self-control problems. This theory matches several empirical findings on personal budgeting. JEL CLASSIFICATION: D23, D82, D86, D91, E62, G31", "title": "" }, { "docid": "47c05e54488884854e6bcd5170ed65e8", "text": "This work is about a novel methodology for window detection in urban environments and its multiple use in vision system applications. 
The presented method for window detection includes appropriate early image processing, provides a multi-scale Haar wavelet representation for the determination of image tiles which is then fed into a cascaded classifier for the task of window detection. The classifier is learned from a Gentle Adaboost driven cascaded decision tree on masked information from training imagery and is tested towards window based ground truth information which is together with the original building image databases publicly available. The experimental results demonstrate that single window detection is to a sufficient degree successful, e.g., for the purpose of building recognition, and, furthermore, that the classifier is in general capable to provide a region of interest operator for the interpretation of urban environments. The extraction of this categorical information is beneficial to index into search spaces for urban object recognition as well as aiming towards providing a semantic focus for accurate post-processing in 3D information processing systems. Targeted applications are (i) mobile services on uncalibrated imagery, e.g. , for tourist guidance, (ii) sparse 3D city modeling, and (iii) deformation analysis from high resolution imagery.", "title": "" }, { "docid": "63baa6371fc07d3ef8186f421ddf1070", "text": "With the first few words of Neural Networks and Intellect: Using Model-Based Concepts, Leonid Perlovsky embarks on the daring task of creating a mathematical concept of “the mind.” The content of the book actually exceeds even the most daring of expectations. A wide variety of concepts are linked together intertwining the development of artificial intelligence, evolutionary computation, and even the philosophical observations ranging from Aristotle and Plato to Kant and Gvdel. Perlovsky discusses fundamental questions with a number of engineering applications to filter them through philosophical categories (both ontological and epistemological). In such a fashion, the inner workings of the human mind, consciousness, language-mind relationships, learning, and emotions are explored mathematically in amazing details. Perlovsky even manages to discuss the concept of beauty perception in mathematical terms. Beginners will appreciate that Perlovsky starts with the basics. The first chapter contains an introduction to probability, statistics, and pattern recognition, along with the intuitive explanation of the complicated mathematical concepts. The second chapter reviews numerous mathematical approaches, algorithms, neural networks, and the fundamental mathematical ideas underlying each method. It analyzes fundamental limitations of the nearest neighbor methods and the simple neural network. Vapnik’s statistical learning theory, support vector machines, and Grossberg’s neural field theories are clearly explained. Roles of hierarchical organization and evolutionary computation are analyzed. Even experts in the field might find interesting the relationships among various algorithms and approaches. 
Fundamental mathematical issues include origins of combinatorial complexity (CC) of many algorithms and neural networks (operations or training) and its relationship to di-", "title": "" }, { "docid": "8053e52a227757090de0a88b80055e8c", "text": "INTRODUCTION\nWe examined US adults' understanding of a Nutrition Facts panel (NFP), which requires health literacy (ie, prose, document, and quantitative literacy skills), and the association between label understanding and dietary behavior.\n\n\nMETHODS\nData were from the Health Information National Trends Survey, a nationally representative survey of health information seeking among US adults (N = 3,185) conducted from September 6, 2013, through December 30, 2013. Participants viewed an ice cream nutrition label and answered 4 questions that tested their ability to apply basic arithmetic and understanding of percentages to interpret the label. Participants reported their intake of sugar-sweetened soda, fruits, and vegetables. Regression analyses tested associations among label understanding, demographic characteristics, and self-reported dietary behaviors.\n\n\nRESULTS\nApproximately 24% of people could not determine the calorie content of the full ice-cream container, 21% could not estimate the number of servings equal to 60 g of carbohydrates, 42% could not estimate the effect on daily calorie intake of foregoing 1 serving, and 41% could not calculate the percentage daily value of calories in a single serving. Higher scores for label understanding were associated with consuming more vegetables and less sugar-sweetened soda, although only the association with soda consumption remained significant after adjusting for demographic factors.\n\n\nCONCLUSION\nMany consumers have difficulty interpreting nutrition labels, and label understanding correlates with self-reported dietary behaviors. The 2016 revised NFP labels may address some deficits in consumer understanding by eliminating the need to perform certain calculations (eg, total calories per package). However, some tasks still require the ability to perform calculations (eg, percentage daily value of calories). Schools have a role in teaching skills, such as mathematics, needed for nutrition label understanding.", "title": "" }, { "docid": "5ae07e0d3157b62f6d5e0e67d2b7f2ea", "text": "G. Francis and F. Hermens (2002) used computer simulations to claim that many current models of metacontrast masking can account for the findings of V. Di Lollo, J. T. Enns, and R. A. Rensink (2000). They also claimed that notions of reentrant processing are not necessary because all of V. Di Lollo et al. 's data can be explained by feed-forward models. The authors show that G. Francis and F. Hermens's claims are vitiated by inappropriate modeling of attention and by ignoring important aspects of V. Di Lollo et al. 's results.", "title": "" }, { "docid": "6c45d7b4a7732da4441261f7f1e9e42c", "text": "In citation-based summarization, text written by several researchers is leveraged to identify the important aspects of a target paper. Previous work on this problem focused almost exclusively on its extraction aspect (i.e. selecting a representative set of citation sentences that highlight the contribution of the target paper). Meanwhile, the fluency of the produced summaries has been mostly ignored. For example, diversity, readability, cohesion, and ordering of the sentences included in the summary have not been thoroughly considered. This resulted in noisy and confusing summaries. 
In this work, we present an approach for producing readable and cohesive citation-based summaries. Our experiments show that the proposed approach outperforms several baselines in terms of both extraction quality and fluency.", "title": "" }, { "docid": "140815c8ccd62d0169fa294f6c4994b8", "text": "Six specific personality traits – playfulness, chase-proneness, curiosity/fearlessness, sociability, aggressiveness, and distance-playfulness – and a broad boldness dimension have been suggested for dogs in previous studies based on data collected in a standardized behavioural test (‘‘dog mentality assessment’’, DMA). In the present study I investigated the validity of the specific traits for predicting typical behaviour in everyday life. A questionnaire with items describing the dog’s typical behaviour in a range of situations was sent to owners of dogs that had carried out the DMA behavioural test 1–2 years earlier. Of the questionnaires that were sent out 697 were returned, corresponding to a response rate of 73.3%. Based on factor analyses on the questionnaire data, behavioural factors in everyday life were suggested to correspond to the specific personality traits from the DMA. Correlation analyses suggested construct validity for the traits playfulness, curiosity/fearlessness, sociability, and distance-playfulness. Chase-proneness, which I expected to be related to predatory behaviour in everyday life, was instead related to human-directed play interest and nonsocial fear. Aggressiveness was the only trait from the DMA with low association to all of the behavioural factors from the questionnaire. The results suggest that three components of dog personality are measured in the DMA: (1) interest in playing with humans; (2) attitude towards strangers (interest in, fear of, and aggression towards); and (3) non-social fearfulness. These three components correspond to the traits playfulness, sociability, and curiosity/fearlessness, respectively, all of which were found to be related to a higher-order shyness–boldness dimension. Chase-proneness and distance-playfulness seem to be mixed measures of these personality components, and are not related to any additional components. Since the time between the behavioural test and the questionnaire was 1–2 years, the results indicate long-term consistency of the personality components. Based on these results, the DMA seems to be useful in predicting behavioural problems that are related to social and non-social fear, but not in predicting other potential behavioural problems. However, considering this limitation, the test seems to validly assess important aspects of dog personality, which supports the use of the test as an instrument in dog breeding and in selection of individual dogs for different purposes. © 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "d6794e4917896ba1040b4a83f8bd69b4", "text": "There has been little work on computational grammars for Amharic or other Ethio-Semitic languages and their use for parsing and generation. 
This paper introduces a grammar for a fragment of Amharic within the Extensible Dependency Grammar (XDG) framework of Debusmann. A language such as Amharic presents special challenges for the design of a dependency grammar because of the complex morphology and agreement constraints. The paper describes how a morphological analyzer for the language can be integrated into the grammar, introduces empty nodes as a solution to the problem of null subjects and objects, and extends the agreement principle of XDG in several ways to handle verb agreement with objects as well as subjects and the constraints governing relative clause verbs. It is shown that XDG’s multiple dimensions lend themselves to a new approach to relative clauses in the language. The introduced extensions to XDG are also applicable to other Ethio-Semitic languages.", "title": "" }, { "docid": "fcce2e75108497f0e8e37300d6ad335c", "text": "The authors performed a meta-analysis of studies examining the association between polymorphisms in the 5,10-methylenetetrahydrofolate reductase (MTHFR) gene, including MTHFR C677T and A1298C, and common psychiatric disorders, including unipolar depression, anxiety disorders, bipolar disorder, and schizophrenia. The primary comparison was between homozygote variants and the wild type for MTHFR C677T and A1298C. For unipolar depression and the MTHFR C677T polymorphism, the fixed-effects odds ratio for homozygote variants (TT) versus the wild type (CC) was 1.36 (95% confidence interval (CI): 1.11, 1.67), with no residual between-study heterogeneity (I(2) = 0%)--based on 1,280 cases and 10,429 controls. For schizophrenia and MTHFR C677T, the fixed-effects odds ratio for TT versus CC was 1.44 (95% CI: 1.21, 1.70), with low heterogeneity (I(2) = 42%)--based on 2,762 cases and 3,363 controls. For bipolar disorder and MTHFR C677T, the fixed-effects odds ratio for TT versus CC was 1.82 (95% CI: 1.22, 2.70), with low heterogeneity (I(2) = 42%)--based on 550 cases and 1,098 controls. These results were robust to various sensitivity analyses. This meta-analysis demonstrates an association between the MTHFR C677T variant and depression, schizophrenia, and bipolar disorder, raising the possibility of the use of folate in treatment and prevention.", "title": "" } ]
scidocsrr
9d3f2d87b0507466ac9d7abc7b02097a
The Maximum Clique Problem
[ { "docid": "3f8ed9f5b015f50989ebde22329e6e7c", "text": "In this paper we present a survey of results concerning algorithms, complexity, and applications of the maximum clique problem. We discuss enumerative and exact algorithms, heuristics, and a variety of other proposed methods. An up to date bibliography on the maximum clique and related problems is also provided.", "title": "" } ]
[ { "docid": "04fd58e1d49a65466b00d55177dc936c", "text": "In this paper we introduce multidimensional visualization and interaction techniques that are an extension to related work in parallel histograms and dynamic querying. Bargrams are, in effect, histograms whose bars have been tipped over and lined up end-to-end. We discuss affordances of parallel bargrams in the context of systems that support consumer-based information exploration and choice based on the attributes of the items in the choice set. Our tool called EZChooser has enabled a number of prototypes in such domains as Internet shopping, investment decisions, college choice, and so on, and a limited version has been deployed for car shopping. Evaluations of the techniques include an experiment indicating that trained users prefer EZChooser over static tables for choice tasks among sets of 50 items with 7-9 attributes.", "title": "" }, { "docid": "77c2f29beedae831d3bf771bb0388484", "text": "Two types of 3-CRU translational parallel manipulators (TPM) are compared and investigated in this paper. The two 3-CRU TPMs have identical kinematics, but some differences exist in terms of the chain arrangement, one is in fully symmetrical chain arrangement, called symmetrical 3-CRU TPM, the other one is in asymmetrical chain arrangement, called asymmetrical 3-CRU TPM. This paper focuses on discussing the differences between the two 3-CRU TPMs in kinematics, workspace and performance. This study provides insights into parallel manipulators with identical kinematics arranged differently.", "title": "" }, { "docid": "643b411e7dd4da05d16f8997d7735161", "text": "The concept of critical thinking has been featured in nursing literature for the past 20 years. It has been described but not defined by both the American Association of Colleges of Nursing and the National League for Nursing, although their corresponding accreditation bodies require that critical thinking be included in nursing curricula. In addition, there is no reliable or valid measurement tool for critical thinking ability in nursing. As a result, there is a lack of research support for the assumptions that critical thinking can be learned and that critical thinking ability improves clinical competence. Brookfield suggested that commitments should be made only after a period of critically reflective analysis, during which the congruence between perceptions and reality are examined. In an evidence-based practice profession, we, as nurse educators, need to ask ourselves how we can defend our assumptions that critical thinking can be learned and that critical thinking improves the quality of nursing practice, especially when there is virtually no consensus on a definition.", "title": "" }, { "docid": "5679745eba6c1458c8d6eb5d96f1af18", "text": "Degeneracy study of the forward kinematics of planar 3-RPR parallel manipulators This paper investigates two situations in which the forward kinematics of planar 3-RPR parallel manipulators degenerates. These situations have not been addressed before. The first degeneracy arises when the three input joint variables ρ 1 , ρ 2 and ρ 3 satisfy a certain relationship. This degeneracy yields a double root of the characteristic polynomial in tan(/ 2) t ϕ = , which could be erroneously interpreted as two coalesce assembly modes. But, unlike what arises in non-degenerate cases, this double root yields two sets of solutions for the position coordinates (x, y) of the platform. 
In the second situation, we show that the forward kinematics degenerates over the whole joint space if the base and platform triangles are congruent and the platform triangle is rotated by 180 deg about one of its sides. For these \"degenerate\" manipulators, which are defined here for the first time, the forward kinematics is reduced to the solution of a 3rd-degree polynomial and a quadratic in sequence. Such manipulators constitute, in turn, a new family of analytic planar manipulators that would be more suitable for industrial applications. 1 Introduction Solving the forward kinematic problem of a parallel manipulator often leads to complex equations and non-analytic solutions, even when considering planar 3-DOF parallel manipulators [1]. For these planar manipulators, Hunt showed that the forward kinematics admits at most 6 solutions [2], and several authors [3, 4] have shown independently that their forward kinematics can be reduced to the solution of a characteristic polynomial of degree 6. In [3], a set of two linear equations in the position coordinates (x, y) of the moving platform is first established, which makes it possible to write x and y as functions of the sine and cosine of the orientation angle ϕ of the moving platform. Substituting these expressions of x and y into one of the constraint equations of the manipulator and using the tan-half-angle substitution leads to a 6th-degree polynomial in t = tan(ϕ/2). Conditions under which the degree of this characteristic polynomial decreases were investigated in [5, 6]. Four distinct cases were found, namely, (i) manipulators for which two of the joints coincide, (ii) manipulators with similar aligned platforms, (iii) manipulators with nonsimilar aligned platforms, and (iv) manipulators with similar triangular platforms. For cases (i), (ii) and …", "title": "" } ]
Our initial simulation-based exploration shows that such a system with a persistent memory can improve energy efficiency and performance by eliminating the instructions and data movement traditionally used to perform I/O operations.", "title": "" }, { "docid": "6b6805fa87d31f374a1db8da8acc2163", "text": "BACKGROUND\nWhile Web-based interventions can be efficacious, engaging a target population's attention remains challenging. We argue that strategies to draw such a population's attention should be tailored to meet its needs. Increasing user engagement in online suicide intervention development requires feedback from this group to prevent people who have suicide ideation from seeking treatment.\n\n\nOBJECTIVE\nThe goal of this study was to solicit feedback on the acceptability of the content of messaging from social media users with suicide ideation. To overcome the common concern of lack of engagement in online interventions and to ensure effective learning from the message, this research employs a customized design of both content and length of the message.\n\n\nMETHODS\nIn study 1, 17 participants suffering from suicide ideation were recruited. The first (n=8) group conversed with a professional suicide intervention doctor about its attitudes and suggestions for a direct message intervention. To ensure the reliability and consistency of the result, an identical interview was conducted for the second group (n=9). Based on the collected data, questionnaires about this intervention were formed. Study 2 recruited 4222 microblog users with suicide ideation via the Internet.\n\n\nRESULTS\nThe results of the group interviews in study 1 yielded little difference regarding the interview results; this difference may relate to the 2 groups' varied perceptions of direct message design. However, most participants reported that they would be most drawn to an intervention where they knew that the account was reliable. Out of 4222 microblog users, we received responses from 725 with completed questionnaires; 78.62% (570/725) participants were not opposed to online suicide intervention and they valued the link for extra suicide intervention information as long as the account appeared to be trustworthy. Their attitudes toward the intervention and the account were similar to those from study 1, and 3 important elements were found pertaining to the direct message: reliability of account name, brevity of the message, and details of the phone numbers of psychological intervention centers and psychological assessment.\n\n\nCONCLUSIONS\nThis paper proposed strategies for engaging target populations in online suicide interventions.", "title": "" }, { "docid": "a16d3b1514a17e05c0a6fd375cd30f01", "text": "Motivated by the detection of prohibited objects in carry-on luggage as a part of avionic security screening, we develop a CNN-based object detection approach for multi-view X-ray image data. Our contributions are two-fold. First, we introduce a novel multi-view pooling layer to perform a 3D aggregation of 2D CNN-features extracted from each view. To that end, our pooling layer exploits the known geometry of the imaging system to ensure geometric consistency of the feature aggregation. Second, we introduce an end-to-end trainable multi-view detection pipeline based on Faster R-CNN, which derives the region proposals and performs the final classification in 3D using these aggregated multi-view features. 
Our approach shows significant accuracy gains compared to single-view detection while even being more efficient than performing single-view detection in each view.", "title": "" }, { "docid": "6a2380bdabdbe25d8c335ca077790bf1", "text": "Current generation electronic health records suffer a number of problems that make them inefficient and associated with poor clinical satisfaction. Digital scribes or intelligent documentation support systems, take advantage of advances in speech recognition, natural language processing and artificial intelligence, to automate the clinical documentation task currently conducted by humans. Whilst in their infancy, digital scribes are likely to evolve through three broad stages. Human led systems task clinicians with creating documentation, but provide tools to make the task simpler and more effective, for example with dictation support, semantic checking and templates. Mixed-initiative systems are delegated part of the documentation task, converting the conversations in a clinical encounter into summaries suitable for the electronic record. Computer-led systems are delegated full control of documentation and only request human interaction when exceptions are encountered. Intelligent clinical environments permit such augmented clinical encounters to occur in a fully digitised space where the environment becomes the computer. Data from clinical instruments can be automatically transmitted, interpreted using AI and entered directly into the record. Digital scribes raise many issues for clinical practice, including new patient safety risks. Automation bias may see clinicians automatically accept scribe documents without checking. The electronic record also shifts from a human created summary of events to potentially a full audio, video and sensor record of the clinical encounter. Digital scribes promisingly offer a gateway into the clinical workflow for more advanced support for diagnostic, prognostic and therapeutic tasks.", "title": "" }, { "docid": "40e8c13e9f8c8effcdefbd42a0b4e729", "text": "Time-tiling is necessary for the efficient execution of iterative stencil computations. Classical hyper-rectangular tiles cannot be used due to the combination of backward and forward dependences along space dimensions. Existing techniques trade temporal data reuse for inefficiencies in other areas, such as load imbalance, redundant computations, or increased control flow overhead, therefore making it challenging for use with GPUs.\n We propose a time-tiling method for iterative stencil computations on GPUs. Our method does not involve redundant computations. It favors coalesced global-memory accesses, data reuse in local/shared-memory or cache, avoidance of thread divergence, and concurrency, combining hexagonal tile shapes along the time and one spatial dimension with classical tiling along the other spatial dimensions. Hexagonal tiles expose multi-level parallelism as well as data reuse. Experimental results demonstrate significant performance improvements over existing stencil compilers.", "title": "" }, { "docid": "65a8c1faa262cd428045854ffcae3fae", "text": "Extracting named entities in text and linking extracted names to a given knowledge base are fundamental tasks in applications for text understanding. Existing systems typically run a named entity recognition (NER) model to extract entity names first, then run an entity linking model to link extracted names to a knowledge base. 
NER and linking models are usually trained separately, and the mutual dependency between the two tasks is ignored. We propose JERL, Joint Entity Recognition and Linking, to jointly model NER and linking tasks and capture the mutual dependency between them. It allows the information from each task to improve the performance of the other. To the best of our knowledge, JERL is the first model to jointly optimize NER and linking tasks together completely. In experiments on the CoNLL’03/AIDA data set, JERL outperforms state-of-art NER and linking systems, and we find improvements of 0.4% absolute F1 for NER on CoNLL’03, and 0.36% absolute precision@1 for linking on AIDA.", "title": "" }, { "docid": "878b5ea8bce77b0bcc07eb9cc5ee312f", "text": "This study aims to facilitate communication of deaf and dumb people by means of a data glove. The Turkish Sign Language translator glove is designed as a portable, low-cost and user-friendly system. The glove is equipped with flexible sensors to detect finger movements and a gyroscope to detect hand motion. The data from the sensors is analyzed with a microcontroller and 18 letters of the Turkish alphabet are successfully obtained. The Turkish Sign Language requires the use of both hands, however, this work aims to detect the entire alphabet with a single glove.", "title": "" }, { "docid": "dffee91cca8a8f2cf95e30d84fc104fa", "text": "It is possible to associate to a hybrid system a single topological space its underlying topological space. Simultaneously, every hybrid system has a graph as its indexing object its underlying graph. Here we discuss the relationship between the underlying topological space of a hybrid system, its underlying graph and Zeno behavior. When each domain is contractible and the reset maps are homotopic to the identity map, the homology of the underlying topological space is isomorphic to the homology of the underlying graph; the nonexistence of Zeno is implied when the first homology is trivial. Moreover, the first homology is trivial when the null space of the incidence matrix is trivial. The result is an easy way to verify the nonexistence of Zeno behavior.", "title": "" }, { "docid": "1dc8b67323637afe08e7004d462bb793", "text": "With the WEBSOM method a textual document collection may be organized onto a graphical map display that provides an overview of the collection and facilitates interactive browsing. Interesting documents can be located on the map using a content-directed search. Each document is encoded as a histogram of word categories which are formed by the self-organizing map (SOM) algorithm based on the similarities in the contexts of the words. The encoded documents are organized on another self-organizing map, a document map, on which nearby locations contain similar documents. Special consideration is given to the computation of very large document maps which is possible with general-purpose computers if the dimensionality of the word category histograms is first reduced with a random mapping method and if computationally efficient algorithms are used in computing the SOMs. ( 1998 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "28f106c6d6458f619cdc89967d5648cd", "text": "Term graphs constructed from document collections as well as external resources, such as encyclopedias (DBpedia) and knowledge bases (Freebase and ConceptNet), have been individually shown to be effective sources of semantically related terms for query expansion, particularly in case of difficult queries. 
However, it is not known how they compare with each other in terms of retrieval effectiveness. In this work, we use standard TREC collections to empirically compare the retrieval effectiveness of these types of term graphs for regular and difficult queries. Our results indicate that the term association graphs constructed from document collections using information theoretic measures are nearly as effective as knowledge graphs for Web collections, while the term graphs derived from DBpedia, Freebase and ConceptNet are more effective than term association graphs for newswire collections. We also found out that the term graphs derived from ConceptNet generally outperformed the term graphs derived from DBpedia and Freebase.", "title": "" }, { "docid": "c9c5c8441ef15c512afbe4e6079b4bd0", "text": "Health insurance fraud increases the disorganization and unfairness in our society. Health care fraud leads to substantial losses of money and very costly to health care insurance system. It is horrible because the percentage of health insurance fraud keeps increasing every year in many countries. To address this widespread problem, effective techniques are in need to detect fraudulent claims in health insurance sector. The application of data mining is specifically relevant and it has been successfully applied in medical needs for its reliable precision accuracy and rapid beneficial results. This paper aims to provide a comprehensive survey of the statistical data mining methods applied to detect fraud in health insurance sector.", "title": "" }, { "docid": "e4d734400d4771ec3b8f11cbbeaa1208", "text": "For youth to benefit from many of the developmental opportunities provided by organized programs, they need to not only attend but become psychologically engaged in program activities. This research was aimed at formulating empirically based grounded theory on the processes through which this engagement develops. Longitudinal interviews were conducted with 100 ethnically diverse youth (ages 14–21) in 10 urban and rural arts and leadership programs. Qualitative analysis focused on narrative accounts from the 44 youth who reported experiencing a positive turning point in their motivation or engagement. For 38 of these youth, this change process involved forming a personal connection. Similar to processes suggested by self-determination theory (Ryan & Deci, 2000), forming a personal connection involved youth's progressive integration of personal goals with the goals of program activities. Youth reported developing a connection to 3 personal goals that linked the self with the activity: learning for the future, developing competence, and pursuing a purpose. The role of purpose for many youth suggests that motivational change can be driven by goals that transcend self-needs. These findings suggest that youth need not enter programs intrinsically engaged--motivation can be fostered--and that programs should be creative in helping youth explore ways to form authentic connections to program activities.", "title": "" }, { "docid": "9327a13308cd713bcfb3b4717eaafef0", "text": "A review of both laboratory and field studies on the effects of setting goals when performing a task found that in 90% of the studies, specific and challenging goals lead to higher performance than easy goals, \"do your best\" goals, or no goals. Goals affect performance by directing attention, mobilizing effort, increasing persistence, and motivating strategy development. 
Goal setting is most likely to improve task performance when the goals are specific and sufficiently challenging, the subjects have sufficient ability (and ability differences are controlled), feedback is provided to show progress in relation to the goal, rewards such as money are given for goal attainment, the experimenter or manager is supportive, and assigned goals are accepted by the individual. No reliable individual differences have emerged in goal-setting studies, probably because the goals were typically assigned rather than self-set. Need for achievement and self-esteem may be the most promising individual difference variables.", "title": "" }, { "docid": "d21476e4bcdc7b9028369db5c4d0b6d4", "text": "We show here how the use of genetic programming in combination of model checking provides a powerful way to synthesize programs. Whereas classical algorithmic synthesis provides alarming high complexity and undecidability results, the genetic approach provides a surprisingly successful heuristics. We describe several versions of a method for synthesizing sequential and concurrent systems. We show several examples where we used our approach to synthesize, improve and correct code.", "title": "" }, { "docid": "21df2b20c9ecd6831788e00970b3ca79", "text": "Enterprises today face several challenges when hosting line-of-business applications in the cloud. Central to many of these challenges is the limited support for control over cloud network functions, such as, the ability to ensure security, performance guarantees or isolation, and to flexibly interpose middleboxes in application deployments. In this paper, we present the design and implementation of a novel cloud networking system called CloudNaaS. Customers can leverage CloudNaaS to deploy applications augmented with a rich and extensible set of network functions such as virtual network isolation, custom addressing, service differentiation, and flexible interposition of various middleboxes. CloudNaaS primitives are directly implemented within the cloud infrastructure itself using high-speed programmable network elements, making CloudNaaS highly efficient. We evaluate an OpenFlow-based prototype of CloudNaaS and find that it can be used to instantiate a variety of network functions in the cloud, and that its performance is robust even in the face of large numbers of provisioned services and link/device failures.", "title": "" }, { "docid": "72bc688726c5fc26b2dd7e63d3b28ac0", "text": "In Convolutional Neural Network (CNN)-based object detection methods, region proposal becomes a bottleneck when objects exhibit significant scale variation, occlusion or truncation. In addition, these methods mainly focus on 2D object detection and cannot estimate detailed properties of objects. In this paper, we propose subcategory-aware CNNs for object detection. We introduce a novel region proposal network that uses subcategory information to guide the proposal generating process, and a new detection network for joint detection and subcategory classification. By using subcategories related to object pose, we achieve state of-the-art performance on both detection and pose estimation on commonly used benchmarks.", "title": "" } ]
scidocsrr
65c91e32c8224ff4a88df830d3c014d1
A Greedy Part Assignment Algorithm for Real-Time Multi-person 2D Pose Estimation
[ { "docid": "7fa9bacbb6b08065ecfe0530f082a391", "text": "This paper considers the task of articulated human pose estimation of multiple people in real world images. We propose an approach that jointly solves the tasks of detection and pose estimation: it infers the number of persons in a scene, identifies occluded body parts, and disambiguates body parts between people in close proximity of each other. This joint formulation is in contrast to previous strategies, that address the problem by first detecting people and subsequently estimating their body pose. We propose a partitioning and labeling formulation of a set of body-part hypotheses generated with CNN-based part detectors. Our formulation, an instance of an integer linear program, implicitly performs non-maximum suppression on the set of part candidates and groups them to form configurations of body parts respecting geometric and appearance constraints. Experiments on four different datasets demonstrate state-of-the-art results for both single person and multi person pose estimation.", "title": "" }, { "docid": "5d79d7e9498d7d41fbc7c70d94e6a9ae", "text": "Reasoning about objects and their affordances is a fundamental problem for visual intelligence. Most of the previous work casts this problem as a classification task where separate classifiers are trained to label objects, recognize attributes, or assign affordances. In this work, we consider the problem of object affordance reasoning using a knowledge base representation. Diverse information of objects are first harvested from images and other meta-data sources. We then learn a knowledge base (KB) using a Markov Logic Network (MLN). Given the learned KB, we show that a diverse set of visual inference tasks can be done in this unified framework without training separate classifiers, including zeroshot affordance prediction and object recognition given human poses.", "title": "" }, { "docid": "f1deb9134639fb8407d27a350be5b154", "text": "This work introduces a novel Convolutional Network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a ‘stacked hourglass’ network based on the successive steps of pooling and upsampling that are done to produce a final set of estimates. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.", "title": "" }, { "docid": "963b6b2b337541fd741d31b2c8addc8d", "text": "I. Unary terms • Body part detection candidates • Capture distribution of scores over all part classes II. Pairwise terms • Capture part relationships within/across people – proximity: same body part class (c = c) – kinematic relations: different part classes (c!= c) III. Integer Linear Program (ILP) • Substitute zdd cc = xdc xd c ydd ′ to linearize objective • NP-Hard problem solved via branch-and-cut (1% gap) • Linear constraints on 0/1 labelings: plausible poses – uniqueness", "title": "" }, { "docid": "d2acc6d0d2b83392d98e88ec5c9d352a", "text": "Despite of the recent success of neural networks for human pose estimation, current approaches are limited to pose estimation of a single person and cannot handle humans in groups or crowds. 
In this work, we propose a method that estimates the poses of multiple persons in an image in which a person can be occluded by another person or might be truncated. To this end, we consider multiperson pose estimation as a joint-to-person association problem. We construct a fully connected graph from a set of detected joint candidates in an image and resolve the joint-to-person association and outlier detection using integer linear programming. Since solving joint-to-person association jointly for all persons in an image is an NP-hard problem and even approximations are expensive, we solve the problem locally for each person. On the challenging MPII Human Pose Dataset for multiple persons, our approach achieves the accuracy of a state-of-the-art method, but it is 6,000 to 19,000 times faster.", "title": "" } ]
[ { "docid": "c6baff0d600c76fac0be9a71b4238990", "text": "Nature has provided rich models for computational problem solving, including optimizations based on the swarm intelligence exhibited by fireflies, bats, and ants. These models can stimulate computer scientists to think nontraditionally in creating tools to address application design challenges.", "title": "" }, { "docid": "d49e6b7c6da44fae798e94dcb3a90c88", "text": "Given a photo collection of “unconstrained” face images of one individual captured under a variety of unknown pose, expression, and illumination conditions, this paper presents a method for reconstructing a 3D face surface model of the individual along with albedo information. Unlike prior work on face reconstruction that requires large photo collections, we formulate an approach to adapt to photo collections with a high diversity in both the number of images and the image quality. To achieve this, we incorporate prior knowledge about face shape by fitting a 3D morphable model to form a personalized template, following by using a novel photometric stereo formulation to complete the fine details, under a coarse-to-fine scheme. Our scheme incorporates a structural similarity-based local selection step to help identify a common expression for reconstruction while discarding occluded portions of faces. The evaluation of reconstruction performance is through a novel quality measure, in the absence of ground truth 3D scans. Superior large-scale experimental results are reported on synthetic, Internet, and personal photo collections.", "title": "" }, { "docid": "4ee5931bf57096913f7e13e5da0fbe7e", "text": "The design of an ultra wideband aperture-coupled vertical microstrip-microstrip transition is presented. The proposed transition exploits broadside coupling between exponentially tapered microstrip patches at the top and bottom layers via an exponentially tapered slot at the mid layer. The theoretical analysis indicates that the best performance concerning the insertion loss and the return loss over the maximum possible bandwidth can be achieved when the coupling factor is equal to 0.75 (or 2.5 dB). The calculated and simulated results show that the proposed transition has a linear phase performance, an important factor for distortionless pulse operation, with less than 0.4 dB insertion loss and more than 17 dB return loss across the frequency band 3.1 GHz to 10.6 GHz.", "title": "" }, { "docid": "fa34e68369a138cbaaf9ad085803e504", "text": "This paper proposes an optimal rotor design method of an interior permanent magnet synchronous motor (IPMSM) by using a permanent magnet (PM) shape. An IPMSM is a structure in which PMs are buried in an inner rotor. The torque, torque ripple, and safety factor of IPMSM can vary depending on the position of the inserted PMs. To determine the optimal design variables according to the placement of the inserted PMs, parameter analysis was performed. Therefore, a response surface methodology, which is one of the statistical analysis design methods, was used. Among many other response surface methodologies, Box-Behnken design is the most commonly used. For the purpose of this research, Box-Behnken design was used to find the design parameter that can achieve minimum experimental variables of objective function. 
This paper determines the insert position of the PM to obtain high-torque, low-torque ripple by using a finite-element-method, and this paper obtains an optimal design by using a mechanical stiffness method in which a safety factor is considered.", "title": "" }, { "docid": "2a30aa44df358be7bb27afd0014a07ff", "text": "The adoption of Smart Grid devices throughout utility networks will effect tremendous change in grid operations and usage of electricity over the next two decades. The changes in ways to control loads, coupled with increased penetration of renewable energy sources, offer a new set of challenges in balancing consumption and generation. Increased deployment of energy storage devices in the distribution grid will help make this process happen more effectively and improve system performance. This paper addresses the new types of storage being utilized for grid support and the ways they are integrated into the grid.", "title": "" }, { "docid": "311724eb9eb40e4775a43eabf7318136", "text": "Waveguide-to-microstrip transitions are indispensable in millimeter components to combine planar integrated circuits with waveguide transmission elements for system researches and applications. In this paper, a W band low-loss broadband waveguide-to-microstrip probe transition is designed, optimized, fabricated and measured, which turns out to have very low insertion loss from 85GHz to 105GHz. This probe transition is very suitable for MIC and MMIC module applications around 94GHz.", "title": "" }, { "docid": "d67d126d40af2f23b001e2cbf2a2df30", "text": "Our method for multi-lingual geoparsing uses monolingual tools and resources along with machine translation and alignment to return location words in many languages. Not only does our method save the time and cost of developing geoparsers for each language separately, but also it allows the possibility of a wide range of language capabilities within a single interface. We evaluated our method in our LanguageBridge prototype on location named entities using newswire, broadcast news and telephone conversations in English, Arabic and Chinese data from the Linguistic Data Consortium (LDC). Our results for geoparsing Chinese and Arabic text using our multi-lingual geoparsing method are comparable to our results for geoparsing English text with our English tools. Furthermore, experiments using our machine translation approach results in accuracy comparable to results from the same data that was translated manually.", "title": "" }, { "docid": "bdbe1a235b13d897e167d2c7ce71d7d0", "text": "The transfer or share of knowledge between languages is a popular solution to resource scarcity in NLP. However, the effectiveness of cross-lingual transfer can be challenged by variation in syntactic structures. Frameworks such as Universal Dependencies (UD) are designed to be cross-lingually consistent, but even in carefully designed resources trees representing equivalent sentences may not always overlap. In this paper, we measure cross-lingual syntactic variation, or anisomorphism, in the UD treebank collection, considering both morphological and structural properties. We show that reducing the level of anisomorphism yields consistent gains in cross-lingual transfer tasks. We introduce a source language selection procedure that facilitates effective cross-lingual parser transfer, and propose a typologically driven method for syntactic tree processing which reduces anisomorphism. 
Our results show the effectiveness of this method for both machine translation and cross-lingual sentence similarity, demonstrating the importance of syntactic structure compatibility for boosting cross-lingual transfer in NLP.", "title": "" }, { "docid": "9095b7af97f9ff8a4258aa89b0ded6b6", "text": "Data augmentation is the process of generating samples by transforming training data, with the target of improving the accuracy and robustness of classifiers. In this paper, we propose a new automatic and adaptive algorithm for choosing the transformations of the samples used in data augmentation. Specifically, for each sample, our main idea is to seek a small transformation that yields maximal classification loss on the transformed sample. We employ a trust-region optimization strategy, which consists of solving a sequence of linear programs. Our data augmentation scheme is then integrated into a Stochastic Gradient Descent algorithm for training deep neural networks. We perform experiments on two datasets, and show that that the proposed scheme outperforms random data augmentation algorithms in terms of accuracy and robustness, while yielding comparable or superior results with respect to existing selective sampling approaches.", "title": "" }, { "docid": "603a4d4037ce9fc653d46473f9085d67", "text": "In different applications like Complex document image processing, Advertisement and Intelligent transportation logo recognition is an important issue. Logo Recognition is an essential sub process although there are many approaches to study logos in these fields. In this paper a robust method for recognition of a logo is proposed, which involves K-nearest neighbors distance classifier and Support Vector Machine classifier to evaluate the similarity between images under test and trained images. For test images eight set of logo image with a rotation angle of 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315° are considered. A Dual Tree Complex Wavelet Transform features were used for determining features. Final result is obtained by measuring the similarity obtained from the feature vectors of the trained image and image under test. Total of 31 classes of logo images of different organizations are considered for experimental results. An accuracy of 87.49% is obtained using KNN classifier and 92.33% from SVM classifier.", "title": "" }, { "docid": "e2280d602e8110dbaf512d6e187ecd9f", "text": "There are problems in the delimitation/identification of Plectranthus species and this investigation aims to contribute toward solving such problems through structural and histochemical study of the trichomes. Considering the importance of P. zuluensis as restricted to semi-coastal forests of Natal that possess only two fertile stamens not four as the other species of this genus. The objective of this work was to study in detail the distribution, morphology and histochemistry of the foliar trichomes of this species using light and electron microscopy. Distribution and morphology of two types of non-glandular, capitate and peltate glandular trichomes are described on both leaf sides. This study provides a description of the different secretion modes of glandular trichomes. Results of histochemical tests showed a positive reaction to terpenoids, lipids, polysaccharides and phenolics in the glandular trichomes. 
We demonstrated that the presence, types and structure of glandular and non-glandular trichomes are important systematic criteria for the species delimitation in the genus.", "title": "" }, { "docid": "54bae3ac2087dbc7dcba553ce9f2ef2e", "text": "The landscape of computing capabilities within the home has seen a recent shift from persistent desktops to mobile platforms, which has led to the use of the cloud as the primary computing platform implemented by developers today. Cloud computing platforms, such as Amazon EC2 and Google App Engine, are popular for many reasons including their reliable, always on, and robust nature. The capabilities that centralized computing platforms provide are inherent to their implementation, and unmatched by previous platforms (e.g., Desktop applications). Thus, third-party developers have come to rely on cloud computing platforms to provide high quality services to their end-users.", "title": "" }, { "docid": "8d720b33cf0d9b2fd71d486323abbfe5", "text": "In the slot-filling paradigm, where a user can refer back to slots in the context during a conversation, the goal of the contextual understanding system is to resolve the referring expressions to the appropriate slots in the context. In large-scale multi-domain systems, this presents two challenges scaling to a very large and potentially unbounded set of slot values, and dealing with diverse schemas. We present a neural network architecture that addresses the slot value scalability challenge by reformulating the contextual interpretation as a decision to carryover a slot from a set of possible candidates. To deal with heterogenous schemas, we introduce a simple data-driven method for transforming the candidate slots. Our experiments show that our approach can scale to multiple domains and provides competitive results over a strong baseline.", "title": "" }, { "docid": "be30cf5e84895ff8750c392cd55071e2", "text": "In the past 50 years, Computational Wind Engineering (CWE) has undergone a successful transition from an emerging field into an increasingly established field in wind engineering research, practice and education. This paper provides a perspective on the past, present and future of CWE. It addresses three key illustrations of the success of CWE: (1) the establishment of CWE as an individual research and application area in wind engineering with its own successful conference series under the umbrella of the International Association of Wind Engineering (IAWE); (2) the increasing range of topics covered in CWE; and (3) the history of overview and review papers in CWE. The paper also outlines some of the earliest achievements in CWE and the resulting development of best practice guidelines. It provides some views on the complementary relationship between reduced-scale wind-tunnel testing and CFD. It re-iterates some important quotes made by CWE and/or CFD researchers in the past, many of which are still equally valid today and which are provided without additional comments, to let the quotes speak for themselves. Next, as application examples to the foregoing sections, the paper provides a more detailed view on CFD simulation of pedestrian-level wind conditions around buildings, CFD simulation of natural ventilation of buildings and CFD simulation of wind-driven rain on building facades. 
Finally, a non-exhaustive perspective on the future of CWE is provided.", "title": "" }, { "docid": "2213fc1c67a1bf2b2d2a2c1110a30077", "text": "One expects there to be a conceptual analogy between an innovation ecosystem and the biological ecosystems observed in nature. The biological ecosystem is a system that includes all living organisms (biotic factors) in an area as well as its physical environments (abiotic factors) functioning together as a unit. It is characterized by one or more equilibrium states, where a relatively stable set of conditions exist to maintain a population or nutrient exchange at desirable levels. The ecosystem has certain functional characteristics that specifically regulate change or maintain the stability of a desired equilibrium state.", "title": "" }, { "docid": "7639b0eb658d91ee8675d2ee26ef548d", "text": "It is predicted that mobile applications will become an integral part of our lives at the personal and professional level. Mobile payment (MP) is a promising and exciting domain that has been rapidly developing recently, and although it can still be considered in its infancy, great hope is put on it. If MP efforts succeed, they will boost both e- and m-commerce and may be the killer service in 2.5 G and beyond future ambient intelligence infrastructures. This article introduces the mobile payment arena and describes some of the most important mobile payment procedures and consortia that are relevant to the development of mobile payment services. The aim of this work is to introduce the reader to mobile payments, present current concepts and the motivation behind it, and provide an overview of past and current efforts as well as standardization initiatives that guide this rapidly evolving domain.", "title": "" }, { "docid": "7d53708307c7f5683beb2dd734fe494c", "text": "This paper presents results from a study examining the link between the functionality and the comfort of wearable computers. We gave participants two different devices to wear and varied our descriptions of device functionality. Significant differences in desirability and comfort ratings were found between functional conditions, indicating that functionality is a factor of comfort. Differences were also found between device locations (upper arm and upper/mid back) and participant gender.", "title": "" }, { "docid": "3b3dcadb00db43fb38cebe0c5105c25b", "text": "This paper explores the capabilities of convolutional neural networks to deal with a task that is easily manageable for humans: perceiving 3D pose of a human body from varying angles. However, in our approach, we are restricted to using a monocular vision system. For this purpose, we apply the convolutional neural networks approach on RGB videos and extend it to three dimensional convolutions. This is done via encoding the time dimension in videos as the 3rd dimension in convolutional space, and directly regressing to human body joint positions in 3D coordinate space. This research shows the ability of such a network to achieve state-of-theart performance on the selected Human3.6M dataset, thus demonstrating the possibility of successfully representing a temporal data with an additional dimension in the convolutional operation.", "title": "" }, { "docid": "46ff9e58e6a67d46934161aaf5f5c6b8", "text": "Using shared disk architecture for relational cloud DBMSs enhances their performance and throughput and increases the scalability. 
In such architectures, transactions are not distributed between database instances and data are not migrated, whereas any database instance can read and access any database object. Locking technology controls concurrent transactions and ensures their consistency, especially in a shared disk architecture; however, using traditional granularity database locks for a cloud database can cause numerous problems. This paper proposes an optimistic concurrency control algorithm that uses soft locks and minimizes the number of accessed database instances for validating a transaction. It creates a lock manager for all database objects and distributes it over the database instances so that it does not have to validate the transaction, neither with a single database instance if the object is owned by only one database instance, nor with all database instances if it is replicated on all of them. The proposed algorithm is evaluated against other cloud concurrency control algorithms and the results confirm its effectiveness.", "title": "" } ]
scidocsrr
543e46f67707f43a77fd0b8f93d7fb71
Improved Particle Swarm Optimization Based K-Means Clustering
[ { "docid": "0160ef86512929e91fc3e5bb3902514e", "text": "In this paper we propose a clustering method based on combination of the particle swarm optimization (PSO) and the k-mean algorithm. PSO algorithm was showed to successfully converge during the initial stages of a global search, but around global optimum, the search process will become very slow. On the contrary, k-means algorithm can achieve faster convergence to optimum solution. At the same time, the convergent accuracy for k-means can be higher than PSO. So in this paper, a hybrid algorithm combining particle swarm optimization (PSO) algorithm with k-means algorithm is proposed we refer to it as PSO-KM algorithm. The algorithm aims to group a given set of data into a user specified number of clusters. We evaluate the performance of the proposed algorithm using five datasets. The algorithm performance is compared to K-means and PSO clustering.", "title": "" } ]
[ { "docid": "5e2b8d3ed227b71869550d739c61a297", "text": "Dairy cattle experience a remarkable shift in metabolism after calving, after which milk production typically increases so rapidly that feed intake alone cannot meet energy requirements (Bauman and Currie, 1980; Baird, 1982). Cows with a poor adaptive response to negative energy balance may develop hyperketonemia (ketosis) in early lactation. Cows that develop ketosis in early lactation lose milk yield and are at higher risk for other postpartum diseases and early removal from the herd.", "title": "" }, { "docid": "934c8f1bbffe43da1482af157754e2b8", "text": "We present the 2016 ChaLearn Looking at People and Faces of the World Challenge and Workshop, which ran three competitions on the common theme of face analysis from still images. The first one, Looking at People, addressed age estimation, while the second and third competitions, Faces of the World, addressed accessory classification and smile and gender classification, respectively. We present two crowd-sourcing methodologies used to collect manual annotations. A custom-build application was used to collect and label data about the apparent age of people (as opposed to the real age). For the Faces of the World data, the citizen-science Zooniverse platform was used. This paper summarizes the three challenges and the data used, as well as the results achieved by the participants of the competitions. Details of the ChaLearn LAP FotW competitions can be found at http://gesture.chalearn.org.", "title": "" }, { "docid": "125297b97375b979b2ed0e89ee5dba25", "text": "Trust and reputation are central to effective interactions in open multi-agent systems (MAS) in which agents, that are owned by a variety of stakeholders, continuously enter and leave the system. This openness means existing trust and reputation models cannot readily be used since their performance suffers when there are various (unforseen) changes in the environment. To this end, this paper presents FIRE, a trust and reputation model that integrates a number of information sources to produce a comprehensive assessment of an agent’s likely performance in open systems. Specifically, FIRE incorporates interaction trust, role-based trust, witness reputation, and certified reputation to provide trust metrics in most circumstances. FIRE is empirically evaluated and is shown to help agents gain better utility (by effectively selecting appropriate interaction partners) than our benchmarks in a variety of agent populations. It is also shown that FIRE is able to effectively respond to changes that occur in an agent’s environment.", "title": "" }, { "docid": "a9cafa9b8788e3fa8bcdec1a7be49582", "text": "Ensuring the safety of fully autonomous vehicles requires a multi-disciplinary approach across all the levels of functional hierarchy, from hardware fault tolerance, to resilient machine learning, to cooperating with humans driving conventional vehicles, to validating systems for operation in highly unstructured environments, to appropriate regulatory approaches. Significant open technical challenges include validating inductive learning in the face of novel environmental inputs and achieving the very high levels of dependability required for full-scale fleet deployment. 
However, the biggest challenge may be in creating an end-to-end design and deployment process that integrates the safety concerns of a myriad of technical specialties into a unified approach.", "title": "" }, { "docid": "799047d8c129b5c67bc838a53e2ac7e7", "text": "This paper proposes a pre-regulator boost converter applied to a dc/dc converter in order to provide power factor correction. The combination of both stages results in a symmetrical switched power supply, which is composed of two symmetrical stages that operate at 100 kHz, as the individual output voltages are equal to +200 V/sub dc/ and -200 V/sub dc/, the total output voltage is 400 Vdc and the total output power is 500 W. The power factor correction IC UC3854 is employed in the control strategy of the boost stage.", "title": "" }, { "docid": "c319fb209f41ba72770e243b8aaf2f15", "text": "The existing studies suggest that if technology is interwoven comprehensively into pedagogy, it can act as a powerful tool for effective learnin g of the elementary students. This study conducted the meta-analysis by integrating the quan titative findings of 122 peer-reviewed academic papers that measured the impact of technol ogy on learning effectiveness of elementary students. The results confirmed that the technology has a medium effect on learning effectiveness of elementary students. Furt her, this study analysed the effect sizes of moderating variables such as domain subject, applic ation type, intervention duration, and learning environment. Finally, the impact of techno logy at different levels of moderating variables has been discussed and the implications f or theory and practice are provided.", "title": "" }, { "docid": "557b718f65e68f3571302e955ddb74d7", "text": "Synthetic aperture radar (SAR) has been an unparalleled tool in cloudy and rainy regions as it allows observations throughout the year because of its all-weather, all-day operation capability. In this paper, the influence of Wenchuan Earthquake on the Sichuan Giant Panda habitats was evaluated for the first time using SAR interferometry and combining data from C-band Envisat ASAR and L-band ALOS PALSAR data. Coherence analysis based on the zero-point shifting indicated that the deforestation process was significant, particularly in habitats along the Min River approaching the epicenter after the natural disaster, and as interpreted by the vegetation deterioration from landslides, avalanches and debris flows. Experiments demonstrated that C-band Envisat ASAR data were sensitive to vegetation, resulting in an underestimation of deforestation; in contrast, L-band PALSAR data were capable of evaluating the deforestation process owing to a better penetration and the significant coherence gain on damaged forest areas. The percentage of damaged forest estimated by PALSAR decreased from 20.66% to 17.34% during 2009–2010, implying an approximate 3% recovery rate of forests in the earthquake OPEN ACCESS Remote Sens. 2014, 6 6284 impacted areas. This study proves that long-wavelength SAR interferometry is promising for rapid assessment of disaster-induced deforestation, particularly in regions where the optical acquisition is constrained.", "title": "" }, { "docid": "38c1f6741d99ffc8ab2ab17b5b91e477", "text": "This paper reviews recent advances in radar sensor design for low-power healthcare, indoor real-time positioning and other applications of IoT. 
Various radar front-end architectures and digital processing methods are proposed to improve the detection performance including detection accuracy, detection range and power consumption. While many of the reported designs were prototypes for concept verification, several integrated radar systems have been demonstrated with reliable measured results with demo systems. A performance comparison of latest radar chip designs has been provided to show their features of different architectures. With great development of IoT, short-range low-power radar sensors for healthcare and indoor positioning applications will attract more and more research interests in the near future.", "title": "" }, { "docid": "cdfcc894d32c9a6a3a076d3e978d400f", "text": "The earliest Convolution Neural Network (CNN) model is leNet-5 model proposed by LeCun in 1998. However, in the next few years, the development of CNN had been almost stopped until the article ‘Reducing the dimensionality of data with neural networks’ presented by Hinton in 2006. CNN started entering a period of rapid development. AlexNet won the championship in the image classification contest of ImageNet with the huge superiority of 11% beyond the second place in 2012, and the proposal of DeepFace and DeepID, as two relatively successful models for high-performance face recognition and authentication in 2014, marking the important position of CNN. Convolution Neural Network (CNN) is an efficient recognition algorithm widely used in image recognition and other fields in recent years. That the core features of CNN include local field, shared weights and pooling greatly reducing the parameters, as well as simple structure, make CNN become an academic focus. In this paper, the Convolution Neural Network’s history and structure are summarized. And then several areas of Convolutional Neural Network applications are enumerated. At last, some new insights for the future research of CNN are presented.", "title": "" }, { "docid": "1c56fb7d4c5998c6bfab1cb35fe21681", "text": "With the growth of digital music, the development of music recommendation is helpful for users. The existing recommendation approaches are based on the users' preference on music. However, sometimes, recommending music according to the emotion is needed. In this paper, we propose a novel model for emotion-based music recommendation, which is based on the association discovery from film music. We investigated the music feature extraction and modified the affinity graph for association discovery between emotions and music features. Experimental result shows that the proposed approach achieves 85% accuracy in average.", "title": "" }, { "docid": "edf8d1bb84c0845dddad417a939e343b", "text": "Suicides committed by intraorally placed firecrackers are rare events. Given to the use of more powerful components such as flash powder recently, some firecrackers may cause massive life-threatening injuries in case of such misuse. Innocuous black powder firecrackers are subject to national explosives legislation and only have the potential to cause harmless injuries restricted to the soft tissue. We here report two cases of suicide committed by an intraoral placement of firecrackers, resulting in similar patterns of skull injury. As it was first unknown whether black powder firecrackers can potentially cause serious skull injury, we compared the potential of destruction using black powder and flash powder firecrackers in a standardized skull simulant model (Synbone, Malans, Switzerland). 
This was the first experiment to date simulating the impacts resulting from an intraoral burst in a skull simulant model. The intraoral burst of a “D-Böller” (an example of one of the most powerful black powder firecrackers in Germany) did not lead to any injuries of the osseous skull. In contrast, the “La Bomba” (an example of the weakest known flash powder firecrackers) caused complex fractures of both the viscero- and neurocranium. The results obtained from this experimental study indicate that black powder firecrackers are less likely to cause severe injuries as a consequence of intraoral explosions, whereas flash powder-based crackers may lead to massive life-threatening craniofacial destructions and potentially death.", "title": "" }, { "docid": "2ee8910adbdff2111d64b9a06242050f", "text": "Current technologies to allow continuous monitoring of vital signs in pre-term infants in the hospital require adhesive electrodes or sensors to be in direct contact with the patient. These can cause stress, pain, and also damage the fragile skin of the infants. It has been established previously that the colour and volume changes in superficial blood vessels during the cardiac cycle can be measured using a digital video camera and ambient light, making it possible to obtain estimates of heart rate or breathing rate. Most of the papers in the literature on non-contact vital sign monitoring report results on adult healthy human volunteers in controlled environments for short periods of time. The authors' current clinical study involves the continuous monitoring of pre-term infants, for at least four consecutive days each, in the high-dependency care area of the Neonatal Intensive Care Unit (NICU) at the John Radcliffe Hospital in Oxford. The authors have further developed their video-based, non-contact monitoring methods to obtain continuous estimates of heart rate, respiratory rate and oxygen saturation for infants nursed in incubators. In this Letter, it is shown that continuous estimates of these three parameters can be computed with an accuracy which is clinically useful. During stable sections with minimal infant motion, the mean absolute error between the camera-derived estimates of heart rate and the reference value derived from the ECG is similar to the mean absolute error between the ECG-derived value and the heart rate value from a pulse oximeter. Continuous non-contact vital sign monitoring in the NICU using ambient light is feasible, and the authors have shown that clinically important events such as a bradycardia accompanied by a major desaturation can be identified with their algorithms for processing the video signal.", "title": "" }, { "docid": "04bc7757006176cd1307874d19b11dc6", "text": "AIMS\nCompare vaginal resting pressure (VRP), pelvic floor muscle (PFM) strength, and endurance in women with and without diastasis recti abdominis at gestational week 21 and at 6 weeks, 6 months, and 12 months postpartum. Furthermore, to compare prevalence of urinary incontinence (UI) and pelvic organ prolapse (POP) in the two groups at the same assessment points.\n\n\nMETHODS\nThis is a prospective cohort study following 300 nulliparous pregnant women giving birth at a public university hospital. VRP, PFM strength, and endurance were measured with vaginal manometry. ICIQ-UI-SF questionnaire and POP-Q were used to assess UI and POP. 
Diastasis recti abdominis was diagnosed with palpation of  ≥2 fingerbreadths 4.5 cm above, at, or 4.5 cm below the umbilicus.\n\n\nRESULTS\nAt gestational week 21 women with diastasis recti abdominis had statistically significant greater VRP (mean difference 3.06 cm H2 O [95%CI: 0.70; 5.42]), PFM strength (mean difference 5.09 cm H2 O [95%CI: 0.76; 9.42]) and PFM muscle endurance (mean difference 47.08 cm H2 O sec [95%CI: 15.18; 78.99]) than women with no diastasis. There were no statistically significant differences between women with and without diastasis in any PFM variables at 6 weeks, 6 months, and 12 months postpartum. No significant difference was found in prevalence of UI in women with and without diastasis at any assessment points. Six weeks postpartum 15.9% of women without diastasis had POP versus 4.1% in the group with diastasis (P = 0.001).\n\n\nCONCLUSIONS\nWomen with diastasis were not more likely to have weaker PFM or more UI or POP. Neurourol. Urodynam. 36:716-721, 2017. © 2016 Wiley Periodicals, Inc.", "title": "" }, { "docid": "2e6b034cbb73d91b70e3574a06140621", "text": "ETHNOPHARMACOLOGICAL RELEVANCE\nBitter melon (Momordica charantia L.) has been widely used as an traditional medicine treatment for diabetic patients in Asia. In vitro and animal studies suggested its hypoglycemic activity, but limited human studies are available to support its use.\n\n\nAIM OF STUDY\nThis study was conducted to assess the efficacy and safety of three doses of bitter melon compared with metformin.\n\n\nMATERIALS AND METHODS\nThis is a 4-week, multicenter, randomized, double-blind, active-control trial. Patients were randomized into 4 groups to receive bitter melon 500 mg/day, 1,000 mg/day, and 2,000 mg/day or metformin 1,000 mg/day. All patients were followed for 4 weeks.\n\n\nRESULTS\nThere was a significant decline in fructosamine at week 4 of the metformin group (-16.8; 95% CI, -31.2, -2.4 μmol/L) and the bitter melon 2,000 mg/day group (-10.2; 95% CI, -19.1, -1.3 μmol/L). Bitter melon 500 and 1,000 mg/day did not significantly decrease fructosamine levels (-3.5; 95% CI -11.7, 4.6 and -10.3; 95% CI -22.7, 2.2 μmol/L, respectively).\n\n\nCONCLUSIONS\nBitter melon had a modest hypoglycemic effect and significantly reduced fructosamine levels from baseline among patients with type 2 diabetes who received 2,000 mg/day. However, the hypoglycemic effect of bitter melon was less than metformin 1,000 mg/day.", "title": "" }, { "docid": "a48622ff46323acf1c40345d3e61b636", "text": "In this paper we present a novel dataset for a critical aspect of autonomous driving, the joint attention that must occur between drivers and of pedestrians, cyclists or other drivers. This dataset is produced with the intention of demonstrating the behavioral variability of traffic participants. We also show how visual complexity of the behaviors and scene understanding is affected by various factors such as different weather conditions, geographical locations, traffic and demographics of the people involved. The ground truth data conveys information regarding the location of participants (bounding boxes), the physical conditions (e.g. lighting and speed) and the behavior of the parties involved.", "title": "" }, { "docid": "272be5fede7ede10ebfd368cabcd437b", "text": "Penetration testing is widely used to help ensure the security of web applications. Using penetration testing, testers discover vulnerabilities by simulating attacks on a target web application. 
To do this efficiently, testers rely on automated techniques that gather input vector information about the target web application and analyze the application’s responses to determine whether an attack was successful. Techniques for performing these steps are often incomplete, which can leave parts of the web application untested and vulnerabilities undiscovered. This paper proposes a new approach to penetration testing that addresses the limitations of current techniques. The approach incorporates two recently developed analysis techniques to improve input vector identification and detect when attacks have been successful against a web application. This paper compares the proposed approach against two popular penetration testing tools for a suite of web applications with known and unknown vulnerabilities. The evaluation results show that the proposed approach performs a more thorough penetration testing and leads to the discovery of more vulnerabilities than both the tools. Copyright q 2011 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "f1ebd840092228e48a3ab996287e7afd", "text": "Negative emotions are reliably associated with poorer health (e.g., Kiecolt-Glaser, McGuire, Robles, & Glaser, 2002), but only recently has research begun to acknowledge the important role of positive emotions for our physical health (Fredrickson, 2003). We examine the link between dispositional positive affect and one potential biological pathway between positive emotions and health-proinflammatory cytokines, specifically levels of interleukin-6 (IL-6). We hypothesized that greater trait positive affect would be associated with lower levels of IL-6 in a healthy sample. We found support for this hypothesis across two studies. We also explored the relationship between discrete positive emotions and IL-6 levels, finding that awe, measured in two different ways, was the strongest predictor of lower levels of proinflammatory cytokines. These effects held when controlling for relevant personality and health variables. This work suggests a potential biological pathway between positive emotions and health through proinflammatory cytokines.", "title": "" }, { "docid": "72c6a7a2d64c266a7555b373b21dcba0", "text": "Antivirus companies, mobile application marketplaces, and the security research community, employ techniques based on dynamic code analysis to detect and analyze mobile malware. In this paper, we present a broad range of anti-analysis techniques that malware can employ to evade dynamic analysis in emulated Android environments. Our detection heuristics span three different categories based on (i) static properties, (ii) dynamic sensor information, and (iii) VM-related intricacies of the Android Emulator. To assess the effectiveness of our techniques, we incorporated them in real malware samples and submitted them to publicly available Android dynamic analysis systems, with alarming results. We found all tools and services to be vulnerable to most of our evasion techniques. Even trivial techniques, such as checking the value of the IMEI, are enough to evade some of the existing dynamic analysis frameworks. We propose possible countermeasures to improve the resistance of current dynamic analysis tools against evasion attempts.", "title": "" }, { "docid": "0681860d1be33f7d50c19398ca786582", "text": "Online social networks are increasingly being recognized as an important source of information influencing the adoption and use of products and services. 
Viral marketing—the tactic of creating a process where interested people can market to each other—is therefore emerging as an important means to spread-the-word and stimulate the trial, adoption, and use of products and services. Consider the case of Hotmail, one of the earliest firms to tap the potential of viral marketing. Based predominantly on publicity from word-of-mouse [4], the Web-based email service provider garnered one million registered subscribers in its first six months, hit two million subscribers two months later, and passed the eleven million mark in eighteen months [7]. Wired magazine put this growth in perspective in its December 1998 issue: “The Hotmail user base grew faster than [that of ] any media company in history—faster than CNN, faster than AOL, even faster than Seinfeld’s audience. By mid-2000, Hotmail had over 66 million users with 270,000 new accounts being established each day.” While the potential of viral marketing to efficiently reach out to a broad set of potential users is attracting considerable attention, the value of this approach is also being questioned [5]. There needs to be a greater understanding of the contexts in which this strategy works and the characteristics of products and services for which it is most effective. This is particularly important because the inappropriate use of viral marketing can be counterproductive by creating unfavorable attitudes towards products. Work examining this phenomenon currently provides either descriptive accounts of particular initiatives [8] or advice based on anecdotal evidence [2]. What is missing is an analysis of viral marketing that highlights systematic patterns in the nature of knowledge-sharing and persuasion by influencers and responses by recipients in online social networks. To this end, we propose an organizing framework for viral marketing that draws on prior theory and highlights different behavioral mechanisms underlying knowledge-sharing, influence, and compliance in online social networks. Though the framework is descrip-", "title": "" }, { "docid": "ff8cc7166b887990daa6ef355695e54f", "text": "The knowledge-based theory of the firm suggests that knowledge is the organizational asset that enables sustainable competitive advantage in hypercompetitive environments. The emphasis on knowledge in today’s organizations is based on the assumption that barriers to the transfer and replication of knowledge endow it with strategic importance. Many organizations are developing information systems designed specifically to facilitate the sharing and integration of knowledge. Such systems are referred to as Knowledge Management System (KMS). Because KMS are just beginning to appear in organizations, little research and field data exists to guide the development and implementation of such systems or to guide expectations of the potential benefits of such systems. This study provides an analysis of current practices and outcomes of KMS and the nature of KMS as they are evolving in fifty organizations. The findings suggest that interest in KMS across a variety of industries is very high, the technological foundations are varied, and the major", "title": "" } ]
scidocsrr
ba1bcbdf8577733f2e4a213495da254f
Sentiment Analysis with Incremental Human-in-the-Loop Learning and Lexical Resource Customization
[ { "docid": "c3525081c0f4eec01069dd4bd5ef12ab", "text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.", "title": "" }, { "docid": "dadd12e17ce1772f48eaae29453bc610", "text": "Publications Learning Word Vectors for Sentiment Analysis. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. The 49 th Annual Meeting of the Association for Computational Linguistics (ACL 2011). Spectral Chinese Restaurant Processes: Nonparametric Clustering Based on Similarities. Richard Socher, Andrew Maas, and Christopher D. Manning. The 15 th International Conference on Artificial Intelligence and Statistics (AISTATS 2010). A Probabilistic Model for Semantic Word Vectors. Andrew L. Maas and Andrew Y. Ng. NIPS 2010 Workshop on Deep Learning and Unsupervised Feature Learning. One-Shot Learning with Bayesian Networks. Andrew L. Maas and Charles Kemp. Proceedings of the 31 st", "title": "" }, { "docid": "3c73a3a8783dcc20274ce36e60d6eb35", "text": "Recent years have witnessed the explosive growth of online social media. Weibo, a Twitter-like online social network in China, has attracted more than 300 million users in less than three years, with more than 1000 tweets generated in every second. These tweets not only convey the factual information, but also reflect the emotional states of the authors, which are very important for understanding user behaviors. However, a tweet in Weibo is extremely short and the words it contains evolve extraordinarily fast. Moreover, the Chinese corpus of sentiments is still very small, which prevents the conventional keyword-based methods from being used. In light of this, we build a system called MoodLens, which to our best knowledge is the first system for sentiment analysis of Chinese tweets in Weibo. In MoodLens, 95 emoticons are mapped into four categories of sentiments, i.e. angry, disgusting, joyful, and sad, which serve as the class labels of tweets. We then collect over 3.5 million labeled tweets as the corpus and train a fast Naive Bayes classifier, with an empirical precision of 64.3%. MoodLens also implements an incremental learning method to tackle the problem of the sentiment shift and the generation of new words. Using MoodLens for real-time tweets obtained from Weibo, several interesting temporal and spatial patterns are observed. Also, sentiment variations are well captured by MoodLens to effectively detect abnormal events in China. Finally, by using the highly efficient Naive Bayes classifier, MoodLens is capable of online real-time sentiment monitoring. The demo of MoodLens can be found at http://goo.gl/8DQ65.", "title": "" } ]
[ { "docid": "7247eb6b90d23e2421c0d2500359d247", "text": "The large-scale collection and exploitation of personal information to drive targeted online advertisements has raised privacy concerns. As a step towards understanding these concerns, we study the relationship between how much information is collected and how valuable it is for advertising. We use HTTP traces consisting of millions of users to aid our study and also present the first comparative study between aggregators. We develop a simple model that captures the various parameters of today's advertising revenues, whose values are estimated via the traces. Our results show that per aggregator revenue is skewed (5% accounting for 90% of revenues), while the contribution of users to advertising revenue is much less skewed (20% accounting for 80% of revenue). Google is dominant in terms of revenue and reach (presence on 80% of publishers). We also show that if all 5% of the top users in terms of revenue were to install privacy protection, with no corresponding reaction from the publishers, then the revenue can drop by 30%.", "title": "" }, { "docid": "38ce3298384bf60ef57adcd3f0285b85", "text": "Data-driven approaches are becoming dominant problem-solving techniques in many areas of research and industry. Unfortunately, current technologies do not make such techniques easy to use for application experts who are not fluent in machine learning nor for machine learning experts who aim at testing ideas on real-world data and need to evaluate those as a part of an end-to-end system. We review key efforts made by various AI communities to provide languages for highlevel abstractions over learning and reasoning techniques needed for designing complex AI systems. We classify the existing frameworks based on the type of techniques as well as the data and knowledge representations they use, provide a comparative study of the way they address the challenges of programming real-world applications, and highlight some shortcomings and future directions.", "title": "" }, { "docid": "ea629f7c3a96712f07addf013432d9aa", "text": "Accurate estimation of the State of Charge (SOC) of the battery is one of the key problems to the battery management system. The SOC should be obtained indirectly according to some algorithms under a mathematical model, along with some measurable quantities. A Sigma Point Kalman Filter based battery model parameters estimation method is proposed. The parameters can be estimated accurately while efficiently with the proposed method. Compared to the classical least squares method, the proposed method consumes much less memory and calculation time, which makes it suitable for embedded applications.", "title": "" }, { "docid": "db965d5b7eb8106d1d1b5b934be69ad1", "text": "In this paper, the authors aim to present the modelling, simulation and experimentation of an electromagnetic actuator used in the construction of vacuum contactors. In order to validate the numerical results obtained by computer simulation was performed and an experimental model. The role of the electromagnetic actuator is to achieve movement of the movable electrical contacts of the contactor with certain speeds and precise trajectories. 
The advantages of the electromagnetic actuator presented in this paper are the following: it is able to perform a large number of mechanical cycles (at least than one million), with high frequency, with high holding force in the closed position, through reducing air gaps and technology leakage flux, low power consumption, enrolling in a compact volume, reliable and low cost manufacturing. Using finite element simulation software, it was modelated and optimized an electromagnetic actuator with high performance, reliable, inexpensive that can be easily used in the construction of the Vacuum Contactors.", "title": "" }, { "docid": "e601c68a6118139c1183ba4abd012183", "text": "Robert M. Golub, MD, Editor The JAMA Patient Page is a public service of JAMA. The information and recommendations appearing on this page are appropriate in most instances, but they are not a substitute for medical diagnosis. For specific information concerning your personal medical condition, JAMA suggests that you consult your physician. This page may be photocopied noncommercially by physicians and other health care professionals to share with patients. To purchase bulk reprints, call 312/464-0776. C H IL D H E A TH The Journal of the American Medical Association", "title": "" }, { "docid": "c2891abf8297b5dcf0e21dfa9779a017", "text": "The success of knowledge-sharing communities like Wikipedia and the advances in automatic information extraction from textual and Web sources have made it possible to build large \"knowledge repositories\" such as DBpedia, Freebase, and YAGO. These collections can be viewed as graphs of entities and relationships (ER graphs) and can be represented as a set of subject-property-object (SPO) triples in the Semantic-Web data model RDF. Queries can be expressed in the W3C-endorsed SPARQL language or by similarly designed graph-pattern search. However, exact-match query semantics often fall short of satisfying the users' needs by returning too many or too few results. Therefore, IR-style ranking models are crucially needed.\n In this paper, we propose a language-model-based approach to ranking the results of exact, relaxed and keyword-augmented graph pattern queries over RDF graphs such as ER graphs. Our method estimates a query model and a set of result-graph models and ranks results based on their Kullback-Leibler divergence with respect to the query model. We demonstrate the effectiveness of our ranking model by a comprehensive user study.", "title": "" }, { "docid": "6e8b6f8d0d69d7fcdec560a536c5cd57", "text": "Networks have become multipath: mobile devices have multiple radio interfaces, datacenters have redundant paths and multihoming is the norm for big server farms. Meanwhile, TCP is still only single-path. Is it possible to extend TCP to enable it to support multiple paths for current applications on today’s Internet? The answer is positive. We carefully review the constraints—partly due to various types of middleboxes— that influenced the design of Multipath TCP and show how we handled them to achieve its deployability goals. We report our experience in implementing Multipath TCP in the Linux kernel and we evaluate its performance. Our measurements focus on the algorithms needed to efficiently use paths with different characteristics, notably send and receive buffer tuning and segment reordering. We also compare the performance of our implementation with regular TCP on web servers. 
Finally, we discuss the lessons learned from designing MPTCP.", "title": "" }, { "docid": "927f2c68d709c7418ad76fd9d81b18c4", "text": "With the growing deployment of host and network intrusion detection systems, managing reports from these systems becomes critically important. We present a probabilistic approach to alert correlation, extending ideas from multisensor data fusion. Features used for alert correlation are based on alert content that anticipates evolving IETF standards. The probabilistic approach provides a unified mathematical framework for correlating alerts that match closely but not perfectly, where the minimum degree of match required to fuse alerts is controlled by a single configurable parameter. Only features in common are considered in the fusion algorithm. For each feature we define an appropriate similarity function. The overall similarity is weighted by a specifiable expectation of similarity. In addition, a minimum similarity may be specified for some or all features. Features in this set must match at least as well as the minimum similarity specification in order to combine alerts, regardless of the goodness of match on the feature set as a whole. Our approach correlates attacks over time, correlates reports from heterogeneous sensors, and correlates multiple attack steps.", "title": "" }, { "docid": "4afba67277f3c64231e6783a58804ec6", "text": "Zu den lokalen Langzeitschäden der anti-VEGF-Therapie ist die Datenlage unübersichtlich. Ziel der vorliegenden Übersicht ist es deshalb, die pathophysiologischen Grundlagen für Entwicklung und Fortschreiten der Makula-Atrophie (MA) und des möglichen Einflusses einer anti-VEGF-Therapie auf den Verlauf plausibel zu machen. Die Übersicht basiert auf einer Literaturrecherche in PubMed mit den Schlüsselwörtern „wet AMD“ und „macular atrophy“ (151 Treffer). Unter den seit Anfang 2013 erschienenen Publikationen (n = 90) wurden diejenigen ausgeschieden, die auf Diagnostik und Verlauf, aber nicht Therapie ausgerichtet waren. Unter dem Begriff MA wird hier die Atrophie des funktionell relevanten Komplexes von Photorezeptoren, retinalem Pigmentepithel (RPE), Bruch-Membran und Choriokapillaris verstanden. Experimentell führt eine primäre, vollständige VEGF-Suppression zu erheblichen Veränderungen in der Choriokapillaris, eine inkomplette VEGF-Suppression hingegen zum schleichenden Untergang von Ganglienzellen und Photorezeptoren. Klinisch sind allerdings oft bereits vor Therapiebeginn degenerative Veränderungen an RPE und Bruch-Membran vorhanden. Folglich ist die MA Folge der fortschreitenden neurodegenerativen Grunderkrankung zu verstehen, die vermutlich durch die anti-VEGF-Therapie beschleunigt wird. Unter Ranibizumab ist ein rascheres Fortschreiten zu erwarten als unter Bevacizumab sowie unter monatlicher Therapie rascher als unter PRN-Therapie (lat. pro renata, dt. nach Bedarf). Trotz der dadurch induzierten MA ist die retinale Funktion unter konsequenter Therapie besser, sodass die Beschleunigung der Progression unter anti-VEGF-Therapie in den ersten 5 Jahren nur bei primär fortgeschrittener MA Bedeutung gewinnt. Trotz der Zweifel an der langfristigen Sicherheit der anti-VEGF-Therapie ist aus Sicht des Autors eine konsequente Therapie zur Erhaltung der Sehfunktion verhältnismäßig. Dabei lässt sich der therapieinduzierte Schaden kaum von dem natürlichen Fortschreiten der AMD und der biologischen Situation der Patienten trennen. 
Current understanding of the mechanisms that underlie the long-term consequences of anti-VEGF therapy in wet, age-related macular degeneration (AMD) is poor. Here, the impact of this treatment on the development of macular atrophy (MA) is discussed based on our current pathophysiological understanding. This review is based on a PubMed literature survey using the MeSH terms “wet AMD” and “macular atrophy” (151 hits) and limited to publications since 2013 (n = 90). Publications focussing on diagnostics and clinical course not in the context of therapy were excluded. Macular atrophy is defined herein as atrophy affecting the functionally relevant complex of photoreceptors, retinal pigmented epithelium (RPE), Bruch’s membrane and choriocapillaris. Experimentally, a primary complete suppression of local VEGF leads to evident changes in the choriocapillaris, whereas its incomplete suppression exacerbates cell death of RPE and photoreceptors. Since pre-existing atrophic changes are already present at diagnosis, the role of anti-VEGF treatment cannot be separated from the spontaneous progression of AMD. The progression of MA appears to be faster under ranibizumab than bevacizumab, and likewise on a monthly rather than as-needed basis. Although MA progresses more rapidly under consequent therapy, visual function remains better. Hence, a functionally relevant progression of atrophy during the first five years of treatment would only be expected in pre-existing advanced MA. Despite doubts regarding the long-term safety of anti-VEGF therapy, it is the author’s view that this is the only option to stabilise visual function. The impact of therapy-induced damage on the spontaneous progression of AMD and the biological status of the aging individual cannot be unequivocally assessed.", "title": "" }, { "docid": "4a29051479ac4b3ad7e7cd84540dbdb6", "text": "A compact, shared-aperture antenna (SAA) configuration consisting of various planar antennas embedded into a single footprint is presented in this article. An L-probefed, suspended-plate, horizontally polarized antenna operating in an 900-MHz band; an aperture-coupled, vertically polarized, microstrip antenna operating at 4.2-GHz; a 2 &#x000D7; 2 microstrip patch array operating at the X band; a low-side-lobe level (SLL), corporate-fed, 8 &#x000D7; 4 microstrip planar array for synthetic aperture radar (SAR) in the X band; and a printed, single-arm, circularly polarized, tilted-beam spiral antenna operating at the C band are integrated into a single aperture for simultaneous operation. This antenna system could find potential application in many airborne and unmanned aircraft vehicle (UAV) technologies. While the design of these antennas is not that critical, their optimal placement in a compact configuration for simultaneous operation with minimal interference poses a significant challenge to the designer. The placement optimization was arrived at based on extensive numerical fullwave optimizations.", "title": "" }, { "docid": "655ebc05eafbca9b9079224d1013e8fc", "text": "This paper examines the degree of stability in the structure of the corporate elite network in the US during the 1980s and 1990s. Several studies have documented that board-toboard ties serve as a mechanism for the diffusion of corporate practices, strategies, and structures; thus, the overall structure of the network can shape the nature and rate of aggregate corporate change. 
But upheavals in the nature of corporate governance and nearly complete turnover in the firms and directors at the core of the network since 1980 prompt a reassessment of the network’s topography.We find that the aggregate connectivity of the network is remarkably stable and appears to be an intrinsic property of the interlock network, resilient to major changes in corporate governance.After a brief review of elite studies in the US, we take advantage of the recent advances in the theoretical and methodological tools for analyzing network structures to examine the network properties of the directors and companies in 1982, 1990, and 1999. We use concepts from smallworld analysis to explain our finding that the structure of the corporate elite is resilient to macro and micro changes affecting corporate governance.", "title": "" }, { "docid": "1979fa5a3384477602c0e81ba62199da", "text": "Language style transfer is the problem of migrating the content of a source sentence to a target style. In many of its applications, parallel training data are not available and source sentences to be transferred may have arbitrary and unknown styles. Under this problem setting, we propose an encoder-decoder framework. First, each sentence is encoded into its content and style latent representations. Then, by recombining the content with the target style, we decode a sentence aligned in the target domain. To adequately constrain the encoding and decoding functions, we couple them with two loss functions. The first is a style discrepancy loss, enforcing that the style representation accurately encodes the style information guided by the discrepancy between the sentence style and the target style. The second is a cycle consistency loss, which ensures that the transferred sentence should preserve the content of the original sentence disentangled from its style. We validate the effectiveness of our model in three tasks: sentiment modification of restaurant reviews, dialog response revision with a romantic style, and sentence rewriting with a Shakespearean style.", "title": "" }, { "docid": "55fcec6d008f4abf377fc55b5b73f01a", "text": "This work exploits the benefits of adaptive downtilt and vertical sectorization schemes for Long Term Evolution Advanced (LTE-A) networks equipped with active antenna systems (AAS). We highlight how the additional control in the elevation domain (via AAS) enables use of adaptive downtilt and vertical sectorization techniques, thereby improving system spectrum efficiency. Our results, based on a full 3 dimensional (3D) channel, demonstrate that adaptive downtilt achieves up to 11% cell edge and 5% cell average spectrum efficiency gains when compared to a baseline system utilizing fixed downtilt, without the need for complex coordination among cells. In addition, vertical sectorization, especially high-order vertical sectorization utilizing multiple vertical beams, which increases spatial reuse of time and frequency resources, is shown to provide even higher performance gains.", "title": "" }, { "docid": "79844bc05388cc1436bb5388e88f6daa", "text": "The growing number of Unmanned Aerial Vehicles (UAVs) is considerable in the last decades. Many flight test scenarios, including single and multi-vehicle formation flights, are demonstrated using different control algorithms with different test platforms. In this paper, we present a brief literature review on the development and key issues of current researches in the field of Fault-Tolerant Control (FTC) applied to UAVs. 
It consists of various intelligent or hierarchical control architectures for a single vehicle or a group of UAVs in order to provide potential solutions for tolerance to the faults, failures or damages in relevant to UAV components during flight. Among various UAV test-bed structures, a sample of every class of UAVs, including single-rotor, quadrotor, and fixed-wing types, are selected and briefly illustrated. Also, a short description of terms, definitions, and classifications of fault-tolerant control systems (FTCS) is presented before the main contents of review.", "title": "" }, { "docid": "60c42e3d0d0e82200a80b469a61f1921", "text": "BACKGROUND\nDespite using sterile technique for catheter insertion, closed drainage systems, and structured daily care plans, catheter-associated urinary tract infections (CAUTIs) regularly occur in acute care hospitals. We believe that meaningful reduction in CAUTI rates can only be achieved by reducing urinary catheter use.\n\n\nMETHODS\nWe used an interventional study of a hospital-wide, multidisciplinary program to reduce urinary catheter use and CAUTIs on all patient care units in a 300-bed, community teaching hospital in Connecticut. Our primary focus was the implementation of a nurse-directed urinary catheter removal protocol. This protocol was linked to the physician's catheter insertion order. Three additional elements included physician documentation of catheter insertion criteria, a device-specific charting module added to physician electronic progress notes, and biweekly unit-specific feedback on catheter use rates and CAUTI rates in a multidisciplinary forum.\n\n\nRESULTS\nWe achieved a 50% hospital-wide reduction in catheter use and a 70% reduction in CAUTIs over a 36-month period, although there was wide variation from unit to unit in catheter reduction efforts, ranging from 4% (maternity) to 74% (telemetry).\n\n\nCONCLUSION\nUrinary catheter use, and ultimately CAUTI rates, can be effectively reduced by the diligent application of relatively few evidence-based interventions. Aggressive implementation of the nurse-directed catheter removal protocol was associated with lower catheter use rates and reduced infection rates.", "title": "" }, { "docid": "df2be33740334d9e9db5d9f2911153ed", "text": "Mobile devices such as smartphones and tablets offer great new possibilities for the creation of 3D games and virtual reality environments. However, interaction with objects in these virtual worlds is often difficult -- for example due to the devices' small form factor. In this paper, we define different 3D visualization concepts and evaluate related interactions such as navigation and selection of objects. Detailed experiments with a smartphone and a tablet illustrate the advantages and disadvantages of the various 3D visualization concepts. Our results provide new insight with respect to interaction and highlight important aspects for the design of interactive virtual environments on mobile devices and related applications -- especially for mobile 3D gaming.", "title": "" }, { "docid": "7f20eba09cddb9d980b6475aa089463f", "text": "This technical note describes a new baseline for the Natural Questions (Kwiatkowski et al., 2019). Our model is based on BERT (Devlin et al., 2018) and reduces the gap between the model F1 scores reported in the original dataset paper and the human upper bound by 30% and 50% relative for the long and short answer tasks respectively. This baseline has been submitted to the official NQ leaderboard†. 
Code, preprocessed data and pretrained model are available‡.", "title": "" }, { "docid": "01651546f9fb6c984e84cfd2d1702b8e", "text": "There is increasing evidence for the involvement of glutamate-mediated neurotoxicity in the pathogenesis of Alzheimer's disease (AD). We suggest that glutamate receptors of the N-methyl-D-aspartate (NMDA) type are overactivated in a tonic rather than a phasic manner in this disorder. This continuous mild activation may lead to neuronal damage and impairment of synaptic plasticity (learning). It is likely that under such conditions Mg(2+) ions, which block NMDA receptors under normal resting conditions, can no longer do so. We found that overactivation of NMDA receptors using a direct agonist or a decrease in Mg(2+) concentration produced deficits in synaptic plasticity (in vivo: passive avoidance test and/or in vitro: LTP in the CA1 region). In both cases, memantine-an uncompetitive NMDA receptor antagonists with features of an 'improved' Mg(2+) (voltage-dependency, kinetics, affinity)-attenuated this deficit. Synaptic plasticity was restored by therapeutically-relevant concentrations of memantine (1 microM). Moreover, doses leading to similar brain/serum levels provided neuroprotection in animal models relevant for neurodegeneration in AD such as neurotoxicity produced by inflammation in the NBM or beta-amyloid injection to the hippocampus. As such, if overactivation of NMDA receptors is present in AD, memantine would be expected to improve both symptoms (cognition) and to slow down disease progression because it takes over the physiological function of magnesium.", "title": "" }, { "docid": "0fd37a459c95b20e3d80021da1bb281d", "text": "Social media data are increasingly used as the source of research in a variety of domains. A typical example is urban analytics, which aims at solving urban problems by analyzing data from different sources including social media. The potential value of social media data in tourism studies, which is one of the key topics in urban research, however has been much less investigated. This paper seeks to understand the relationship between social media dynamics and the visiting patterns of visitors to touristic locations in real-world cases. By conducting a comparative study, we demonstrate how social media characterizes touristic locations differently from other data sources. Our study further shows that social media data can provide real-time insights of tourists’ visiting patterns in big events, thus contributing to the understanding of social media data utility in tourism studies.", "title": "" }, { "docid": "ed2a67dd24a67b410541a246a2004ecd", "text": "High energy consumption of cloud data centers is a matter of great concern. Dynamic consolidation of Virtual Machines (VMs) presents a significant opportunity to save energy in data centers. A VM consolidation approach uses live migration of VMs so that some of the under-loaded Physical Machines (PMs) can be switched-off or put into a low-power mode. On the other hand, achieving the desired level of Quality of Service (QoS) between cloud providers and their users is critical. Therefore, the main challenge is to reduce energy consumption of data centers while satisfying QoS requirements. In this paper, we present a distributed system architecture to perform dynamic VM consolidation to reduce energy consumption of cloud data centers while maintaining the desired QoS. 
Since the VM consolidation problem is strictly NP-hard, we use an online optimization metaheuristic algorithm called Ant Colony System (ACS). The proposed ACS-based VM Consolidation (ACS-VMC) approach finds a near-optimal solution based on a specified objective function. Experimental results on real workload traces show that ACS-VMC reduces energy consumption while maintaining the required performance levels in a cloud data center. It outperforms existing VM consolidation approaches in terms of energy consumption, number of VM migrations, and QoS requirements concerning performance.", "title": "" } ]
scidocsrr
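One of the passages above applies small-world analysis (high clustering combined with short average path lengths) to board interlock networks. As a generic illustration of how those two quantities are computed, and not the cited paper's own code or data, the sketch below builds a synthetic Watts–Strogatz graph with networkx; the node count, neighbourhood size, and rewiring probability are arbitrary assumptions.

import networkx as nx

# Synthetic stand-in for an interlock network: 500 nodes, each initially wired to
# 6 neighbours, with a 10% rewiring probability (all three numbers are arbitrary).
g = nx.connected_watts_strogatz_graph(n=500, k=6, p=0.1, seed=42)

clustering = nx.average_clustering(g)
path_length = nx.average_shortest_path_length(g)

# Small-world structure shows up as high clustering together with a short
# average path length relative to a comparable random graph.
print(f"average clustering:  {clustering:.3f}")
print(f"average path length: {path_length:.3f}")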
7c0761e395fe2550af5a350fd12fc33a
Adaptive Neural Networks for Fast Test-Time Prediction
[ { "docid": "d30aa274e73c267bcb7e5c78cd770d7c", "text": "Porting state of the art deep learning algorithms to resource constrained compute platforms (e.g. VR, AR, wearables) is extremely challenging. We propose a fast, compact, and accurate model for convolutional neural networks that enables efficient learning and inference. We introduce LCNN, a lookup-based convolutional neural network that encodes convolutions by few lookups to a dictionary that is trained to cover the space of weights in CNNs. Training LCNN involves jointly learning a dictionary and a small set of linear combinations. The size of the dictionary naturally traces a spectrum of trade-offs between efficiency and accuracy. Our experimental results on ImageNet challenge show that LCNN can offer 3.2x speedup while achieving 55.1% top-1 accuracy using AlexNet architecture. Our fastest LCNN offers 37.6x speed up over AlexNet while maintaining 44.3% top-1 accuracy. LCNN not only offers dramatic speed ups at inference, but it also enables efficient training. In this paper, we show the benefits of LCNN in few-shot learning and few-iteration learning, two crucial aspects of on-device training of deep learning models.", "title": "" }, { "docid": "26dac00bc328dc9c8065ff105d1f8233", "text": "Recently, convolutional neural networks (CNN) have demonstrated impressive performance in various computer vision tasks. However, high performance hardware is typically indispensable for the application of CNN models due to the high computation complexity, which prohibits their further extensions. In this paper, we propose an efficient framework, namely Quantized CNN, to simultaneously speed-up the computation and reduce the storage and memory overhead of CNN models. Both filter kernels in convolutional layers and weighting matrices in fully-connected layers are quantized, aiming at minimizing the estimation error of each layer's response. Extensive experiments on the ILSVRC-12 benchmark demonstrate 4 ~ 6× speed-up and 15 ~ 20× compression with merely one percentage loss of classification accuracy. With our quantized CNN model, even mobile devices can accurately classify images within one second.", "title": "" }, { "docid": "35625f248c81ebb5c20151147483f3f6", "text": "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions [3]. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators [1] have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse.
Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.", "title": "" }, { "docid": "34b2fed38744920300f2cbf8cc75c021", "text": "In this paper we develop a framework for a sequential decision making under budget constraints for multi-class classification. In many classification systems, such as medical diagnosis and homeland security, sequential decisions are often warranted. For each instance, a sensor is first chosen for acquiring measurements and then based on the available information one decides (rejects) to seek more measurements from a new sensor/modality or to terminate by classifying the example based on the available information. Different sensors have varying costs for acquisition, and these costs account for delay, throughput or monetary value. Consequently, we seek methods for maximizing performance of the system subject to budget constraints. We formulate a multi-stage multi-class empirical risk objective and learn sequential decision functions from training data. We show that reject decision at each stage can be posed as supervised binary classification. We derive bounds for the VC dimension of the multi-stage system to quantify the generalization error. We compare our approach to alternative strategies on several multi-class real world datasets.", "title": "" }, { "docid": "d00957d93af7b2551073ba84b6c0f2a6", "text": "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN’s evaluation. Experimental results show that SSL achieves on average 5.1× and 3.1× speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth can reduce 20 layers of a Deep Residual Network (ResNet) to 18 layers while improve the accuracy from 91.25% to 92.60%, which is still slightly higher than that of original ResNet with 32 layers. For AlexNet, structure regularization by SSL also reduces the error by ∼ 1%. Our source code can be found at https://github.com/wenwei202/caffe/tree/scnn", "title": "" }, { "docid": "0c12fd61acd9e02be85b97de0cc79801", "text": "As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. These parameters are tuned to adjust to the HashedNets weight sharing architecture with standard backprop during training.
Our hashing procedure introduces no additional memory overhead, and we demonstrate on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.", "title": "" } ]
[ { "docid": "54be2ce42fb882da9bca1219bec3916f", "text": "BACKGROUND\nThis review summarizes and evaluates clinical experience with citalopram, the latest selective serotonin reuptake inhibitor (SSRI) to be approved for the treatment of depression in the United States.\n\n\nDATA SOURCES\nPublished reports of randomized, double-blind, controlled clinical studies of citalopram were retrieved using a MEDLINE literature search. Search terms included citalopram, SSRI, TCA (tricylic antidepressant), depression, and clinical. For each study, data on antidepressant efficacy and adverse events were evaluated. Pharmacokinetic studies and case reports were reviewed to supplement the evaluation of citalopram's safety and tolerability. Data presented at major medical conferences and published in abstract form also were reviewed.\n\n\nSTUDY FINDINGS\nThirty randomized, double-blind, controlled studies of the antidepressant efficacy of citalopram were located and reviewed. In 11 studies, citalopram was compared with placebo (1 of these studies also included comparison with another SSRI). In 4 additional studies, the efficacy of citalopram in preventing depression relapse or recurrence was investigated. In another 11 studies (including 1 meta-analysis of published and unpublished trials), citalopram was compared with tricyclic and tetracyclic antidepressants. Finally, results are available from 4 studies in which citalopram was compared with other SSRIs. A placebo-controlled study of citalopram for the treatment of panic disorder was reviewed for data on long-term adverse events.\n\n\nCONCLUSION\nData published over the last decade suggest that citalopram is (1) superior to placebo in the treatment of depression, (2) has efficacy similar to that of the tricyclic and tetracyclic antidepressants and to other SSRIs, and (3) is safe and well tolerated in the therapeutic dose range of 20 to 60 mg/day. Distinct from some other agents in its class, citalopram exhibits linear pharmacokinetics and minimal drug interaction potential. These features make citalopram an attractive agent for the treatment of depression, especially among the elderly and patients with comorbid illness.", "title": "" }, { "docid": "d4ed91ed764fc20c599f311b1c7957a0", "text": "We consider the problem of designing mechanisms with the incentive property that no coalition of agents can engage in a collusive strategy that results in an increase in the combined utility of the coalition. For single parameter agents, we give a characterization that essentially restricts such mechanisms to those that post a \"take it or leave it\" price to for each agent in advance. We then consider relaxing the incentive property to only hold with high probability. In this relaxed model, we are able to design approximate profit maximizing auctions and approximately efficient auctions. We generalized these results to give a methodology for designing collusion resistant mechanisms for single parameter agents. In addition, we give several results for a weaker incentive property from the literature known as group strategyproofness.", "title": "" }, { "docid": "0cafe66b71b0a7fca2b682866b0c4848", "text": "Using ultra-wideband (UWB) wireless sensors placed on a person to continuously monitor health information is a promising new application. However, there are currently no detailed models describing the UWB radio channel around the human body making it difficult to design a suitable communication system. 
To address this problem, we have measured radio propagation around the body in a typical indoor environment and incorporated these results into a simple model. We then implemented this model on a computer and compared experimental data with the simulation results. This paper proposes a simple statistical channel model and a practical implementation useful for evaluating UWB body area communication systems.", "title": "" }, { "docid": "41c165eec3e201156217ee7bf91867b2", "text": "This position paper advocates a communicationsinspired approach to the design of machine learning systems on energy-constrained embedded ‘always-on’ platforms. The communicationsinspired approach has two versions 1) a deterministic version where existing low-power communication IC design methods are repurposed, and 2) a stochastic version referred to as Shannon-inspired statistical information processing employing information-based metrics, statistical error compensation (SEC), and retraining-based methods to implement ML systems on stochastic circuit/device fabrics operating at the limits of energy-efficiency. The communications-inspired approach has the potential to fully leverage the opportunities afforded by ML algorithms and applications in order to address the challenges inherent in their deployment on energy-constrained platforms.", "title": "" }, { "docid": "b715631367001fb60b4aca9607257923", "text": "This paper describes a new predictive algorithm that can be used for programming large arrays of analog computational memory elements within 0.2% of accuracy for 3.5 decades of currents. The average number of pulses required are 7-8 (20 mus each). This algorithm uses hot-electron injection for accurate programming and Fowler-Nordheim tunneling for global erase. This algorithm has been tested for programming 1024times16 and 96times16 floating-gate arrays in 0.25 mum and 0.5 mum n-well CMOS processes, respectively", "title": "" }, { "docid": "d674b67d3b8e75f48a128ca5cfc8f2d6", "text": "Voltage feedback is frequently used in class-D switching audio power amplifiers. This paper discusses the design and implementation of a low-cost filterless class-D, unipolar pulse-width modulation switching audio amplifier having a multi-loop voltage feedback scheme. Classical frequency-compensation techniques are used to design and stabilize the three voltage feedback loops implemented in this application. This design method proves to be a cost-effective solution for designing high-fidelity (hi-fi) audio amplifiers. The cost is reduced because no output filter is used, the required switching frequency is half of the one needed if bipolar PWM was used, and no current sensor is needed for feedback purposes. The output impedance is extremely low due to the reduction of the successive voltage loops, making the amplifier less load dependent. Simulation results show that a total harmonic distortion (THD) of 0.005% can be achieved using this topology, as well as a flat frequency response, free of phase distortion in the audio band. Experimental results show the feasibility of this control scheme, since a THD of 0.05% was achieved with a laboratory prototyped amplifier. 
A comparison of the performance of this audio amplifier with that of some commercial class-D audio amplifiers, reveals that our design can seriously compete with some of the ICs leading the market at a lower cost.", "title": "" }, { "docid": "7c1fafba892be56bb81a59df996bd95f", "text": "Cowper's gland syringocele is an uncommon, underdiagnosed cystic dilatation of Cowper's gland ducts showing various radiological patterns. Herein we report a rare case of giant Cowper's gland syringocele in an adult male patient, with description of MRI findings and management outcome.", "title": "" }, { "docid": "0cbd3587fe466a13847e94e29bb11524", "text": "The cloud heralds a new era of computing where application services are provided through the Internet. Cloud computing can enhance the computing capability of mobile systems, but is it the ultimate solution for extending such systems' battery lifetimes?", "title": "" }, { "docid": "faeed30248cf6e7d8ade9817acbb3b96", "text": "This paper brings out a unique FFT based Pulse detection approach for a Two channel Digital ESM (Electronic Support Measure) Receiver targeted for Airborne EW applications. The proposed approach uses a high speed ADC (1.5 GHz) and FPGA based architecture for sampling and Digital processing of the received signals. A high speed 256 point FFT engine is realised in FPGA to separate up to 4 overlapped pulses in 500 MHz instantaneous BW. Pulses with PW as low as 100 ns can be detected in presence of CW signals in the 750-1250 MHz input band with this approach. The hardware realised to verify the algorithm and the FPGA implementation for Pulse Detection Engine are discussed in the paper. The simulated and measured results for the Pulse Detection algorithm are presented. This pulse detection approach gives up to 45 dB single signal dynamic range.", "title": "" }, { "docid": "71e65d1ae7ff899467cc93b3858992b8", "text": "This paper describes a semi-automated process, framework and tools for harvesting, assessing, improving and maintaining high-quality linked-data. The framework, known as DaCura1, provides dataset curators, who may not be knowledge engineers, with tools to collect and curate evolving linked data datasets that maintain quality over time. The framework encompasses a novel process, workflow and architecture. A working implementation has been produced and applied firstly to the publication of an existing social-sciences dataset, then to the harvesting and curation of a related dataset from an unstructured data-source. The framework’s performance is evaluated using data quality measures that have been developed to measure existing published datasets. An analysis of the framework against these dimensions demonstrates that it addresses a broad range of real-world data quality concerns. Experimental results quantify the impact of the DaCura process and tools on data quality through an assessment framework and methodology which combines automated and human data quality controls. Improving Curated WebData Quality with Structured Harvesting and Assessment", "title": "" }, { "docid": "aee250663a05106c4c0fad9d0f72828c", "text": "Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. Recently, discriminatively learned correlation filters (DCF) have been successfully applied to address this problem for tracking. 
These methods utilize a periodic assumption of the training samples to efficiently learn a classifier on all patches in the target neighborhood. However, the periodic assumption also introduces unwanted boundary effects, which severely degrade the quality of the tracking model. We propose Spatially Regularized Discriminative Correlation Filters (SRDCF) for tracking. A spatial regularization component is introduced in the learning to penalize correlation filter coefficients depending on their spatial location. Our SRDCF formulation allows the correlation filters to be learned on a significantly larger set of negative training samples, without corrupting the positive samples. We further propose an optimization strategy, based on the iterative Gauss-Seidel method, for efficient online learning of our SRDCF. Experiments are performed on four benchmark datasets: OTB-2013, ALOV++, OTB-2015, and VOT2014. Our approach achieves state-of-the-art results on all four datasets. On OTB-2013 and OTB-2015, we obtain an absolute gain of 8.0% and 8.2% respectively, in mean overlap precision, compared to the best existing trackers.", "title": "" }, { "docid": "96443baa768f6a5a270a92ba46164c42", "text": "Can the rapid stream of conscious experience be predicted from brain activity alone? Recently, spatial patterns of activity in visual cortex have been successfully used to predict feature-specific stimulus representations for both visible and invisible stimuli. However, because these studies examined only the prediction of static and unchanging perceptual states during extended periods of stimulation, it remains unclear whether activity in early visual cortex can also predict the rapidly and spontaneously changing stream of consciousness. Here, we used binocular rivalry to induce frequent spontaneous and stochastic changes in conscious experience without any corresponding changes in sensory stimulation, while measuring brain activity with fMRI. Using information that was present in the multivariate pattern of responses to stimulus features, we could accurately predict, and therefore track, participants' conscious experience from the fMRI signal alone while it underwent many spontaneous changes. Prediction in primary visual cortex primarily reflected eye-based signals, whereas prediction in higher areas reflected the color of the percept. Furthermore, accurate prediction during binocular rivalry could be established with signals recorded during stable monocular viewing, showing that prediction generalized across viewing conditions and did not require or rely on motor responses. It is therefore possible to predict the dynamically changing time course of subjective experience with only brain activity.", "title": "" }, { "docid": "d79252babce60e4353e2481feec57111", "text": "A modification of stacked spiral inductors increases the self-resonance frequency by 100% with no additional processing steps, yielding values of 5 to 266 nH and self-resonance frequencies of 11.2 to 0.5 GHz. Closed-form expressions predicting the self-resonance frequency with less than 5% error have also been developed. Stacked transformers are also introduced that achieve voltage gains of 1.8 to 3 at multigigahertz frequencies. 
The structures have been fabricated in standard digital CMOS technologies with four and five metal layers.", "title": "" }, { "docid": "fba2cce267a075c24a1378fd55de6113", "text": "This paper presents a novel mixed reality rehabilitation system used to help improve the reaching movements of people who have hemiparesis from stroke. The system provides real-time, multimodal, customizable, and adaptive feedback generated from the movement patterns of the subject's affected arm and torso during reaching to grasp. The feedback is provided via innovative visual and musical forms that present a stimulating, enriched environment in which to train the subjects and promote multimodal sensory-motor integration. A pilot study was conducted to test the system function, adaptation protocol and its feasibility for stroke rehabilitation. Three chronic stroke survivors underwent training using our system for six 75-min sessions over two weeks. After this relatively short time, all three subjects showed significant improvements in the movement parameters that were targeted during training. Improvements included faster and smoother reaches, increased joint coordination and reduced compensatory use of the torso and shoulder. The system was accepted by the subjects and shows promise as a useful tool for physical and occupational therapists to enhance stroke rehabilitation.", "title": "" }, { "docid": "b18bb896338bdfddfd0a3e0a0518e8fe", "text": "Recent studies have shown that deep neural networks (DNN) are vulnerable to adversarial samples: maliciously-perturbed samples crafted to yield incorrect model outputs. Such attacks can severely undermine DNN systems, particularly in security-sensitive settings. It was observed that an adversary could easily generate adversarial samples by making a small perturbation on irrelevant feature dimensions that are unnecessary for the current classification task. To overcome this problem, we introduce a defensive mechanism called DeepMask. By identifying and removing unnecessary features in a DNN model, DeepMask limits the capacity an attacker can use generating adversarial samples and therefore increase the robustness against such inputs. Comparing with other defensive approaches, DeepMask is easy to implement and computationally efficient. Experimental results show that DeepMask can increase the performance of state-of-the-art DNN models against adversarial samples.", "title": "" }, { "docid": "a688f040f616faff3db13be4b1c052df", "text": "Intracellular fucoidanase was isolated from the marine bacterium, Formosa algae strain KMM 3553. The first appearance of fucoidan enzymatic hydrolysis products in a cell-free extract was detected after 4 h of bacterial growth, and maximal fucoidanase activity was observed after 12 h of growth. The fucoidanase displayed maximal activity in a wide range of pH values, from 6.5 to 9.1. The presence of Mg2+, Ca2+ and Ba2+ cations strongly activated the enzyme; however, Cu2+ and Zn2+ cations had inhibitory effects on the enzymatic activity. The enzymatic activity of fucoidanase was considerably reduced after prolonged (about 60 min) incubation of the enzyme solution at 45 °C. The fucoidanase catalyzed the hydrolysis of fucoidans from Fucus evanescens and Fucus vesiculosus, but not from Saccharina cichorioides. The fucoidanase also did not hydrolyze carrageenan. Desulfated fucoidan from F. evanescens was hydrolysed very weakly in contrast to deacetylated fucoidan, which was hydrolysed more actively compared to the native fucoidan from F. evanescens. 
Analysis of the structure of the enzymatic products showed that the marine bacteria, F. algae, synthesized an α-l-fucanase with an endo-type action that is specific for 1→4-bonds in a polysaccharide molecule built up of alternating three- and four-linked α-l-fucopyranose residues sulfated mainly at position 2.", "title": "" }, { "docid": "ec6e955f3f79ef1706fc6b9b16326370", "text": "Automating the navigation of unmanned aerial vehicles (UAVs) in diverse scenarios has gained much attention in the recent years. However, teaching UAVs to fly in challenging environments remains an unsolved problem, mainly due to the lack of data for training. In this paper, we develop a photo-realistic simulator that can afford the generation of large amounts of training data (both images rendered from the UAV camera and its controls) to teach a UAV to autonomously race through challenging tracks. We train a deep neural network to predict UAV controls from raw image data for the task of autonomous UAV racing. Training is done through imitation learning enabled by data augmentation to allow for the correction of navigation mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots.", "title": "" }, { "docid": "6d55978aa80f177f6a859a55380ffed8", "text": "This paper investigates the effect of lowering the supply and threshold voltages on the energy efficiency of CMOS circuits. Using a first-order model of the energy and delay of a CMOS circuit, we show that lowering the supply and threshold voltage is generally advantageous, especially when the transistors are velocity saturated and the nodes have a high activity factor. In fact, for modern submicron technologies, this simple analysis suggests optimal energy efficiency at supply voltages under 0.5 V. Other process and circuit parameters have almost no effect on this optimal operating point. If there is some uncertainty in the value of the threshold or supply voltage, however, the power advantage of this very low voltage operation diminishes. Therefore, unless active feedback is used to control the uncertainty, in the future the supply and threshold voltage will not decrease drastically, but rather will continue to scale down to maintain constant electric fields.", "title": "" }, { "docid": "1687fb86e0e25fcea56985a413a3f422", "text": "Recently, generative adversarial networks (GANs) have become a research focus of artificial intelligence. Inspired by two-player zero-sum game, GANs comprise a generator and a discriminator, both trained under the adversarial learning idea. The goal of GANs is to estimate the potential distribution of real data samples and generate new samples from that distribution. Since their initiation, GANs have been widely studied due to their enormous prospect for applications, including image and vision computing, speech and language processing, etc. In this review paper, we summarize the state of the art of GANs and look into the future. Firstly, we survey GANs' proposal background, theoretic and implementation models, and application fields. Then, we discuss GANs' advantages and disadvantages, and their development trends. In particular, we investigate the relation between GANs and parallel intelligence, with the conclusion that GANs have a great potential in parallel systems research in terms of virtual-real interaction and integration.
Clearly, GANs can provide substantial algorithmic support for parallel intelligence.", "title": "" }, { "docid": "0793d82c1246c777dce673d8f3146534", "text": "CONTEXT\nMedical schools are known to be stressful environments for students and hence medical students have been believed to experience greater incidences of depression than others. We evaluated the global prevalence of depression amongst medical students, as well as epidemiological, psychological, educational and social factors in order to identify high-risk groups that may require targeted interventions.\n\n\nMETHODS\nA systematic search was conducted in online databases for cross-sectional studies examining prevalences of depression among medical students. Studies were included only if they had used standardised and validated questionnaires to evaluate the prevalence of depression in a group of medical students. Random-effects models were used to calculate the aggregate prevalence and pooled odds ratios (ORs). Meta-regression was carried out when heterogeneity was high.\n\n\nRESULTS\nFindings for a total of 62 728 medical students and 1845 non-medical students were pooled across 77 studies and examined. Our analyses demonstrated a global prevalence of depression amongst medical students of 28.0% (95% confidence interval [CI] 24.2-32.1%). Female, Year 1, postgraduate and Middle Eastern medical students were more likely to be depressed, but the differences were not statistically significant. By year of study, Year 1 students had the highest rates of depression at 33.5% (95% CI 25.2-43.1%); rates of depression then gradually decreased to reach 20.5% (95% CI 13.2-30.5%) at Year 5. This trend represented a significant decline (B = - 0.324, p = 0.005). There was no significant difference in prevalences of depression between medical and non-medical students. The overall mean frequency of suicide ideation was 5.8% (95% CI 4.0-8.3%), but the mean proportion of depressed medical students who sought treatment was only 12.9% (95% CI 8.1-19.8%).\n\n\nCONCLUSIONS\nDepression affects almost one-third of medical students globally but treatment rates are relatively low. The current findings suggest that medical schools and health authorities should offer early detection and prevention programmes, and interventions for depression amongst medical students before graduation.", "title": "" } ]
scidocsrr
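The positive passages in the row above describe complementary ways of cutting test-time cost: lookup-based convolutions, quantization, distillation, budgeted sequential decisions, structured sparsity, and hashed weight sharing. As a rough illustration of the budgeted, sequential idea only, and a sketch under assumed models and threshold rather than an implementation from any of the cited papers, the snippet below accepts a cheap classifier's answer when its confidence clears a threshold and otherwise falls back to a more expensive one.

import numpy as np

def cascade_predict(x, cheap_model, expensive_model, threshold=0.9):
    # The cheap model is always evaluated; the expensive model runs only
    # when the cheap model's top probability falls below the threshold.
    probs = np.asarray(cheap_model(x))
    if probs.max() >= threshold:
        return int(probs.argmax()), "cheap"
    probs = np.asarray(expensive_model(x))
    return int(probs.argmax()), "expensive"

# Toy usage with hypothetical stand-in models (placeholders, not trained networks).
def cheap(x):
    return np.array([0.95, 0.05]) if x.sum() > 0 else np.array([0.55, 0.45])

def expensive(x):
    return np.array([0.30, 0.70])

rng = np.random.default_rng(0)
print(cascade_predict(rng.normal(size=4), cheap, expensive))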
9f1c6b82d6c806eef57637da68595da4
A Fluxgate Current Sensor With an Amphitheater Busbar
[ { "docid": "e29f4224c5d0f921304e54bd1555cb38", "text": "More and more sensitivity improvement is required for current sensors that are used in new area of applications, such as electric vehicle, smart meter, and electricity usage monitoring system. To correspond with the technical needs, a high precision magnetic current sensor module has been developed. The sensor module features an excellent linearity and a small magnetic hysteresis. In addition, it offers 2.5-4.5 V voltage output for 0-300 A positive input current and 0.5-2.5 V voltage output for 0-300 A negative input current under -40 °C-125 °C, VCC = 5 V condition.", "title": "" } ]
[ { "docid": "f15cb62cb81b71b063d503eb9f44d7c5", "text": "This study presents an improved krill herd (IKH) approach to solve global optimization problems. The main improvement pertains to the exchange of information between top krill during motion calculation process to generate better candidate solutions. Furthermore, the proposed IKH method uses a new Lévy flight distribution and elitism scheme to update the KH motion calculation. This novel meta-heuristic approach can accelerate the global convergence speed while preserving the robustness of the basic KH algorithm. Besides, the detailed implementation procedure for the IKH method is described. Several standard benchmark functions are used to verify the efficiency of IKH. Based on the results, the performance of IKH is superior to or highly competitive with the standard KH and other robust population-based optimization methods. © 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "08a7621fe99afba5ec9a78c76192f43d", "text": "Orthogonal Frequency Division Multiple Access (OFDMA) as well as other orthogonal multiple access techniques fail to achieve the system capacity limit in the uplink due to the exclusivity in resource allocation. This issue is more prominent when fairness among the users is considered in the system. Current Non-Orthogonal Multiple Access (NOMA) techniques introduce redundancy by coding/spreading to facilitate the users' signals separation at the receiver, which degrade the system spectral efficiency. Hence, in order to achieve higher capacity, more efficient NOMA schemes need to be developed. In this paper, we propose a NOMA scheme for uplink that removes the resource allocation exclusivity and allows more than one user to share the same subcarrier without any coding/spreading redundancy. Joint processing is implemented at the receiver to detect the users' signals. However, to control the receiver complexity, an upper limit on the number of users per subcarrier needs to be imposed. In addition, a novel subcarrier and power allocation algorithm is proposed for the new NOMA scheme that maximizes the users' sum-rate. The link-level performance evaluation has shown that the proposed scheme achieves bit error rate close to the single-user case. Numerical results show that the proposed NOMA scheme can significantly improve the system performance in terms of spectral efficiency and fairness comparing to OFDMA.", "title": "" }, { "docid": "b8095fb49846c89a74cc8c0f69891877", "text": "Attitudes held with strong moral conviction (moral mandates) were predicted to have different interpersonal consequences than strong but nonmoral attitudes. After controlling for indices of attitude strength, the authors explored the unique effect of moral conviction on the degree that people preferred greater social (Studies 1 and 2) and physical (Study 3) distance from attitudinally dissimilar others and the effects of moral conviction on group interaction and decision making in attitudinally homogeneous versus heterogeneous groups (Study 4).
Results supported the moral mandate hypothesis: Stronger moral conviction led to (a) greater preferred social and physical distance from attitudinally dissimilar others, (b) intolerance of attitudinally dissimilar others in both intimate (e.g., friend) and distant relationships (e.g., owner of a store one frequents), (c) lower levels of good will and cooperativeness in attitudinally heterogeneous groups, and (d) a greater inability to generate procedural solutions to resolve disagreements.", "title": "" }, { "docid": "9dd75e407c25d46aa0eb303a948985b1", "text": "Being a corner stone of the New testament and Christian religion, the evangelical narration about Jesus Christ crucifixion had been drawing attention of many millions people, both Christians and representatives of other religions and convictions, almost for two thousand years.If in the last centuries the crucifixion was considered mainly from theological and historical positions, the XX century was marked by surge of medical and biological researches devoted to investigation of thanatogenesis of the crucifixion. However the careful analysis of the suggested concepts of death at the crucifixion shows that not all of them are well-founded. Moreover, some authors sometimes do not consider available historic facts.Not only the analysis of the original Greek text of the Gospel is absent in the published works but authors ignore the Gospel itself at times.", "title": "" }, { "docid": "8730b884da4444c9be6d8c13d7b983e1", "text": "The design and structure of a self-assembly modular robot (Sambot) are presented in this paper. Each module has its own autonomous mobility and can connect with other modules to form robotic structures with different manipulation abilities. Sambot has a versatile, robust, and flexible structure. The computing platform provided for each module is distributed and consists of a number of interlinked microcontrollers. The interaction and connectivity between different modules is achieved through infrared sensors and Zigbee wireless communication in discrete state and control area network bus communication in robotic configuration state. A new mechanical design is put forth to realize the autonomous motion and docking of Sambots. It is a challenge to integrate actuators, sensors, microprocessors, power units, and communication elements into a highly compact and flexible module with the overall size of 80 mm × 80 mm × 102 mm. The work describes represents a mature development in the area of self-assembly distributed robotics.", "title": "" }, { "docid": "d90467d05b4df62adc94b7c150013968", "text": "Bacterial flagella and type III secretion system (T3SS) are evolutionarily related molecular transport machineries. Flagella mediate bacterial motility; the T3SS delivers virulence effectors to block host defenses. The inflammasome is a cytosolic multi-protein complex that activates caspase-1. Active caspase-1 triggers interleukin-1β (IL-1β)/IL-18 maturation and macrophage pyroptotic death to mount an inflammatory response. Central to the inflammasome is a pattern recognition receptor that activates caspase-1 either directly or through an adapter protein. Studies in the past 10 years have established a NAIP-NLRC4 inflammasome, in which NAIPs are cytosolic receptors for bacterial flagellin and T3SS rod/needle proteins, while NLRC4 acts as an adapter for caspase-1 activation. Given the wide presence of flagella and the T3SS in bacteria, the NAIP-NLRC4 inflammasome plays a critical role in anti-bacteria defenses. 
Here, we review the discovery of the NAIP-NLRC4 inflammasome and further discuss recent advances related to its biochemical mechanism and biological function as well as its connection to human autoinflammatory disease.", "title": "" }, { "docid": "d3257b09b8646cf88d41e018d919a190", "text": "Blockchain innovation was initially presented as the innovation behind the Bitcoin decentralized virtual currency, yet there is the desire that its qualities of precise and irreversible information move in a decentralized P2P system could make different applications conceivable. Blockchain an apparently unassuming information structure, and a suite of related conventions, have as of late taken the universes of Finance and Technology by tempest through its earth shattering application in the present day crypto-currency Bitcoin, and all the more so due to the problematic advancements it guarantees. Keywords—blockchain, bitcoin, security, public ledger.", "title": "" }, { "docid": "76a084306ac94b516e7789a044dd8963", "text": "This article provided an overview of the field of microrobotics, including the distinct but related topics of micromanipulation and microrobots. While many interesting results have been shown to date, the greatest results in this field are yet to come.", "title": "" }, { "docid": "4eabc161187126a726a6b65f6fc6c685", "text": "In this paper, we propose a new method to estimate synthetic aperture radar interferometry (InSAR) interferometric phase in the presence of large coregistration errors. The method takes advantage of the coherence information of neighboring pixel pairs to automatically coregister the SAR images and employs the projection of the joint signal subspace onto the corresponding joint noise subspace to estimate the terrain interferometric phase. The method can automatically coregister the SAR images and reduce the interferometric phase noise simultaneously. Theoretical analysis and computer simulation results show that the method can provide accurate estimate of the terrain interferometric phase (interferogram) as the coregistration error reaches one pixel. The effectiveness of the method is also verified with the real data from the Spaceborne Imaging Radar-C/X Band SAR and the European Remote Sensing 1 and 2 satellites.", "title": "" }, { "docid": "dfee7f5f17ff6b0527823ae920b9977a", "text": "This paper introduces a Linux audio application that provides an integrated solution for making full 3-D Ambisonics recordings by using a tetrahedral microphone. Apart from the basic A to B format conversion it performs a number of auxiliary functions such as LF filtering, metering and monitoring, turning it into a complete Ambisonics recording processor. It also allows for calibration of an individual microphone unit based on measured impulse responses. A new JACK backend required to make use of a particular four-channel audio interface optimised for Ambisonic recording is also introduced.", "title": "" }, { "docid": "77579a39108209535de1af9494f205cc", "text": "Sentiment analysis aims to extract the sentiment polarity of given segment of text. Polarity resources that indicate the sentiment polarity of words are commonly used in different approaches. While English is the richest language in regard to having such resources, the majority of other languages, including Turkish, lack polarity resources. 
In this work we present the first comprehensive Turkish polarity resource, SentiTurkNet, where three polarity scores are assigned to each synset in the Turkish WordNet, indicating its positivity, negativity, and objectivity (neutrality) levels. Our method is general and applicable to other languages. Evaluation results for Turkish show that the polarity scores obtained through this method are more accurate compared to those obtained through direct translation (mapping) from SentiWordNet.", "title": "" }, { "docid": "4f8a233a8de165f2aeafbad9c93a767a", "text": "Can images be decomposed into the sum of a geometric part and a textural part? In a theoretical breakthrough, [Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations. Providence, RI: American Mathematical Society, 2001] proposed variational models that force the geometric part into the space of functions with bounded variation, and the textural part into a space of oscillatory distributions. Meyer's models are simple minimization problems extending the famous total variation model. However, their numerical solution has proved challenging. It is the object of a literature rich in variants and numerical attempts. This paper starts with the linear model, which reduces to a low-pass/high-pass filter pair. A simple conversion of the linear filter pair into a nonlinear filter pair involving the total variation is introduced. This new-proposed nonlinear filter pair retains both the essential features of Meyer's models and the simplicity and rapidity of the linear model. It depends upon only one transparent parameter: the texture scale, measured in pixel mesh. Comparative experiments show a better and faster separation of cartoon from texture. One application is illustrated: edge detection.", "title": "" }, { "docid": "aea4b65d1c30e80e7f60a52dbecc78f3", "text": "The aim of this paper is to automate the car and the car parking as well. It discusses a project which presents a miniature model of an automated car parking system that can regulate and manage the number of cars that can be parked in a given space at any given time based on the availability of parking spot. Automated parking is a method of parking and exiting cars using sensing devices. The entering to or leaving from the parking lot is commanded by an Android based application. We have studied some of the existing systems and it shows that most of the existing systems aren't completely automated and require a certain level of human interference or interaction in or with the system. The difference between our system and the other existing systems is that we aim to make our system as less human dependent as possible by automating the cars as well as the entire parking lot, on the other hand most existing systems require human personnel (or the car owner) to park the car themselves. To prove the effectiveness of the system proposed by us we have developed and presented a mathematical model which will be discussed in brief further in the paper.", "title": "" }, { "docid": "266479fe8367698967b7b46dbd767322", "text": "Vendor-managed inventory (VMI) is a supply-chain initiative where the supplier is authorized to manage inventories of agreed-upon stock-keeping units at retail locations. The benefits of VMI are well recognized by successful retail businesses such as Wal-Mart. 
In VMI, distortion of demand information (known as bullwhip effect) transferred from the downstream supply-chain member (e.g., retailer) to the upstream member (e.g., supplier) is minimized, stockout situations are less frequent, and inventory-carrying costs are reduced. Furthermore, a VMI supplier has the liberty of controlling the downstream resupply decisions rather than filling orders as they are placed. Thus, the approach offers a framework for synchronizing inventory and transportation decisions. In this paper, we present an analytical model for coordinating inventory and transportation decisions in VMI systems. Although the coordination of inventory and transportation has been addressed in the literature, our particular problem has not been explored previously. Specifically, we consider a vendor realizing a sequence of random demands from a group of retailers located in a given geographical region. Ideally, these demands should be shipped immediately. However, the vendor has the autonomy of holding small orders until an agreeable dispatch time with the expectation that an economical consolidated dispatch quantity accumulates. As a result, the actual inventory requirements at the vendor are partly dictated by the parameters of the shipment-release policy in use. We compute the optimum replenishment quantity and dispatch frequency simultaneously. We develop a renewaltheoretic model for the case of Poisson demands, and present analytical results. (Vendor-Managed Inventory; Freight Consolidation; Renewal Theory)", "title": "" }, { "docid": "5be55ce7d8f97689bf54028063ba63d7", "text": "Early diagnosis, playing an important role in preventing progress and treating the Alzheimer's disease (AD), is based on classification of features extracted from brain images. The features have to accurately capture main AD-related variations of anatomical brain structures, such as, e.g., ventricles size, hippocampus shape, cortical thickness, and brain volume. This paper proposed to predict the AD with a deep 3D convolutional neural network (3D-CNN), which can learn generic features capturing AD biomarkers and adapt to different domain datasets. The 3D-CNN is built upon a 3D convolutional autoencoder, which is pre-trained to capture anatomical shape variations in structural brain MRI scans. Fully connected upper layers of the 3D-CNN are then fine-tuned for each task-specific AD classification. Experiments on the CADDementia MRI dataset with no skull-stripping preprocessing have shown our 3D-CNN outperforms several conventional classifiers by accuracy. Abilities of the 3D-CNN to generalize the features learnt and adapt to other domains have been validated on the ADNI dataset.", "title": "" }, { "docid": "b387476c4ff2b2b5ed92a23c7f065026", "text": "In this article, I review the diagnostic criteria for Gender Identity Disorder (GID) in children as they were formulated in the DSM-III, DSM-III-R, and DSM-IV. The article focuses on the cumulative evidence for diagnostic reliability and validity. It does not address the broader conceptual discussion regarding GID as \"disorder,\" as this issue is addressed in a companion article by Meyer-Bahlburg (2009). This article addresses criticisms of the GID criteria for children which, in my view, can be addressed by extant empirical data. Based in part on reanalysis of data, I conclude that the persistent desire to be of the other gender should, in contrast to DSM-IV, be a necessary symptom for the diagnosis. 
If anything, this would result in a tightening of the diagnostic criteria and may result in a better separation of children with GID from children who display marked gender variance, but without the desire to be of the other gender.", "title": "" }, { "docid": "90bf404069bd3dfff1e6b108dafffe4c", "text": "To illustrate the differing thoughts and emotions involved in guiding habitual and nonhabitual behavior, 2 diary studies were conducted in which participants provided hourly reports of their ongoing experiences. When participants were engaged in habitual behavior, defined as behavior that had been performed almost daily in stable contexts, they were likely to think about issues unrelated to their behavior, presumably because they did not have to consciously guide their actions. When engaged in nonhabitual behavior, or actions performed less often or in shifting contexts, participants' thoughts tended to correspond to their behavior, suggesting that thought was necessary to guide action. Furthermore, the self-regulatory benefits of habits were apparent in the lesser feelings of stress associated with habitual than nonhabitual behavior.", "title": "" }, { "docid": "bd5808b4df3a8dd745971a06de67f251", "text": "-In this paper we investigate the use of the area under the receiver operating characteristic (ROC) curve (AUC) as a performance measure for machine learning algorithms. As a case study we evaluate six machine learning algorithms (C4.5, Multiscale Classifier, Perceptron, Multi-layer Perceptron, k-Nearest Neighbours, and a Quadratic Discriminant Function) on six \"real world\" medical diagnostics data sets. We compare and discuss the use of AUC to the more conventional overall accuracy and find that AUC exhibits a number of desirable properties when compared to overall accuracy: increased sensitivity in Analysis of Variance (ANOVA) tests; a standard error that decreased as both AUC and the number of test samples increased; decision threshold independent; and it is invafiant to a priori class probabilities. The paper concludes with the recommendation that AUC be used in preference to overall accuracy for \"single number\" evaluation of machine learning algorithms. © 1997 Pattern Recognition Society. Published by Elsevier Science Ltd. The ROC curve Cross-validation The area under the ROC curve (AUC) Wilcoxon statistic Standard error Accuracy measures", "title": "" }, { "docid": "af254a16b14a3880c9b8fe5b13f1a695", "text": "MOOCs or Massive Online Open Courses based on Open Educational Resources (OER) might be one of the most versatile ways to offer access to quality education, especially for those residing in far or disadvantaged areas. This article analyzes the state of the art on MOOCs, exploring open research questions and setting interesting topics and goals for further research. Finally, it proposes a framework that includes the use of software agents with the aim to improve and personalize management, delivery, efficiency and evaluation of massive online courses on an individual level basis.", "title": "" } ]
scidocsrr
1900470bf0ffb2e74ad580ee41cabe3a
Using Fast Weights to Attend to the Recent Past
[ { "docid": "592eddc5ada1faf317571e8050d4d82e", "text": "Connectionist models usually have a single weight on each connection. Some interesting new properties emerge if each connection has two weights: A slowly changing, plastic weight which stores long-term knowledge and a fast-changing, elastic weight which stores temporary knowledge and spontaneously decays towards zero. If a network learns a set of associations and then these associations are \"blurred\" by subsequent learning, all the original associations can be \"deblurred\" by rehearsing on just a few of them. The rehearsal allows the fast weights to take on values that temporarily cancel out the changes in the slow weights caused by the subsequent learning.", "title": "" } ]
[ { "docid": "a514a1a4d6eee4a4938d9971e3dc1ea0", "text": "When patients find their pain unacceptable they are likely to attempt to avoid it at all costs and seek readily available interventions to reduce or eliminate it. These efforts may not be in their best interest if the consequences include no reductions in pain and many missed opportunities for more satisfying and productive functioning. The purpose of this study was to examine acceptance of pain. One hundred and sixty adults with chronic pain provided responses to a questionnaire assessing acceptance of pain, and a number of other questionnaires assessing their adjustment to pain. Correlational analyses showed that greater acceptance of pain was associated with reports of lower pain intensity, less pain-related anxiety and avoidance, less depression, less physical and psychosocial disability, more daily uptime, and better work status. A relatively low correlation between acceptance and pain intensity showed that acceptance is not simply a function of having a low level of pain. Regression analyses showed that acceptance of pain predicted better adjustment on all other measures of patient function, independent of perceived pain intensity. These results are preliminary. Further study will be needed to show for whom and under what circumstances, accepting some aspects of the pain experience may be beneficial.", "title": "" }, { "docid": "7436bf163d0dcf6d2fbe8ccf66431caf", "text": "Zh h{soruh ehkdylrudo h{sodqdwlrqv iru vxe0rswlpdo frusrudwh lqyhvwphqw ghflvlrqv1 Irfxvlqj rq wkh vhqvlwlylw| ri lqyhvwphqw wr fdvk rz/ zh dujxh wkdw shuvrqdo fkdudfwhulvwlfv ri fklhi h{hfxwlyh r fhuv/ lq sduwlfxodu ryhufrq ghqfh/ fdq dffrxqw iru wklv zlghvsuhdg dqg shuvlvwhqw lqyhvwphqw glvwruwlrq1 Ryhufrq ghqw FHRv ryhuhvwlpdwh wkh txdolw| ri wkhlu lqyhvwphqw surmhfwv dqg ylhz h{whuqdo qdqfh dv xqgxo| frvwo|1 Dv d uhvxow/ wkh| lqyhvw pruh zkhq wkh| kdyh lqwhuqdo ixqgv dw wkhlu glvsrvdo1 Zh whvw wkh ryhufrq ghqfh k|srwkhvlv/ xvlqj gdwd rq shuvrqdo sruwirolr dqg frusrudwh lqyhvwphqw ghflvlrqv ri FHRv lq Iruehv 833 frpsdqlhv1 Zh fodvvli| FHRv dv ryhufrq ghqw li wkh| uhshdwhgo| idlo wr h{huflvh rswlrqv wkdw duh kljko| lq wkh prqh|/ ru li wkh| kdelwxdoo| dftxluh vwrfn ri wkhlu rzq frpsdq|1 Wkh pdlq uhvxow lv wkdw lqyhvwphqw lv vljql fdqwo| pruh uhvsrqvlyh wr fdvk rz li wkh FHR glvsod|v ryhufrq ghqfh1 Lq dgglwlrq/ zh lghqwli| shuvrqdo fkdudfwhulvwlfv rwkhu wkdq ryhufrq ghqfh +hgxfdwlrq/ hpsor|phqw edfnjurxqg/ frkruw/ plolwdu| vhuylfh/ dqg vwdwxv lq wkh frpsdq|, wkdw vwurqjo| d hfw wkh fruuhodwlrq ehwzhhq lqyhvwphqw dqg fdvk rz1", "title": "" }, { "docid": "b138b6f852aaebaa009952131f2619b2", "text": "The goal of this thesis proposal is injecting knowledge/constraints into neural models, primarily for natural language processing (NLP) tasks. While neural models have set new state of the art performance in many tasks from vision to NLP, they often fail to learn simple rules necessary for well-formed structures unless there are immense amount of training data. The thesis proposes that not all the aspects of the model have to be learned from the data itself and injecting simple knowledge/constraints into the neural models can help low-resource tasks as well as improving state-of-the-art models. The proposal focuses on the structural knowledge of the output space and injects knowledge of correct or preferred structures as an objective to the model without modification to the model structure in a model-agnostic way. 
The first benefit in focusing on the knowledge of output space is that it is intuitive as we can directly enforce outputs to satisfy logical/linguistic constraints. Another advantage of structural knowledge is that it often does not require labeled dataset. Focusing on deterministic constraints on the output values, this thesis proposal first applies output constraints on inference time via proposed gradient-based inference (GBI) method. In the spirit of gradient-based training, GBI enforces constraints for each input at test-time by optimizing continuous model weights until the network’s inference procedure generates an output that satisfies the constraints. The proposal shows that constraint injection on inference-time can be extended to the training time: from instance-based optimization on test time to generalization to multiple instances in training time. In training with structural constraints, the thesis proposal presents (1) structural constraint loss, (2) joint objective of structural loss and supervised loss on training set and lastly (3) joint objective on semi-supervised setting. All the loss functions show improvements and the (3) semi-supervised approach shows the largest improvement, particularly effective on low-resource setting, among them. The analysis shows that the efforts on training time and on inference time are complementary rather than exclusive: the performance is best when efforts on train-time and inference-time methods are combined. Lastly, the thesis proposes to extend the completed work to generalized span-based models and to domain adaptation where target domain is unlabeled. Moreover, the thesis proposal promises to explore additional methodology that might bring bigger gains through constraint injection compared to currently proposed approaches. In the next sections, I provide brief description on the applications I worked with the constraints I injected per problem.", "title": "" }, { "docid": "2316c2c0115dd0d59f5a0a3c44a246d7", "text": "Today's organizations are highly dependent on information management and processes. Information security is one of the top issues for researchers and practitioners. In literature, there is consent that employees are the weakest link in IS security. A variety of researchers discuss explanations for employees' security related awareness and behavior. This paper presents a theory-based literature review of the extant approaches used within employees' information security awareness and behavior research over the past decade. In total, 113 publications were identified and analyzed. The information security research community covers 54 different theories. Focusing on the four main behavioral theories, a state-of-the-art overview of employees' security awareness and behavior research over the past decade is given. From there, gaps in existing research are uncovered and implications and recommendations for future research are discussed. The literature review might also be useful for practitioners that need information about behavioral factors that are critical to the success of a organization's security awareness.", "title": "" }, { "docid": "2f0769d0f3a1c29a3b794f964a2a560c", "text": "We propose a statistical method based on graphical Gaussian models for estimating large gene networks from DNA microarray data. In estimating large gene networks, the number of genes is larger than the number of samples, we need to consider some restrictions for model building.
We propose weighted lasso estimation for the graphical Gaussian models as a model of large gene networks. In the proposed method, the structural learning for gene networks is equivalent to the selection of the regularization parameters included in the weighted lasso estimation. We investigate this problem from a Bayes approach and derive an empirical Bayesian information criterion for choosing them. Unlike Bayesian network approach, our method can find the optimal network structure and does not require to use heuristic structural learning algorithm. We conduct Monte Carlo simulation to show the effectiveness of the proposed method. We also analyze Arabidopsis thaliana microarray data and estimate gene networks.", "title": "" }, { "docid": "c1ca3f495400a898da846bdf20d23833", "text": "It is very useful to integrate human knowledge and experience into traditional neural networks for faster learning speed, fewer training samples and better interpretability. However, due to the obscured and indescribable black box model of neural networks, it is very difficult to design its architecture, interpret its features and predict its performance. Inspired by human visual cognition process, we propose a knowledge-guided semantic computing network which includes two modules: a knowledge-guided semantic tree and a data-driven neural network. The semantic tree is pre-defined to describe the spatial structural relations of different semantics, which just corresponds to the tree-like description of objects based on human knowledge. The object recognition process through the semantic tree only needs simple forward computing without training. Besides, to enhance the recognition ability of the semantic tree in aspects of the diversity, randomicity and variability, we use the traditional neural network to aid the semantic tree to learn some indescribable features. Only in this case, the training process is needed. The experimental results on MNIST and GTSRB datasets show that compared with the traditional data-driven network, our proposed semantic computing network can achieve better performance with fewer training samples and lower computational complexity. Especially, Our model also has better adversarial robustness than traditional neural network with the help of human knowledge.", "title": "" }, { "docid": "dc98ddb6033ca1066f9b0ba5347a3d0c", "text": "Modern ab initio methods have rapidly increased our understanding of solid state materials properties, chemical reactions, and the quantum interactions between atoms. However, poor scaling often renders direct ab initio calculations intractable for large or complex systems. There are two obvious avenues through which to remedy this problem: (i) develop new, less expensive methods to calculate system properties, or (ii) make existing methods faster. This paper describes an open source framework designed to pursue both of these avenues. PROPhet (short for PROPerty Prophet) utilizes machine learning techniques to find complex, non-linear mappings between sets of material or system properties. The result is a single code capable of learning analytical potentials, non-linear density functionals, and other structure-property or property-property relationships. These capabilities enable highly accurate mesoscopic simulations, facilitate computation of expensive properties, and enable the development of predictive models for systematic materials design and optimization. 
This work explores the coupling of machine learning to ab initio methods through means both familiar (e.g., the creation of various potentials and energy functionals) and less familiar (e.g., the creation of density functionals for arbitrary properties), serving both to demonstrate PROPhet’s ability to create exciting post-processing analysis tools and to open the door to improving ab initio methods themselves with these powerful machine learning techniques.", "title": "" }, { "docid": "07cfc30244cb9269861a7db9ad594ad4", "text": "In this paper we report on results from a cross-sectional survey with manufacturers in four typical Chinese industries, i.e., power generating, chemical/petroleum, electrical/electronic and automobile, to evaluate their perceived green supply chain management (GSCM) practices and relate them to closing the supply chain loop. Our findings provide insights into the capabilities of Chinese organizations on the adoption of GSCM practices in different industrial contexts and that these practices are not considered equitably across the four industries. Academic and managerial implications of our findings are discussed.", "title": "" }, { "docid": "6be97ac80738519792c02b033563efa7", "text": "Title of Document: SPIN: LEXICAL SEMANTICS, TRANSITIVITY, AND THE IDENTIFICATION OF IMPLICIT SENTIMENT Stephan Charles Greene Doctor of Philosophy, 2007 Directed By: Professor Philip Resnik, Department of Linguistics and Institute for Advanced Computer Studies. Current interest in automatic sentiment analysis is motivated by a variety of information requirements. The vast majority of work in sentiment analysis has been specifically targeted at detecting subjective statements and mining opinions. This dissertation focuses on a different but related problem that to date has received relatively little attention in NLP research: detecting implicit sentiment, or spin, in text. This text classification task is distinguished from other sentiment analysis work in that there is no assumption that the documents to be classified with respect to sentiment are necessarily overt expressions of opinion. They rather are documents that might reveal a perspective. This dissertation describes a novel approach to the identification of implicit sentiment, motivated by ideas drawn from the literature on lexical semantics and argument structure, supported and refined through psycholinguistic experimentation. A relationship predictive of sentiment is established for components of meaning that are thought to be drivers of verbal argument selection and linking and to be arbiters of what is foregrounded or backgrounded in discourse. In computational experiments employing targeted lexical selection for verbs and nouns, a set of features reflective of these components of meaning is extracted for the terms. As observable proxies for the underlying semantic components, these features are exploited using machine learning methods for text classification with respect to perspective. After initial experimentation with manually selected lexical resources, the method is generalized to require no manual selection or hand tuning of any kind. The robustness of this linguistically motivated method is demonstrated by successfully applying it to three distinct text domains under a number of different experimental conditions, obtaining the best classification accuracies yet reported for several sentiment classification tasks.
A novel graph-based classifier combination method is introduced which further improves classification accuracy by integrating statistical classifiers with models of inter-document relationships.", "title": "" }, { "docid": "a27fa3c9af957a964c904c1b456f5184", "text": "The real world use and design of personal informatics has been increasingly explored in HCI research in the last five years. However, personal informatics research is still a young multidisciplinary area of concern facing unrecognised methodological differences and offering unarticulated design challenges. In this review, we analyse how personal informatics has been approached so far using the Grounded Theory Literature Review method. We identify a (1) psychologically, (2) phenomenologically, and (3) humanistically informed stream and provide guidance on the design of future personal informatics systems by mapping out rising concerns and emerging research directions.", "title": "" }, { "docid": "a713b20398c1eb4d8490ccf2681a748f", "text": "The discovery of liposome or lipid vesicle emerged from self forming enclosed lipid bi-layer upon hydration; liposome drug delivery systems have played a significant role in formulation of potent drug to improve therapeutics. Recently the liposome formulations are targeted to reduce toxicity and increase accumulation at the target site. There are several new methods of liposome preparation based on lipid drug interaction and liposome disposition mechanism including the inhibition of rapid clearance of liposome by controlling particle size, charge and surface hydration. Most clinical applications of liposomal drug delivery are targeting to tissue with or without expression of target recognition molecules on lipid membrane. The liposomes are characterized with respect to physical, chemical and biological parameters. The sizing of liposome is also critical parameter which helps characterize the liposome which is usually performed by sequential extrusion at relatively low pressure through polycarbonate membrane (PCM). This mode of drug delivery lends more safety and efficacy to administration of several classes of drugs like antiviral, antifungal, antimicrobial, vaccines, anti-tubercular drugs and gene therapeutics. Present applications of the liposomes are in the immunology, dermatology, vaccine adjuvant, eye disorders, brain targeting, infective disease and in tumour therapy. The new developments in this field are the specific binding properties of a drug-carrying liposome to a target cell such as a tumor cell and specific molecules in the body (antibodies, proteins, peptides etc.); stealth liposomes which are especially being used as carriers for hydrophilic (water soluble) anticancer drugs like doxorubicin, mitoxantrone; and bisphosphonate-liposome mediated depletion of macrophages. This review would be a help to the researchers working in the area of liposomal drug delivery.", "title": "" }, { "docid": "e95bef9aac5bb118109d82dec750da26", "text": "A novel microstrip circular disc monopole antenna with a reconfigurable 10-dB impedance bandwidth is proposed in this communication for cognitive radios (CRs). The antenna is fed by a microstrip line integrated with a bandpass filter based on a three-line coupled resonator (TLCR). The reconfiguration of the filter enables the monopole antenna to operate at either a wideband state or a narrowband state by using a PIN diode.
For the narrowband state, two varactor diodes are employed to change the antenna operating frequency from 3.9 to 4.82 GHz continuously, which is different from previous work using PIN diodes to realize a discrete tuning. Similar radiation patterns with low cross-polarization levels are achieved for the two operating states. Measured results on tuning range, radiation patterns, and realized gains are provided, which show good agreement with numerical simulations.", "title": "" }, { "docid": "28f106c6d6458f619cdc89967d5648cd", "text": "Term graphs constructed from document collections as well as external resources, such as encyclopedias (DBpedia) and knowledge bases (Freebase and ConceptNet), have been individually shown to be effective sources of semantically related terms for query expansion, particularly in case of difficult queries. However, it is not known how they compare with each other in terms of retrieval effectiveness. In this work, we use standard TREC collections to empirically compare the retrieval effectiveness of these types of term graphs for regular and difficult queries. Our results indicate that the term association graphs constructed from document collections using information theoretic measures are nearly as effective as knowledge graphs for Web collections, while the term graphs derived from DBpedia, Freebase and ConceptNet are more effective than term association graphs for newswire collections. We also found out that the term graphs derived from ConceptNet generally outperformed the term graphs derived from DBpedia and Freebase.", "title": "" }, { "docid": "27bc95568467efccb3e6cc185e905e42", "text": "Major studios and independent production firms (Indies) often have to select or “greenlight” a portfolio of scripts to turn into movies. Despite the huge financial risk at stake, there is currently no risk management tool they can use to aid their decisions, even though such a tool is sorely needed. In this paper, we developed a forecasting and risk management tool, based on movies scripts, to aid movie studios and production firms in their green-lighting decisions. The methodology developed can also assist outside investors if they have access to the scripts. Building upon and extending the previous literature, we extracted three levels of textual information (genre/content, bag-of-words, and semantics) from movie scripts. We then incorporate these textual variables as predictors, together with the contemplated production budget, into a BART-QL (Bayesian Additive Regression Tree for Quasi-Linear) model to obtain the posterior predictive distributions, rather than point forecasts, of the box office revenues for the corresponding movies. We demonstrate how the predictive distributions of box office revenues can potentially be used to help movie producers intelligently select their movie production portfolios based on their risk preferences, and we describe an illustrative analysis performed for an independent production firm.", "title": "" }, { "docid": "87a14f9cfdec433672095c2b0d9b9dde", "text": "This paper discusses a comprehensive suite of experiments that analyze the performance of the random forest (RF) learner implemented in Weka. RF is a relatively new learner, and to the best of our knowledge, only preliminary experimentation on the construction of random forest classifiers in the context of imbalanced data has been reported in previous work. 
Therefore, the contribution of this study is to provide an extensive empirical evaluation of RF learners built from imbalanced data. What should be the recommended default number of trees in the ensemble? What should the recommended value be for the number of attributes? How does the RF learner perform on imbalanced data when compared with other commonly-used learners? We address these and other related issues in this work.", "title": "" }, { "docid": "f71c8f16ffeaacf8e7d81b357957ad89", "text": "Multi-antenna technologies such as beamforming and Multiple-Input, Multiple-Output (MIMO) are anticipated to play a key role in “5G” systems, which are expected to be deployed in the year 2020 and beyond. With a class of 5G systems expected to be deployed in both cm-wave (3-30 GHz) and mm-wave (30-300 GHz) bands, the unique characteristics and challenges of those bands have prompted a revisiting of the design and performance tradeoffs associated with existing multi-antenna techniques in order to determine the preferred framework for deploying MIMO technology in 5G systems. In this paper, we discuss key implementation issues surrounding the deployment of transmit MIMO processing for 5G systems. We describe MIMO architectures where the transmit MIMO processing is implemented at baseband, RF, and a combination of RF and baseband (a hybrid approach). We focus on the performance and implementation issues surrounding several candidate techniques for multi-user-MIMO (MU-MIMO) transmission in the mm-wave bands.", "title": "" }, { "docid": "a878a2dbf66e9da4526f7d2926e497b2", "text": "As we previously reported, resonant frequency heart rate variability biofeedback increases baroreflex gain and peak expiratory flow in healthy individuals and has positive effects in treatment of asthma patients. Biofeedback readily produces large oscillations in heart rate, blood pressure, vascular tone, and pulse amplitude via paced breathing at the specific natural resonant frequency of the cardiovascular system for each individual. This paper describes how resonance properties of the cardiovascular system mediate the effects of heart rate variability biofeedback. There is evidence that resonant oscillations can train autonomic reflexes to provide therapeutic effect. The paper is based on studies described in previous papers. Here, we discuss the origin of the resonance phenomenon, describe our procedure for determining an individual's resonant frequency, and report data from 32 adult asthma patients and 24 healthy adult subjects, showing a negative relationship between resonant frequency and height, and a lower resonant frequency in men than women, but no relationship between resonant frequency and age, weight, or presence of asthma. Resonant frequency remains constant across 10 sessions of biofeedback training. It appears to be related to blood volume.", "title": "" }, { "docid": "36bc32033cbecf8ee00c5ec84ef26cfa", "text": "Most of the device's technology has been moving towards the complex and produce of Nano-IC with demands for cheaper cost, smaller size and better thermal and electrical performance. One of the marketable packages is Quad Flat No-Lead (QFN) package. Due to the high demand of miniaturization of electronic products, QFN development becomes more promising, such as the lead frame design with half edge, cheaper tape, shrinkage of package size as to achieve more units per lead frame (cost saving) and etc [1]. 
The improvement methods in the lead frame design, such as lead frame metal tie bar and half edge features are always the main challenges for QFN package. With reduced the size of metal tie bar, it will fasten the package singulation process, whereas the half edge is designed for the mold compound locking for delamination reduction purpose. This paper specifically will discuss how the critical wire bonding parameters, capillary design and environmental conditions interact each other result to the unstable leads (second bond failures). During the initial evaluation of new package SOT1261 with rough PPF lead frame, several short tails and fish tails observed on wedge bond when applied with the current parameter setting which have been qualified in other packages with same wire size (18um Au wire). These problems did not surface out in earlier qualified devices mainly due to the second bond parameter robustness, capillary designs, lead frame design changes, different die packages, lead frame batches and contamination levels. One of the main root cause been studied is the second bond parameter setting which is not robust enough for the flimsy lead frame. The new bonding methodology, with the concept of low base ultrasonic and high force setting applied together with scrubbing mechanism to eliminate the fish tail bond and also reduce short tail occurrence on wedge. Wire bond parameters optimized to achieve zero fish tail, and wedge pull reading with >4.0gf. Destructive test such as wedge pull test used to test the bonding quality. Failure modes are analyzed using high power optical scope microscope and Scanning Electronic Microscope (SEM). By looking through into all possible root causes, and identifying how the factors are interacting, some efforts on the Design of Experiments (DOE) are carried out and good solutions were implemented.", "title": "" }, { "docid": "2d2465aff21421330f82468858a74cf4", "text": "There has been a tremendous increase in popularity and adoption of wearable fitness trackers. These fitness trackers predominantly use Bluetooth Low Energy (BLE) for communicating and syncing the data with user's smartphone. This paper presents a measurement-driven study of possible privacy leakage from BLE communication between the fitness tracker and the smartphone. Using real BLE traffic traces collected in the wild and in controlled experiments, we show that majority of the fitness trackers use unchanged BLE address while advertising, making it feasible to track them. The BLE traffic of the fitness trackers is found to be correlated with the intensity of user's activity, making it possible for an eavesdropper to determine user's current activity (walking, sitting, idle or running) through BLE traffic analysis. Furthermore, we also demonstrate that the BLE traffic can represent user's gait which is known to be distinct from user to user. This makes it possible to identify a person (from a small group of users) based on the BLE traffic of her fitness tracker. As BLE-based wearable fitness trackers become widely adopted, our aim is to identify important privacy implications of their usage and discuss prevention strategies.", "title": "" }, { "docid": "be9b40cc2e2340249584f7324e26c4d3", "text": "This paper provides a unified account of two schools of thinking in information retrieval modelling: the generative retrieval focusing on predicting relevant documents given a query, and the discriminative retrieval focusing on predicting relevancy given a query-document pair. 
We propose a game theoretical minimax game to iteratively optimise both models. On one hand, the discriminative model, aiming to mine signals from labelled and unlabelled data, provides guidance to train the generative model towards fitting the underlying relevance distribution over documents given the query. On the other hand, the generative model, acting as an attacker to the current discriminative model, generates difficult examples for the discriminative model in an adversarial way by minimising its discrimination objective. With the competition between these two models, we show that the unified framework takes advantage of both schools of thinking: (i) the generative model learns to fit the relevance distribution over documents via the signals from the discriminative model, and (ii) the discriminative model is able to exploit the unlabelled data selected by the generative model to achieve a better estimation for document ranking. Our experimental results have demonstrated significant performance gains as much as 23.96% on Precision@5 and 15.50% on MAP over strong baselines in a variety of applications including web search, item recommendation, and question answering.", "title": "" } ]
scidocsrr
71fdf0da06e7693d69fe8ae6d4fa63d8
Out-Of-Core Algorithms for Scientific Visualization and Computer Graphics
[ { "docid": "1d8db3e4aada7f5125cd72df4dfab1f4", "text": "Advances in 3D scanning technologies have enabled the practical creation of meshes with hundreds of millions of polygons. Traditional algorithms for display, simplification, and progressive transmission of meshes are impractical for data sets of this size. We describe a system for representing and progressively displaying these meshes that combines a multiresolution hierarchy based on bounding spheres with a rendering system based on points. A single data structure is used for view frustum culling, backface culling, level-of-detail selection, and rendering. The representation is compact and can be computed quickly, making it suitable for large data sets. Our implementation, written for use in a large-scale 3D digitization project, launches quickly, maintains a user-settable interactive frame rate regardless of object complexity or camera position, yields reasonable image quality during motion, and refines progressively when idle to a high final image quality. We have demonstrated the system on scanned models containing hundreds of millions of samples.", "title": "" } ]
[ { "docid": "d0649a8b51f61ead177dc60838d749b4", "text": "Reduction otoplasty is an uncommon procedure performed for macrotia and ear asymmetry. Techniques described in the literature for this procedure are few. The authors present their ear reduction approach that not only achieves the desired reduction effectively and accurately, but also addresses and creates the natural anatomic proportions of the ear, leaving a scar well hidden within the fold of the helix.", "title": "" }, { "docid": "0c2e489edeac2c8ad5703eda644edfac", "text": "Nowadays, more and more decision procedures are supported or even guided by automated processes. An important technique in this automation is data mining. In this chapter we study how such automatically generated decision support models may exhibit discriminatory behavior towards certain groups based upon, e.g., gender or ethnicity. Surprisingly, such behavior may even be observed when sensitive information is removed or suppressed and the whole procedure is guided by neutral arguments such as predictive accuracy only. The reason for this phenomenon is that most data mining methods are based upon assumptions that are not always satisfied in reality, namely, that the data is correct and represents the population well. In this chapter we discuss the implicit modeling assumptions made by most data mining algorithms and show situations in which they are not satisfied. Then we outline three realistic scenarios in which an unbiased process can lead to discriminatory models. The effects of the implicit assumptions not being fulfilled are illustrated by examples. The chapter concludes with an outline of the main challenges and problems to be solved.", "title": "" }, { "docid": "562a86a07858a118fd5beef075247341", "text": "Despite the criticism concerning the value of TV content, research reveals several worthwhile aspects -- one of them is the opportunity to learn. In this article we explore the characteristics of interactive TV applications that facilitate education and interactive entertainment. In doing so we analyze research methods and empirical results from experimental and field studies. The findings suggest that interactive TV applications provide support for education and entertainment for children and young people, as well as continuous education for all. In particular, interactive TV is especially suitable for (1) informal learning and (2) for engaging and motivating its audience. We conclude with an agenda for future interactive TV research in entertainment and education.", "title": "" }, { "docid": "5855428c40fd0e25e0d05554d2fc8864", "text": "When the landmark patient Phineas Gage died in 1861, no autopsy was performed, but his skull was later recovered. The brain lesion that caused the profound personality changes for which his case became famous has been presumed to have involved the left frontal region, but questions have been raised about the involvement of other regions and about the exact placement of the lesion within the vast frontal territory. Measurements from Gage's skull and modern neuroimaging techniques were used to reconstitute the accident and determine the probable location of the lesion. 
The damage involved both left and right prefrontal cortices in a pattern that, as confirmed by Gage's modern counterparts, causes a defect in rational decision making and the processing of emotion.", "title": "" }, { "docid": "930515101a83dd668ef6769c9626416c", "text": "Users speaking different languages may prefer different patterns in creating their passwords, and thus knowledge on English passwords cannot help to guess passwords from other languages well. Research has already shown Chinese passwords are one of the most difficult ones to guess. We believe that the conclusion is biased because, to the best of our knowledge, little empirical study has examined regional differences of passwords on a large scale, especially on Chinese passwords. In this paper, we study the differences between passwords from Chinese and English speaking users, leveraging over 100 million leaked and publicly available passwords from Chinese and international websites in recent years. We found that Chinese prefer digits when composing their passwords while English users prefer letters, especially lowercase letters. However, their strength against password guessing is similar. Second, we observe that both users prefer to use the patterns that they are familiar with, e.g., Chinese Pinyins for Chinese and English words for English users. Third, we observe that both Chinese and English users prefer their conventional format when they use dates to construct passwords. Based on these observations, we improve a PCFG (Probabilistic Context-Free Grammar) based password guessing method by inserting Pinyins (about 2.3% more entries) into the attack dictionary and insert our observed composition rules into the guessing rule set. As a result, our experiments show that the efficiency of password guessing increases by 34%.", "title": "" }, { "docid": "9c9b05cfa115198706cad6dba4f976cd", "text": "Despite decades of research into how professional programmers debug, only recently has work emerged about how end-user programmers attempt to debug programs. Without this knowledge, we cannot build tools to adequately support their needs. This article reports the results of a detailed qualitative empirical study of end-user programmers' sensemaking about a spreadsheet's correctness. Using our study's data, we derived a sensemaking model for end-user debugging and categorized participants' activities and verbalizations according to this model, allowing us to investigate how participants went about debugging. Among the results are identification of the prevalence of information foraging during end-user debugging, two successful strategies for traversing the sensemaking model, potential ties to gender differences in the literature, sensemaking sequences leading to debugging progress, and sequences tied with troublesome points in the debugging process. The results also reveal new implications for the design of spreadsheet tools to support end-user programmers' sensemaking during debugging.", "title": "" }, { "docid": "fe41b1a74e7ae8eff09e37d4766d6bac", "text": "This paper deals with the characterization of the feasible workspace of set-point control for a cable suspended robot. The motivation behind this work is to find admissible set points for the system under disturbances as well as input constraints. 
The main ideas are: (i) designing a sliding mode controller as a stabilizing controller for the given uncertain system, (ii) finding the range of system states in terms of set points by analyzing the reaching condition and sliding mode, and (iii) substituting states in inequalities of the input with either their upper values or lower values so that constraints are satisfied. This method results in 6 inequalities in terms of set point which can be drawn graphically in the 3-dimensional space.", "title": "" }, { "docid": "9ae543f8f58d9ee8e8895aa1f463bd46", "text": "In this paper, we investigate the resource allocation problem for unmanned aerial vehicle (UAV)-assisted networks, where a UAV acting as an energy source provides radio frequency energy for multiple energy harvesting-powered device-to-device (D2D) pairs with much information to be transmitted. The goal is to maximize the average throughput within a time horizon while satisfying the energy causality constraint under a generalized harvest-transmit-store model, which results in a non-convex problem. By introducing the Lagrangian relaxation method, we analytically show that the behavior of all D2D pairs at each time slot is exclusive: harvesting energy or transmitting information signals. The formulated non-convex optimization problem is thus transformed into a mixed integer nonlinear programming (MINIP). We then design an efficient resource allocation algorithm to solve this MINIP, where D.C. (difference of two convex functions) programming and golden section method are combined to achieve a suboptimal solution. Furthermore, we provide an idea to reduce the computational complexity for facilitating the application in practice. Simulations are conducted to validate the effectiveness of the proposed algorithm and evaluate the system throughput performance.", "title": "" }, { "docid": "11ea49c1f38ea5d24bc0eae09de3d8cb", "text": "BACKGROUND\nUnderprojection and lack of tip definition often coexist. Techniques that improve both nasal tip refinement and projection are closely interrelated, and an algorithmic approach can be developed to improve the predictability of the dynamic changes that occur. Use of nondestructive and nonpalpable techniques that enhance nasal tip shape are emphasized.\n\n\nMETHODS\nA retrospective review of primary rhinoplasty patients was undertaken to delineate the precise role of preoperative analysis, intraoperative evaluation, and execution of specific surgical techniques in creating nasal tip refinement and projection. 
Specific case studies are used to demonstrate the efficacy and predictability of these maneuvers.\n\n\nRESULTS\nSuccessful tip refinement and projection depends on (1) proper preoperative analysis of the deformity; (2) a fundamental understanding of the intricate and dynamic relationships between tip-supporting structures that contribute to nasal tip shape and projection; and (3) execution of the operative plan using controlled, nondestructive, and predictable surgical techniques.\n\n\nCONCLUSIONS\nA simplified algorithmic approach to creating aesthetic nasal tip shape and projection in primary rhinoplasty has been established to aid the rhinoplasty surgeon in reducing the inherent unpredictability of combined techniques and improving long-term aesthetic outcomes.", "title": "" }, { "docid": "4daad9b24e477160999f350043125116", "text": "Recent research studied the problem of publishing microdata without revealing sensitive information, leading to the privacy preserving paradigms of k-anonymity and `-diversity. k-anonymity protects against the identification of an individual’s record. `-diversity, in addition, safeguards against the association of an individual with specific sensitive information. However, existing approaches suffer from at least one of the following drawbacks: (i) The information loss metrics are counter-intuitive and fail to capture data inaccuracies inflicted for the sake of privacy. (ii) `-diversity is solved by techniques developed for the simpler k-anonymity problem, which introduces unnecessary inaccuracies. (iii) The anonymization process is inefficient in terms of computation and I/O cost. In this paper we propose a framework for efficient privacy preservation that addresses these deficiencies. First, we focus on one-dimensional (i.e., single attribute) quasiidentifiers, and study the properties of optimal solutions for k-anonymity and `-diversity, based on meaningful information loss metrics. Guided by these properties, we develop efficient heuristics to solve the one-dimensional problems in linear time. Finally, we generalize our solutions to multi-dimensional quasi-identifiers using space-mapping techniques. Extensive experimental evaluation shows that our techniques clearly outperform the state-of-the-art, in terms of execution time and information loss.", "title": "" }, { "docid": "83bec63fb2932aec5840a9323cc290b4", "text": "This paper extends fully-convolutional neural networks (FCN) for the clothing parsing problem. Clothing parsing requires higher-level knowledge on clothing semantics and contextual cues to disambiguate fine-grained categories. We extend FCN architecture with a side-branch network which we refer outfit encoder to predict a consistent set of clothing labels to encourage combinatorial preference, and with conditional random field (CRF) to explicitly consider coherent label assignment to the given image. The empirical results using Fashionista and CFPD datasets show that our model achieves state-of-the-art performance in clothing parsing, without additional supervision during training. We also study the qualitative influence of annotation on the current clothing parsing benchmarks, with our Web-based tool for multi-scale pixel-wise annotation and manual refinement effort to the Fashionista dataset. Finally, we show that the image representation of the outfit encoder is useful for dress-up image retrieval application.", "title": "" }, { "docid": "eea39d8d330abc540b0cf782a0bc605f", "text": "0195-6663/$ see front matter 2011 Elsevier Ltd. 
Vegetarianism, the practice of abstaining from eating meat, has a recorded history dating back to ancient Greece. Despite this, it is only in recent years that researchers have begun conducting empirical investigations of the practices and beliefs associated with vegetarianism. The present article reviews the extant literature, exploring variants of and motivations for vegetarianism, differences in attitudes, values and worldviews between omnivores and vegetarians, as well as the pronounced gender differences in meat consumption and vegetarianism. Furthermore, the review highlights the extremely limited cultural scope of the present data, and calls for a broader investigation across non-Western cultures.", "title": "" }, { "docid": "fd1d1407f8911c8782c5bd41974371e7", "text": "This communication presents the design of down tilted 3D Taper Slot Antennas suitable for base stations of future generation mobile communication. We designed a novel Vivaldi antenna with wide frequency band performance covering a larger band than the 4th Generation with a vertical tilting to suit the base station coverage requirements. A downscaled version of this metal only antenna is advantageous for higher frequency bands of the 5th Generation. The available space inside of the 3D structure is suitable to integrate a low noise amplifier as near as possible to the feeding point minimizing the losses in the cable while maintaining good performance in terms of gain and efficiency. Circular antenna array of (4G) and (5G) antenna elements are simulated in terms of reflection coefficient and inter-element coupling. We developed and measured a prototype of a single (4G) antenna.
Good agreement is obtained between simulation and measurements.", "title": "" }, { "docid": "f5e6df40898a5b84f8e39784f9b56788", "text": "OBJECTIVE\nTo determine the prevalence of anxiety and depression among medical students at Nishtar Medical College, Multan.\n\n\nMETHODS\nA cross-sectional study was carried out at Nishtar Medical College, Multan in 2008. The questionnaire was administered to 815 medical students who had spent more than 6 months in college and had no self reported physical illness. They were present at the time of distribution of the questionnaires and consented. Prevalence of anxiety and depression was assessed using a structured validated questionnaire, the Aga Khan University Anxiety and Depression Scale with a cut-off score of 19. Data Analysis was done using SPSS v. 14.\n\n\nRESULTS\nOut of 815 students, 482 completed the questionnaire with a response rate of 59.14%. The mean age of students was 20.66 +/- 1.8 years. A high prevalence of anxiety and depression (43.89%) was found amongst medical students. Prevalence of anxiety and depression among students of first, second, third, fourth and final years was 45.86%, 52.58%, 47.14%, 28.75% and 45.10% respectively. Female students were found to be more depressed than male students (OR = 2.05, 95% CI = 1.42-2.95, p = 0.0001). There was a significant association between the prevalence of anxiety and depression and the respective year of medical college (p = 0.0276). It was seen that age, marital status, locality and total family income did not significantly affect the prevalence of anxiety and depression.\n\n\nCONCLUSIONS\nThe results showed that medical students constitute a vulnerable group that has a high prevalence of psychiatric morbidity comprising of anxiety and depression.", "title": "" }, { "docid": "9cc020e44fd9465206a72037834ec33a", "text": "Recent approaches to global illumination for dynamic scenes achieve interactive frame rates by using coarse approximations to geometry, lighting, or both, which limits scene complexity and rendering quality. High-quality global illumination renderings of complex scenes are still limited to methods based on ray tracing. While conceptually simple, these techniques are computationally expensive. We present an efficient and scalable method to compute global illumination solutions at interactive rates for complex and dynamic scenes. Our method is based on parallel final gathering running entirely on the GPU. At each final gathering location we perform micro-rendering: we traverse and rasterize a hierarchical point-based scene representation into an importance-warped micro-buffer, which allows for BRDF importance sampling. The final reflected radiance is computed at each gathering location using the micro-buffers and is then stored in image-space. We can trade quality for speed by reducing the sampling rate of the gathering locations in conjunction with bilateral upsampling. We demonstrate the applicability of our method to interactive global illumination, the simulation of multiple indirect bounces, and to final gathering from photon maps.", "title": "" }, { "docid": "e44f67fec39390f215b5267c892d1a26", "text": "Primary progressive aphasia (PPA) may be the onset of several neurodegenerative diseases. This study evaluates a cohort of patients with PPA to assess their progression to different clinical syndromes, associated factors that modulate this progression, and patterns of cerebral metabolism linked to different clinical evolutionary forms. 
Thirty-five patients meeting PPA criteria underwent a clinical and neuroimaging 18F-Fluorodeoxyglucose PET evaluation. Survival analysis was performed using time from clinical onset to the development of a non-language symptom or deficit (PPA-plus). Cerebral metabolism was analyzed using Statistical Parametric Mapping. Patients classified into three PPA variants evolved to atypical parkinsonism, behavioral disorder and motor neuron disease in the agrammatic variant; to behavioral disorder in the semantic; and to memory impairment in the logopenic. Median time from the onset of symptoms to PPA-plus was 36 months (31–40, 95 % confidence interval). Right laterality, and years of education were associated to a lower risk of progression, while logopenic variant to a higher risk. Different regions of hypometabolism were identified in agrammatic PPA with parkinsonism, motor neuron disease and logopenic PPA-plus. Clinical course of PPA differs according to each variant. Left anterior temporal and frontal medial hypometabolism in agrammatic variant is linked to motor neuron disease and atypical parkinsonism, respectively. PPA variant, laterality and education may be associated to the risk of progression. These results suggest the possibility that clinical and imaging data could help to predict the clinical course of PPA.", "title": "" }, { "docid": "5eea2c2a57d85c100f4e821759610260", "text": "This paper presents an overview of a multistage signal processing framework to tackle the main challenges in continuous control protocols for motor imagery based synchronous and self-paced BCIs. The BCI can be setup rapidly and automatically even when conducting an extensive search for subject-specific parameters. A new BCI-based game training paradigm which enables assessment of continuous control performance is also introduced. A range of offline results and online analysis of the new game illustrate the potential for the proposed BCI and the advantages of using the game as a BCI training paradigm.", "title": "" } ]
scidocsrr
b14de836c62d0b2bfe5ca9f70ad45150
Robot Representing and Reasoning with Knowledge from Reinforcement Learning
[ { "docid": "006347cd3839d9fabd983e7cc379322d", "text": "Recent progress in both Artificial Intelligence (AI) and Robotics have enabled the development of general purpose robot platforms that are capable of executing a wide variety of complex, temporally extended service tasks in open environments. This article introduces a novel, custom-designed multi-robot platform for research on AI, robotics, and especially Human-Robot Interaction (HRI) for service robots. Called BWIBots, the robots were designed as a part of the Building-Wide Intelligence (BWI) project at the University of Texas at Austin. The article begins with a description of, and justification for, the hardware and software design decisions underlying the BWIBots, with the aim of informing the design of such platforms in the future. It then proceeds to present an overview of various research contributions that have enabled the BWIBots to better (i) execute action sequences to complete user requests, (ii) efficiently ask questions to resolve user requests, (iii) understand human commands given in natural language, and (iv) understand human intention from afar. The article concludes with a look forward towards future research opportunities and applications enabled by the BWIBot platform.", "title": "" } ]
[ { "docid": "45636bc97812ecfd949438c2e8ee9d52", "text": "Single-image super-resolution is a fundamental task for vision applications to enhance the image quality with respect to spatial resolution. If the input image contains degraded pixels, the artifacts caused by the degradation could be amplified by superresolution methods. Image blur is a common degradation source. Images captured by moving or still cameras are inevitably affected by motion blur due to relative movements between sensors and objects. In this work, we focus on the super-resolution task with the presence of motion blur. We propose a deep gated fusion convolution neural network to generate a clear high-resolution frame from a single natural image with severe blur. By decomposing the feature extraction step into two task-independent streams, the dualbranch design can facilitate the training process by avoiding learning the mixed degradation all-in-one and thus enhance the final high-resolution prediction results. Extensive experiments demonstrate that our method generates sharper super-resolved images from low-resolution inputs with high computational efficiency.", "title": "" }, { "docid": "9ec10477ba242675c8bad3a1ca335b38", "text": "PURPOSE\nThis paper explores the importance of family daily routines and rituals for the family's functioning and sense of identity.\n\n\nMETHODS\nThe findings of this paper are derived from an analysis of the morning routines of 40 families with children with disabilities in the United States and Canada. The participants lived in urban and rural areas. Forty of the 49 participants were mothers and the majority of the families were of European descent. Between one and four interviews were conducted with each participant. Topics included the family's story, daily routines, and particular occupations. Data on the morning routines of the families were analyzed for order and affective and symbolic meaning using a narrative approach.\n\n\nFINDINGS\nThe findings are presented as narratives of morning activities in five families. These narratives are examples for rituals, routines, and the absence of a routine. Rituals are discussed in terms of their affective and symbolic qualities, routines are discussed in terms of the order they give to family life, whereas the lack of family routine is discussed in terms of lack of order in the family.\n\n\nCONCLUSIONS\nFamily routines and rituals are organizational and meaning systems that may affect family's ability to adapt them.", "title": "" }, { "docid": "ea048488791219be809072862a061444", "text": "Our object oriented programming approach have great ability to improve the programming behavior for modern system and software engineering but it does not give the proper interaction of real world .In real world , programming required powerful interlinking among properties and characteristics towards the various objects. Basically this approach of programming gives the better presentation of object with real world and provide the better relationship among the objects. 
I have explained the new concept of my neuro object oriented approach. This approach contains many new features, such as originty, a new concept of inheritance, a new concept of encapsulation, object relations with dimensions, originty relations with dimensions and time, categories of NOOPA such as high order thinking objects and low order thinking objects, a differentiation model for achieving the various requirements from the user, and a rotational model.", "title": "" }, { "docid": "3cc97542631d734d8014abfbef652c79", "text": "Internet exchange points (IXPs) are an important ingredient of the Internet AS-level ecosystem - a logical fabric of the Internet made up of about 30,000 ASes and their mutual business relationships whose primary purpose is to control and manage the flow of traffic. Despite the IXPs' critical role in this fabric, little is known about them in terms of their peering matrices (i.e., who peers with whom at which IXP) and corresponding traffic matrices (i.e., how much traffic do the different ASes that peer at an IXP exchange with one another). In this paper, we report on an Internet-wide traceroute study that was specifically designed to shed light on the unknown IXP-specific peering matrices and involves targeted traceroutes from publicly available and geographically dispersed vantage points. Based on our method, we were able to discover and validate the existence of about 44K IXP-specific peering links - nearly 18K more links than were previously known. In the process, we also classified all known IXPs depending on the type of information required to detect them. Moreover, in view of the currently used inferred AS-level maps of the Internet that are known to miss a significant portion of the actual AS relationships of the peer-to-peer type, our study provides a new method for augmenting these maps with IXP-related peering links in a systematic and informed manner.", "title": "" }, { "docid": "b642ed021d27c3df42e44eb2b033b8a3", "text": "5G wireless technology is paving the way to revolutionize future ubiquitous and pervasive networking, wireless applications, and user quality of experience. To realize its potential, 5G must provide considerably higher network capacity, enable massive device connectivity with reduced latency and cost, and achieve considerable energy savings compared to existing wireless technologies. The main objective of this article is to explore the potential of NFV in enhancing 5G radio access networks' functional, architectural, and commercial viability, including increased automation, operational agility, and reduced capital expenditure. The ETSI NFV Industry Specification Group has recently published drafts focused on standardization and implementation of NFV. Harnessing the potential of 5G and network functions virtualization, we discuss how NFV can address critical 5G design challenges through service abstraction and virtualized computing, storage, and network resources. We describe NFV implementation with network overlay and SDN technologies. In our discussion, we cover the first steps in understanding the role of NFV in implementing CoMP, D2D communication, and ultra densified networks.", "title": "" }, { "docid": "1c6078d68891b6600727a82841812666", "text": "Network traffic prediction aims at predicting the subsequent network traffic by using the previous network traffic data. This can serve as a proactive approach for network management and planning tasks.
The family of recurrent neural network (RNN) approaches is known for time series data modeling which aims to predict the future time series based on the past information with long time lags of unrevealed size. RNN contains different network architectures like simple RNN, long short term memory (LSTM), gated recurrent unit (GRU), identity recurrent unit (IRNN) which is capable to learn the temporal patterns and long range dependencies in large sequences of arbitrary length. To leverage the efficacy of RNN approaches towards traffic matrix estimation in large networks, we use various RNN networks. The performance of various RNN networks is evaluated on the real data from GÉANT backbone networks. To identify the optimal network parameters and network structure of RNN, various experiments are done. All experiments are run up to 200 epochs with learning rate in the range [0.01-0.5]. LSTM has performed well in comparison to the other RNN and classical methods. Moreover, the performance of various RNN methods is comparable to LSTM.", "title": "" }, { "docid": "2426809f360a03f0299bc7d96ebb9d41", "text": "Ontology Learning (OL) from a text is a process that consists of text processing, knowledge extraction, and ontology construction. For Arabic language, text processing, and knowledge extraction tasks are not mature as for Latin languages. They have not been integrated into the full Arabic OL pipeline. Currently, there is very little automated support for using knowledge from Arabic literature in semantically-enabled systems. This paper demonstrates the feasibility of using some existing OL methods for Arabic text and elicits proposals for further work toward building open domain OL systems for Arabic. This is done by building an OL system based on some available NLP tools for Arabic text utilizing GATE text analysis system for corpus and annotation management. The prototype is evaluated similarly to other OL systems and its performance is promising and recommended to enable more effective research and application of Arabic ontology learning.", "title": "" }, { "docid": "10646c29afc4cc5c0a36ca508aabb41a", "text": "As high-resolution fingerprint images are becoming more common, the pores have been found to be one of the promising candidates in improving the performance of automated fingerprint identification systems (AFIS). This paper proposes a deep learning approach towards pore extraction. It exploits the feature learning and classification capability of convolutional neural networks (CNNs) to detect pores on fingerprints. Besides, this paper also presents a unique affine Fourier moment-matching (AFMM) method of matching and fusing the scores obtained for three different fingerprint features to deal with both local and global linear distortions. Combining the two aforementioned contributions, an EER of 3.66% can be observed from the experimental results.", "title": "" }, { "docid": "b01232448a782e0a2a01acba4b8ff8db", "text": "Complex event processing (CEP) middleware systems are increasingly adopted to implement distributed applications: they not only dispatch events across components, but also embed part of the application logic into declarative rules that detect situations of interest from the occurrence of specific pattern of events. 
While this approach simplifies the development of large scale event processing applications, writing the rules that correctly capture the application domain arguably remains a difficult and error prone task, which fundamentally lacks consolidated tool support.\n Moving from these premises, this paper introduces CAVE, an efficient approach and tool to support developers in analyzing the behavior of an event processing application. CAVE verifies properties based on the adopted CEP ruleset and on the environmental conditions, and outputs sequences of events that prove the satisfiability or unsatisfiability of each property. The key idea that contributes to the efficiency of CAVE is the translation of the property checking task into a set of constraint solving problems. The paper presents the CAVE approach in detail, describes its prototype implementation and evaluates its performance in a wide range of scenarios.", "title": "" }, { "docid": "595afbb693585eb599a3e4ea8e65807a", "text": "Hypoglycemia is a major challenge of artificial pancreas systems and a source of concern for potential users and parents of young children with Type 1 diabetes (T1D). Early alarms to warn the potential of hypoglycemia are essential and should provide enough time to take action to avoid hypoglycemia. Many alarm systems proposed in the literature are based on interpretation of recent trends in glucose values. In the present study, subject-specific recursive linear time series models are introduced as a better alternative to capture glucose variations and predict future blood glucose concentrations. These models are then used in hypoglycemia early alarm systems that notify patients to take action to prevent hypoglycemia before it happens. The models developed and the hypoglycemia alarm system are tested retrospectively using T1D subject data. A Savitzky-Golay filter and a Kalman filter are used to reduce noise in patient data. The hypoglycemia alarm algorithm is developed by using predictions of future glucose concentrations from recursive models. The modeling algorithm enables the dynamic adaptation of models to inter-/intra-subject variation and glycemic disturbances and provides satisfactory glucose concentration prediction with relatively small error. The alarm systems demonstrate good performance in prediction of hypoglycemia and ultimately in prevention of its occurrence.", "title": "" }, { "docid": "230d380cbe134f01f3711309d8cc8e35", "text": "For privacy concerns to be addressed adequately in today’s machine learning systems, the knowledge gap between the machine learning and privacy communities must be bridged. This article aims to provide an introduction to the intersection of both fields with special emphasis on the techniques used to protect the data.", "title": "" }, { "docid": "f8c3d3211b1a79cb6ef3fa036a849535", "text": "Income is known to be associated with happiness 1 , but debates persist about the exact nature of this relationship 2,3 . Does happiness rise indefinitely with income, or is there a point at which higher incomes no longer lead to greater well-being? We examine this question using data from the Gallup World Poll, a representative sample of over 1.7 million individuals worldwide. Controlling for demographic factors, we use spline regression models to statistically identify points of ‘income satiation’. Globally, we find that satiation occurs at $95,000 for life evaluation and $60,000 to $75,000 for emotional well-being. 
However, there is substantial variation across world regions, with satiation occurring later in wealthier regions. We also find that in certain parts of the world, incomes beyond satiation are associated with lower life evaluations. These findings on income and happiness have practical and theoretical significance at the individual, institutional and national levels. They point to a degree of happiness adaptation 4,5 and that money influences happiness through the fulfilment of both needs and increasing material desires 6 . Jebb et al. use data from the Gallup World Poll to show that happiness does not rise indefinitely with income: globally, income satiation occurs at US$95,000 for life evaluation and US$60,000 to US$75,000 for emotional well-being.", "title": "" }, { "docid": "cdf88e5f188e80a1161f644de35502b0", "text": "Bacteria and archaea have evolved adaptive immune defenses, termed clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated (Cas) systems, that use short RNA to direct degradation of foreign nucleic acids. Here, we engineer the type II bacterial CRISPR system to function with custom guide RNA (gRNA) in human cells. For the endogenous AAVS1 locus, we obtained targeting rates of 10 to 25% in 293T cells, 13 to 8% in K562 cells, and 2 to 4% in induced pluripotent stem cells. We show that this process relies on CRISPR components; is sequence-specific; and, upon simultaneous introduction of multiple gRNAs, can effect multiplex editing of target loci. We also compute a genome-wide resource of ~190 K unique gRNAs targeting ~40.5% of human exons. Our results establish an RNA-guided editing tool for facile, robust, and multiplexable human genome engineering.", "title": "" }, { "docid": "37feedcb9e527601cb28fe59b2526ab3", "text": "In this paper we present a covariance based tracking algorithm for intelligent video analysis to assist marine biologists in understanding the complex marine ecosystem in the Ken-Ding sub-tropical coral reef in Taiwan by processing underwater real-time videos recorded in open ocean. One of the most important aspects of marine biology research is the investigation of fish trajectories to identify events of interest such as fish preying, mating, schooling, etc. This task, of course, requires a reliable tracking algorithm able to deal with 1) the difficulties of following fish that have multiple degrees of freedom and 2) the possible varying conditions of the underwater environment. To accommodate these needs, we have developed a tracking algorithm that exploits covariance representation to describe the object’s appearance and statistical information and also to join different types of features such as location, color intensities, derivatives, etc. The accuracy of the algorithm was evaluated by using hand-labeled ground truth data on 30000 frames belonging to ten different videos, achieving an average performance of about 94%, estimated using multiple ratios that provide indication on how good is a tracking algorithm both globally (e.g. counting objects in a fixed range of time) and locally (e.g. in distinguish occlusions among objects).", "title": "" }, { "docid": "3a709dd22392905d05fd4d737597ad4d", "text": "Lung cancer is the most common cancer that cannot be ignored and cause death with late health care. Currently, CT can be used to help doctors detect the lung cancer in the early stages. 
In many cases, the diagnosis of lung cancer depends on the experience of doctors, which may overlook some patients and cause problems. Deep learning has proved to be a popular and powerful method in many medical imaging diagnosis areas. In this paper, three types of deep neural networks (e.g., CNN, DNN, and SAE) are designed for lung cancer classification. Those networks are applied to the CT image classification task with some modification for the benign and malignant lung nodules. Those networks were evaluated on the LIDC-IDRI database. The experimental results show that the CNN network achieved the best performance with an accuracy of 84.15%, sensitivity of 83.96%, and specificity of 84.32%, the best result among the three networks.", "title": "" }, { "docid": "01c74ed5d1bb9020b0bdbe424d5ea566", "text": "Pressure (density) and velocity boundary conditions are studied for 2-D and 3-D lattice Boltzmann BGK models (LBGK) and a new method to specify these conditions is proposed. These conditions are constructed in consistency with the wall boundary condition, based on the idea of bounceback of the non-equilibrium distribution. When these conditions are used together with the incompressible LBGK model [J. Stat. Phys. 81, 35 (1995)] the simulation results recover the analytical solution of the plane Poiseuille flow driven by a pressure (density) difference. The half-way wall bounceback boundary condition is also used with the pressure (density) inlet/outlet conditions proposed in this paper and in Phys. Fluids 8, 2527 (1996) to study 2-D Poiseuille flow and 3-D square duct flow. The numerical results are approximately second-order accurate. The magnitude of the error of the half-way wall bounceback boundary condition is comparable with that of other published boundary conditions and it has better stability behavior. © 1997 American Institute of Physics. [S1070-6631(97)03406-5]", "title": "" }, { "docid": "59c757aa28dcb770ecf5b01dc26ba087", "text": "Demand for clinical decision support systems in medicine and self-diagnostic symptom checkers has substantially increased in recent years. Existing platforms rely on knowledge bases manually compiled through a labor-intensive process or automatically derived using simple pairwise statistics. This study explored an automated process to learn high quality knowledge bases linking diseases and symptoms directly from electronic medical records. Medical concepts were extracted from 273,174 de-identified patient records and maximum likelihood estimation of three probabilistic models was used to automatically construct knowledge graphs: logistic regression, naive Bayes classifier and a Bayesian network using noisy OR gates. A graph of disease-symptom relationships was elicited from the learned parameters and the constructed knowledge graphs were evaluated and validated, with permission, against Google’s manually-constructed knowledge graph and against expert physician opinions. Our study shows that direct and automated construction of high quality health knowledge graphs from medical records using rudimentary concept extraction is feasible. The noisy OR model produces a high quality knowledge graph reaching precision of 0.85 for a recall of 0.6 in the clinical evaluation.
Noisy OR significantly outperforms all tested models across evaluation frameworks (p < 0.01).", "title": "" }, { "docid": "4e2bed31e5406e30ae59981fa8395d5b", "text": "Asynchronous Learning Networks (ALNs) make the process of collaboration more transparent, because a transcript of conference messages can be used to assess individual roles and contributions and the collaborative process itself. This study considers three aspects of ALNs: the design; the quality of the resulting knowledge construction process; and cohesion, role and power network structures. The design is evaluated according to the Social Interdependence Theory of Cooperative Learning. The quality of the knowledge construction process is evaluated through Content Analysis; and the network structures are analyzed using Social Network Analysis of the response relations among participants during online discussions. In this research we analyze data from two three-monthlong ALN academic university courses: a formal, structured, closed forum and an informal, nonstructured, open forum. We found that in the structured ALN, the knowledge construction process reached a very high phase of critical thinking and developed cohesive cliques. The students took on bridging and triggering roles, while the tutor had relatively little power. In the non-structured ALN, the knowledge construction process reached a low phase of cognitive activity; few cliques were constructed; most of the students took on the passive role of teacher-followers; and the tutor was at the center of activity. These differences are statistically significant. We conclude that a well-designed ALN develops significant, distinct cohesion, and role and power structures lead the knowledge construction process to high phases of critical thinking.", "title": "" }, { "docid": "ba3e1e2996e3c2a736bd090605b59ee3", "text": "Learning meaningful representations that maintain the content necessary for a particular task while filtering away detrimental variations is a problem of great interest in machine learning. In this paper, we tackle the problem of learning representations invariant to a specific factor or trait of data. The representation learning process is formulated as an adversarial minimax game. We analyze the optimal equilibrium of such a game and find that it amounts to maximizing the uncertainty of inferring the detrimental factor given the representation while maximizing the certainty of making task-specific predictions. On three benchmark tasks, namely fair and bias-free classification, language-independent generation, and lighting-independent image classification, we show that the proposed framework induces an invariant representation, and leads to better generalization evidenced by the improved performance.", "title": "" }, { "docid": "ba25169eb613823f08f191a6635b3b6c", "text": "The amount of time allocated to physical activity in schools is declining. Time-efficient physical activity solutions that demonstrate their impact on academic achievement-related outcomes are needed to prioritize physical activity within the school curricula. \"FUNtervals\" are 4-min, high-intensity interval activities that use whole-body actions to complement a storyline. The purpose of this study was to (i) explore whether FUNtervals can improve selective attention, an executive function posited to be essential for learning and academic success; and (ii) examine whether this relationship is predicted by students' classroom off-task behaviour. 
Seven grade 3-5 classes (n = 88) were exposed to a single-group, repeated cross-over design where each student's selective attention was compared between no-activity and FUNtervals days. In week 1, students were familiarized with the d2 test of attention and FUNterval activities, and baseline off-task behaviour was observed. In both weeks 2 and 3 students completed the d2 test of attention following either a FUNterval break or a no-activity break. The order of these breaks was randomized and counterbalanced between weeks. Neither motor nor passive off-task behaviour predicted changes in selective attention following FUNtervals; however, a weak relationship was observed for verbal off-task behaviour and improvements in d2 test performance. More importantly, students made fewer errors during the d2 test following FUNtervals. In supporting the priority of physical activity inclusion within schools, FUNtervals, a time efficient and easily implemented physical activity break, can improve selective attention in 9- to 11-year olds.", "title": "" } ]
scidocsrr
5b3c5a1409a66389a211bc0badb9b277
Radio Frequency (RF) Time-of-Flight Ranging for Wireless Sensor Networks
[ { "docid": "741ba628eacb59d7b9f876520406e600", "text": "Awareness of the physical location for each node is required by many wireless sensor network applications. The discovery of the position can be realized utilizing range measurements including received signal strength, time of arrival, time difference of arrival and angle of arrival. In this paper, we focus on localization techniques based on angle of arrival information between neighbor nodes. We propose a new localization and orientation scheme that considers beacon information multiple hops away. The scheme is derived under the assumption of noisy angle measurements. We show that the proposed method achieves very good accuracy and precision despite inaccurate angle measurements and a small number of beacons", "title": "" } ]
[ { "docid": "970b65468b6afdf572dd8759cea3f742", "text": "We propose a framework for ensuring safe behavior of a reinforcement learning agent when the reward function may be difficult to specify. In order to do this, we rely on the existence of demonstrations from expert policies, and we provide a theoretical framework for the agent to optimize in the space of rewards consistent with its existing knowledge. We propose two methods to solve the resulting optimization: an exact ellipsoid-based method and a method in the spirit of the \"follow-the-perturbed-leader\" algorithm. Our experiments demonstrate the behavior of our algorithm in both discrete and continuous problems. The trained agent safely avoids states with potential negative effects while imitating the behavior of the expert in the other states.", "title": "" }, { "docid": "c4f86b84282df841bd5ee7bcca3b01eb", "text": "Image binarization is the process of separation of pixel values into two groups, white as background and black as foreground. Thresholding plays a major in binarization of images. Thresholding can be categorized into global thresholding and local thresholding. In images with uniform contrast distribution of background and foreground like document images, global thresholding is more appropriate. In degraded document images, where considerable background noise or variation in contrast and illumination exists, there exists many pixels that cannot be easily classified as foreground or background. In such cases, binarization with local thresholding is more appropriate. This paper describes a locally adaptive thresholding technique that removes background by using local mean and mean deviation. Normally the local mean computational time depends on the window size. Our technique uses integral sum image as a prior processing to calculate local mean. It does not involve calculations of standard deviations as in other local adaptive techniques. This along with the fact that calculations of mean is independent of window size speed up the process as compared to other local thresholding techniques.", "title": "" }, { "docid": "a5b1c9d83283153cb46f062efec49f10", "text": "We present our experience with QUIC, an encrypted, multiplexed, and low-latency transport protocol designed from the ground up to improve transport performance for HTTPS traffic and to enable rapid deployment and continued evolution of transport mechanisms. QUIC has been globally deployed at Google on thousands of servers and is used to serve traffic to a range of clients including a widely-used web browser (Chrome) and a popular mobile video streaming app (YouTube). We estimate that 7% of Internet traffic is now QUIC. We describe our motivations for developing a new transport, the principles that guided our design, the Internet-scale process that we used to perform iterative experiments on QUIC, performance improvements seen by our various services, and our experience deploying QUIC globally. We also share lessons about transport design and the Internet ecosystem that we learned from our deployment.", "title": "" }, { "docid": "09085472d12ed72d5c0fe27b5eb5e175", "text": "BACKGROUND\nUse of exergames can complement conventional therapy and increase the amount and intensity of visuospatial neglect (VSN) training. 
A series of 9 exergames-games based on therapeutic principles-aimed at improving exploration of the neglected space for patients with VSN symptoms poststroke was developed and tested for its feasibility.\n\n\nOBJECTIVES\nThe goal was to determine the feasibility of the exergames with minimal supervision in terms of (1) implementation of the intervention, including adherence, attrition and safety, and (2) limited efficacy testing, aiming to document possible effects on VSN symptoms in a case series of patients early poststroke.\n\n\nMETHODS\nA total of 7 patients attended the 3-week exergames training program on a daily basis. Adherence of the patients was documented in a training diary. For attrition, the number of participants lost during the intervention was registered. Any adverse events related to the exergames intervention were noted to document safety. Changes in cognitive and spatial exploration skills were measured with the Zürich Maxi Mental Status Inventory and the Neglect Test. Additionally, we developed an Eye Tracker Neglect Test (ETNT) using an infrared camera to detect and measure neglect symptoms pre- and postintervention.\n\n\nRESULTS\nThe median was 14 out of 15 (93%) attended sessions, indicating that the adherence to the exergames training sessions was high. There were no adverse events and no drop-outs during the exergame intervention. The individual cognitive and spatial exploration skills slightly improved postintervention (P=.06 to P=.98) and continued improving at follow-up (P=.04 to P=.92) in 5 out of 7 (71%) patients. Calibration of the ETNT was rather error prone. The ETNT showed a trend for a slight median group improvement from 15 to 16 total located targets (+6%).\n\n\nCONCLUSIONS\nThe high adherence rate and absence of adverse events showed that these exergames were feasible and safe for the participants. The results of the amount of exergames use is promising for future applications and warrants further investigations-for example, in the home setting of patients to augment training frequency and intensity. The preliminary results indicate the potential of these exergames to cause improvements in cognitive and spatial exploration skills over the course of training for stroke patients with VSN symptoms. Thus, these exergames are proposed as a motivating training tool to complement usual care. The ETNT showed to be a promising assessment for quantifying spatial exploration skills. However, further adaptations are needed, especially regarding calibration issues, before its use can be justified in a larger study sample.", "title": "" }, { "docid": "5e756f85b15812daf80221c8b9ae6a96", "text": "PURPOSE\nRural-dwelling cancer survivors (CSs) are at risk for decrements in health and well-being due to decreased access to health care and support resources. This study compares the impact of cancer in rural- and urban-dwelling adult CSs living in 2 regions of the Pacific Northwest.\n\n\nMETHODS\nA convenience sample of posttreatment adult CSs (N = 132) completed the Impact of Cancer version 2 (IOCv2) and the Memorial Symptom Assessment Scale-short form. High and low scorers on the IOCv2 participated in an in-depth interview (n = 19).\n\n\nFINDINGS\nThe sample was predominantly middle-aged (mean age 58) and female (84%). Mean time since treatment completion was 6.7 years. Cancer diagnoses represented included breast (56%), gynecologic (9%), lymphoma (8%), head and neck (6%), and colorectal (5%). 
Comparisons across geographic regions show statistically significant differences in body concerns, worry, negative impact, and employment concerns. Rural-urban differences from interview data include access to health care, care coordination, connecting/community, thinking about death and dying, public/private journey, and advocacy.\n\n\nCONCLUSION\nThe insights into the differences and similarities between rural and urban CSs challenge the prevalent assumptions about rural-dwelling CSs and their risk for negative outcomes. A common theme across the study findings was community. Access to health care may not be the driver of the survivorship experience. Findings can influence health care providers and survivorship program development, building on the strengths of both rural and urban living and the engagement of the survivorship community.", "title": "" }, { "docid": "beff5f56387f416f4bd55fde61203200", "text": "Nutrition assessment is an essential component of the Nutrition Care Process and Model (NCPM), as it is the initial step in developing a comprehensive evaluation of the client’s nutrition history. A comprehensive nutrition assessment requires the ability to observe, interpret, analyze, and infer data to diagnose nutrition problems. This practice paper provides insight into the process by which critical thinking skills are utilized by both registered dietitian nutritionists (RDNs) and dietetic technicians, registered (DTRs).", "title": "" }, { "docid": "6d41ec322f71c32195119807f35fde53", "text": "Input current distortion in the vicinity of input voltage zero crossings of boost single-phase power factor corrected (PFC) ac-dc converters is studied in this paper. Previously known causes for the zero-crossing distortion are reviewed and are shown to be inadequate in explaining the observed input current distortion, especially under high ac line frequencies. A simple linear model is then presented which reveals two previously unknown causes for zero-crossing distortion, namely, the leading phase of the input current and the lack of critical damping in the current loop. Theoretical and practical limitations in reducing the phase lead and increasing the damping factor are discussed. A simple phase compensation technique to reduce the zero-crossing distortion is also presented. Numerical simulation and experimental results are presented to validate the theory.", "title": "" }, { "docid": "6f4fe7bc805c4b635d6c201d8ea1f53c", "text": "In this paper we focus on the automatic identification of bird species from their audio recorded song. Bird monitoring is important to perform several tasks, such as to evaluate the quality of their living environment or to monitor dangerous situations to planes caused by birds near airports. We deal with the bird species identification problem using signal processing and machine learning techniques. First, features are extracted from the bird recorded songs using specific audio treatment, next the problem is performed according to a classical machine learning scenario, where a labeled database of previously known bird songs are employed to create a decision procedure that is used to predict the species of a new bird song. Experiments are conducted in a dataset of recorded songs of bird species which appear in a specific region. The experimental results compare the performance obtained in different situations, encompassing the complete audio signals, as recorded in the field, and short audio segments (pulses) obtained from the signals by a split procedure. 
The influence of the number of classes (bird species) in the identification accuracy is also evaluated.", "title": "" }, { "docid": "a8cdb14a123f12788b5a8a8ca0f5f415", "text": "Medical image data is naturally distributed among clinical institutions. This partitioning, combined with security and privacy restrictions on medical data, imposes limitations on machine learning algorithms in clinical applications, especially for small and newly established institutions. We present InsuLearn: an intuitive and robust open-source (open-source code available at: https://github.com/ DistributedML/InsuLearn) platform designed to facilitate distributed learning (classification and regression) on medical image data, while preserving data security and privacy. InsuLearn is built on ensemble learning, in which statistical models are developed at each institution independently and combined at secure coordinator nodes. InsuLearn protocols are designed such that the liveness of the system is guaranteed as institutions join and leave the network. Coordination is implemented as a cluster of replicated state machines, making it tolerant to individual node failures. We demonstrate that InsuLearn successfully integrates accurate models for horizontally partitioned data while preserving privacy.", "title": "" }, { "docid": "756ea86702a4314fa211afb23c4c63ac", "text": "The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various signal-to-noise levels. In Experiment 1 with two groups of adults, native speakers of Japanese and native speakers of English, the results on both percent visually influenced responses and reaction time supported previous reports of a weaker visual influence for Japanese participants. In Experiment 2, an additional three age groups (6, 8, and 11 years) in each language group were tested. The results showed that the degree of visual influence was low and equivalent for Japanese and English language 6-year-olds, and increased over age for English language participants, especially between 6 and 8 years, but remained the same for Japanese participants. This may be related to the fact that English language adults and older children processed visual speech information relatively faster than auditory information whereas no such inter-modal differences were found in the Japanese participants' reaction times.", "title": "" }, { "docid": "14d77d118aad5ee75b82331dc3db8afd", "text": "Graphical passwords are an alternative to alphanumeric passwords in which users click on images to authenticate themselves rather than type alphanumeric strings. We have developed one such system, called PassPoints, and evaluated it with human users. The results of the evaluation were promising with respect to rmemorability of the graphical password. In this study we expand our human factors testing by studying two issues: the effect of tolerance, or margin of error, in clicking on the password points and the effect of the image used in the password system. In our tolerance study, results show that accurate memory for the password is strongly reduced when using a small tolerance (10 x 10 pixels) around the user's password points. 
This may occur because users fail to encode the password points in memory in the precise manner that is necessary to remember the password over a lapse of time. In our image study we compared user performance on four everyday images. The results indicate that there were few significant differences in performance of the images. This preliminary result suggests that many images may support memorability in graphical password systems.", "title": "" }, { "docid": "4535a5961d6628f2f4bafb1d99821bbb", "text": "The prevalence of diabetes has dramatically increased worldwide due to the vast increase in the obesity rate. Diabetic nephropathy is one of the major complications of type 1 and type 2 diabetes and it is currently the leading cause of end-stage renal disease. Hyperglycemia is the driving force for the development of diabetic nephropathy. It is well known that hyperglycemia increases the production of free radicals resulting in oxidative stress. While increases in oxidative stress have been shown to contribute to the development and progression of diabetic nephropathy, the mechanisms by which this occurs are still being investigated. Historically, diabetes was not thought to be an immune disease; however, there is increasing evidence supporting a role for inflammation in type 1 and type 2 diabetes. Inflammatory cells, cytokines, and profibrotic growth factors including transforming growth factor-β (TGF-β), monocyte chemoattractant protein-1 (MCP-1), connective tissue growth factor (CTGF), tumor necrosis factor-α (TNF-α), interleukin-1 (IL-1), interleukin-6 (IL-6), interleukin-18 (IL-18), and cell adhesion molecules (CAMs) have all been implicated in the pathogenesis of diabetic nephropathy via increased vascular inflammation and fibrosis. The stimulus for the increase in inflammation in diabetes is still under investigation; however, reactive oxygen species are a primary candidate. Thus, targeting oxidative stress-inflammatory cytokine signaling could improve therapeutic options for diabetic nephropathy. The current review will focus on understanding the relationship between oxidative stress and inflammatory cytokines in diabetic nephropathy to help elucidate the question of which comes first in the progression of diabetic nephropathy, oxidative stress, or inflammation.", "title": "" }, { "docid": "ab697c0f7c6a6e0306e8ff00f0c05a8c", "text": "Money laundering is a criminal activity to disguise black money as white money. It is a process by which illegal funds and assets are converted into legitimate funds and assets. Money Laundering occurs in three stages: Placement, Layering, and Integration. It leads to various criminal activities like Political corruption, smuggling, financial frauds, etc. In India there is no successful Anti Money laundering techniques which are available. The Reserve Bank of India (RBI), has issued guidelines to identify the suspicious transactions and send it to Financial Intelligence Unit (FIU). FIU verifies if the transaction is actually suspicious or not. This process is time consuming and not suitable to identify the illegal transactions that occurs in the system. 
To overcome this problem, we propose an efficient Anti Money Laundering technique that is able to identify the traversal path of the laundered money using a Hash based Association approach, and that is successful in identifying the agent and integrator in the layering stage of Money Laundering by a Graph Theoretic Approach.", "title": "" }, { "docid": "77502699d31b0bb13f6070756054fc2d", "text": "This thesis evaluates the integrated information theory (IIT) by looking at how it may answer some central problems of consciousness that the author thinks any theory of consciousness should be able to explain. The problems concerned are the mind-body problem, the hard problem, the explanatory gap, the binding problem, and the problem of objectively detecting consciousness. The IIT is a computational theory of consciousness thought to explain the rise of consciousness. First the mongrel term consciousness is defined to give a clear idea of what is meant by consciousness in this thesis; followed by a presentation of the IIT, its origin, main ideas, and some implications of the theory. Thereafter the problems of consciousness will be presented, and the explanation the IIT gives will be investigated. In the discussion, some issues regarding the theory not previously discussed in the thesis will be raised. The author finds the IIT to hold explanations to each of the problems discussed. Whether the explanations are satisfying is questionable. Keywords: integrated information theory, phenomenal consciousness, subjective experience, mind-body, the hard problem, binding, testing
 AN EVALUATION OF THE IIT !4 Table of Content Introduction 5 Defining Consciousness 6 Introduction to the Integrated Information Theory 8 Historical Background 8 The Approach 9 The Core of the IIT 9 Axioms 11 Postulates 13 The Conscious Mechanism of the IIT 15 Some Key Terms of the IIT 17 The Central Identity of the IIT 19 Some Implications of the IIT 20 Introduction to the Problems of Consciousness 25 The Mind-Body Problem 25 The Hard Problem 27 The Explanatory Gap 28 The Problem With the Problems Above 28 The Binding Problem 30 The Problem of Objectively Detecting Consciousness 31 Evaluation of the IIT Against the Problems of Consciousness 37 The Mind-Body Problem vs. the IIT 38 The Hard Problem vs. the IIT 40 The Explanatory Gap vs. the IIT 42 The Binding Problem vs. the IIT 43 The Problem of Objectively Detecting Consciousness 45 Discussion 50 Conclusion 53 References 54 AN EVALUATION OF THE IIT !5 Introduction Intuitively we like to believe that things which act and behave similarly to ourselves are conscious, things that interact with us on our terms, mimic our facial and bodily expressions, and those that we feel empathy for. But what about things that are superficially different from us, such as other animals and insects, bacteria, groups of people, humanoid robots, the Internet, self-driving cars, smartphones, or grey boxes which show no signs of interaction with their environment? Is it possible that intuition and theory of mind (ToM) may be misleading; that one wrongly associate consciousness with intelligence, human-like behaviour, and ability to react on stimuli? Perhaps we attribute consciousness to things that are not conscious, and that we miss to attribute it to things that really have vivid experiences. To address this question, many theories have been proposed that aim at explaining the emergence of consciousness and to give us tools to identify wherever consciousness may occur. The integrated information theory (IIT) (Tononi, 2004), is one of them. It originates in the dynamic core theory (Tononi & Edelman, 1998) and claims that consciousness is the same as integrated information. While some theories of consciousness only attempt to explain consciousness in neurobiological systems, the IIT is assumed to apply to non-biological systems. Parthemore and Whitby (2014) raise the concern that one may be tempted to reduce consciousness to some quantity X, where X might be e.g. integrated information, neural oscillations (the 40 Hz theory, Crick & Koch, 1990), etc. A system that models one of those theories may prematurely be believed to be conscious argue Parthemore and Whitby (2014). This tendency has been noted among researchers of machine consciousness, of some who have claimed their systems to have achieved at least minimal consciousness (Gamez, 2008a). The aim of this thesis is to take a closer look at the IIT and see how it responds to some of the major problems of consciousness. The focus will be on the mechanisms which AN EVALUATION OF THE IIT !6 the IIT hypothesises gives rise to conscious experience (Oizumi, Albantakis, & Tononi, 2014a), and how it corresponds to those identified by cognitive neurosciences. This thesis begins by offering a working definition of consciousness; that gives a starting point for what we are dealing with. Then it continues with an introduction to the IIT, which is the main focus of this thesis. I have tried to describe the theory in my own words, where some of more complex details not necessary for my argument are left out. 
I have taken some liberties in adapting the terminology to fit better with what I find elsewhere in cognitive neurosciences and consciousness science avoiding distorting the theory. Thereafter follows the problems of consciousness, which a theory of consciousness, such as IIT, should be able to explain. The problems explored in this thesis are the mind-body problem, the hard problem, the explanatory gap, the binding problem and the problem of objectively detecting consciousness. Each problem is used to evaluate the theory by looking at what explanations the theory is providing. Defining Consciousness What is this thing that is called consciousness and what does it mean to be conscious? Science doesn’t seem to provide with one clear definition of consciousness (Cotterill, 2003; Gardelle & Kouider, 2009; Revonsuo, 2010). When lay people talk about consciousness and being conscious they commonly refer to being attentive and aware and having intentions (Malle, 2009). Both John Searle (1990) and Giulio Tononi (Tononi, 2008, 2012a; Oizumi et al., 2014a) refer to consciousness as the thing that disappears when falling into dreamless sleep, or otherwise become unconscious, and reappears when we wake up or begin to dream. The problem with defining the term consciousness is that it seems to point to many different kinds of phenomena (Block, 1995). In an attempt to point it out and pin it down, the AN EVALUATION OF THE IIT !7 usage of the term needs to be narrowed down to fit the intended purpose. Cognition and neuroscientists alike commonly use terms such as non-conscious, unconscious, awake state, lucid dreaming, etc. which all refer to the subjective experience, but of different degrees, levels, and states (Revonsuo, 2009). Commonly used in discussions regarding consciousness are also terms such as reflective consciousness, self-consciousness, access consciousness, and functional consciousness. Those terms have little to do with the subjective experience per se, at best they describe some of the content of an experience, but mostly refer to observed behaviour (Block, 1995). It seems that researchers of artificial machine consciousness often steer away from the subjective experience. Instead, they focus on the use, the functions, and the expressions of consciousness, as it may be perceived by a third person (Gamez, 2008a). In this thesis, the term consciousness is used for the phenomenon of subjective experience, per se. It is what e.g. differs the awake state from dreamless sleep. It is what differs one’s own conscious thought processes from a regular computer’s nonconscious information processing, or one’s mindful thought from unconscious sensory-motoric control and automatic responses. It is what is lost during anaesthesia and epileptic seizures. Without consciousness, there wouldn’t be “something it is like to be” (Nagel, 1974, p. 436) and there would be no one there to experience the world (Tononi, 2008). Without it we would not experience anything. We would not even regard ourselves to be alive. It is the felt raw experience, even before it is attended to, considered and possible to report, i.e. what Block (1995) refers to as phenomenal consciousness. This is also often the starting point of cognitive and neurological theories of consciousness, which try to explain how experience emerge within the brain by exploring the differences between conscious and nonconscious states and processes. 
AN EVALUATION OF THE IIT !8 Introduction to the Integrated Information Theory Integrated information measures how much can be distinguished by the whole above and beyond its parts, and Φ is its symbol. A complex is where Φ reaches its maximum, and therein lives one consciousness—a single entity of experience. (Tononi, 2012b, p. 172) Historical Background The integrated information theory originates in the collected ideas of Tononi, Sporns, and Edelman (1992, 1994). In their early collaborative work, they developed a reentering model of visual binding which considered cortico-cortical connections as the basis for integration (Tononi et al., 1992). Two years later they presented a measure hypothesised to describe the neural complexity of functional integration in the brain (Tononi et al., 1994). The ideas of the reentering model and neural complexity measure developed into the more known dynamic core hypothesis (DCH) of the neural substrate of consciousness (Tononi & Edelman, 1998). The thalamocortical pathways played the foundation of sensory modality integration. In the DCH, a measure of integration based on entropy was introduced, which later became Φ, the measurement of integrated information (Tononi & Sporns, 2003). This laid the foundation for the information integration theory of consciousness (Tononi, 2004). The IIT is under constant development and has since it first was presented undergone three major revisions. The latest, at the time of writing, is referred to as version 3.0 (Oizumi et al., 2014a), which this thesis mostly relies on. The basic philosophical and theoretical assumptions have been preserved throughout the development of the theory. Some of the terminology and mathematics have changed between the versions (Oizumi, Amari, Yanagawa, Fujii, & Tsuchiya, 2015). Axioms and p", "title": "" }, { "docid": "532efb1986d7bbfc05c235271d17ac96", "text": "Now a day's Data mining has a lot of e-Commerce applications. The key problem is how to find useful hidden patterns for better business applications in the retail sector. For the solution of these problems, The Apriori algorithm is one of the most popular data mining approach for finding frequent item sets from a transaction dataset and derive association rules. Rules are the discovered knowledge from the data base. Finding frequent item set (item sets with frequency larger than or equal to a user specified minimum support) is not trivial because of its combinatorial explosion. Once frequent item sets are obtained, it is straightforward to generate association rules with confidence larger than or equal to a user specified minimum confidence. The paper illustrating apriori algorithm on simulated database and finds the association rules on different confidence value.", "title": "" }, { "docid": "d9ce90aa11c47e08c10f3a0666521b51", "text": "Static scheduling of a program represented by a directed task graph on a multiprocessor system to minimize the program completion time is a well-known problem in parallel processing. Since finding an optimal schedule is an NP-complete problem in general, researchers have resorted to devising efficient heuristics. A plethora of heuristics have been proposed based on a wide spectrum of techniques, including branch-and-bound, integer-programming, searching, graph-theory, randomization, genetic algorithms, and evolutionary methods. 
The objective of this survey is to describe various scheduling algorithms and their functionalities in a contrasting fashion as well as examine their relative merits in terms of performance and time-complexity. Since these algorithms are based on diverse assumptions, they differ in their functionalities, and hence are difficult to describe in a unified context. We propose a taxonomy that classifies these algorithms into different categories. We consider 27 scheduling algorithms, with each algorithm explained through an easy-to-understand description followed by an illustrative example to demonstrate its operation. We also outline some of the novel and promising optimization approaches and current research trends in the area. Finally, we give an overview of the software tools that provide scheduling/mapping functionalities.", "title": "" }, { "docid": "b4f06236b0babb6cd049c8914170d7bf", "text": "We propose a simple and efficient method for exploiting synthetic images when training a Deep Network to predict a 3D pose from an image. The ability of using synthetic images for training a Deep Network is extremely valuable as it is easy to create a virtually infinite training set made of such images, while capturing and annotating real images can be very cumbersome. However, synthetic images do not resemble real images exactly, and using them for training can result in suboptimal performance. It was recently shown that for exemplar-based approaches, it is possible to learn a mapping from the exemplar representations of real images to the exemplar representations of synthetic images. In this paper, we show that this approach is more general, and that a network can also be applied after the mapping to infer a 3D pose: At run-time, given a real image of the target object, we first compute the features for the image, map them to the feature space of synthetic images, and finally use the resulting features as input to another network which predicts the 3D pose. Since this network can be trained very effectively by using synthetic images, it performs very well in practice, and inference is faster and more accurate than with an exemplar-based approach. We demonstrate our approach on the LINEMOD dataset for 3D object pose estimation from color images, and the NYU dataset for 3D hand pose estimation from depth maps. We show that it allows us to outperform the state-of-the-art on both datasets.", "title": "" }, { "docid": "b6b58b7a1c5d9112ea24c74539c95950", "text": "We describe a view-management component for interactive 3D user interfaces. By view management, we mean maintaining visual constraints on the projections of objects on the view plane, such as locating related objects near each other, or preventing objects from occluding each other. Our view-management component accomplishes this by modifying selected object properties, including position, size, and transparency, which are tagged to indicate their constraints. For example, some objects may have geometric properties that are determined entirely by a physical simulation and which cannot be modified, while other objects may be annotations whose position and size are flexible.We introduce algorithms that use upright rectangular extents to represent on the view plane a dynamic and efficient approximation of the occupied space containing the projections of visible portions of 3D objects, as well as the unoccupied space in which objects can be placed to avoid occlusion. 
Layout decisions from previous frames are taken into account to reduce visual discontinuities. We present augmented reality and virtual reality examples to which we have applied our approach, including a dynamically labeled and annotated environment.", "title": "" }, { "docid": "6176a2fd4e07d0c72a53c6207af305ca", "text": "At present, Bluetooth Low Energy (BLE) is dominantly used in commercially available Internet of Things (IoT) devices -- such as smart watches, fitness trackers, and smart appliances. Compared to classic Bluetooth, BLE has been simplified in many ways that include its connection establishment, data exchange, and encryption processes. Unfortunately, this simplification comes at a cost. For example, only a star topology is supported in BLE environments and a peripheral (an IoT device) can communicate with only one gateway (e.g. a smartphone, or a BLE hub) at a set time. When a peripheral goes out of range, it loses connectivity to a gateway, and cannot connect and seamlessly communicate with another gateway without user interventions. In other words, BLE connections do not get automatically migrated or handed-off to another gateway. In this paper, we propose a system which brings seamless connectivity to BLE-capable mobile IoT devices in an environment that consists of a network of gateways. Our framework ensures that unmodified, commercial off-the-shelf BLE devices seamlessly and securely connect to a nearby gateway without any user intervention.", "title": "" }, { "docid": "4f74d7e1d7d8a98f0228e0c87c0d85d8", "text": "This paper proposes a novel method for multivehicle detection and tracking using a vehicle-mounted monocular camera. In the proposed method, the features of vehicles are learned as a deformable object model through the combination of a latent support vector machine (LSVM) and histograms of oriented gradients (HOGs). The detection algorithm combines both global and local features of the vehicle as a deformable object model. Detected vehicles are tracked through a particle filter, which estimates the particles' likelihood by using a detection scores map and template compatibility for both root and parts of the vehicle while considering the deformation cost caused by the movement of vehicle parts. Tracking likelihoods are iteratively used as a priori probability to generate vehicle hypothesis regions and update the detection threshold to reduce false negatives of the algorithm presented before. Extensive experiments in urban scenarios showed that the proposed method can achieve an average vehicle detection rate of 97% and an average vehicle-tracking rate of 86% with a false positive rate of less than 0.26%.", "title": "" } ]
scidocsrr
2e2e913d9c5cef5fc3e02a5ecc87cc1c
Analysis of Thrust Characteristics Considering Step-Skew and Overhang Effects in Permanent Magnet Linear Synchronous Motor
[ { "docid": "0fcefddfe877b804095838eb9de9581d", "text": "This paper examines the torque ripple and cogging torque variation in surface-mounted permanent-magnet synchronous motors (PMSMs) with skewed rotor. The effect of slot/pole combinations and magnet shapes on the magnitude and harmonic content of torque waveforms in a PMSM drive has been studied. Finite element analysis and experimental results show that the skewing with steps does not necessarily reduce the torque ripple but may cause it to increase for certain magnet designs and configurations. The electromagnetic torque waveforms, including cogging torque, have been analyzed for four different PMSM configurations having the same envelop dimensions and output requirements.", "title": "" } ]
[ { "docid": "9eb683a1fe85db884e7615222105640d", "text": "OBJECTIVE\nTo evaluate the effect of circumcision on the glans penis sensitivity by comparing the changes of the glans penis vibrotactile threshold between normal men and patients with simple redundant prepuce and among the patients before and after the operation.\n\n\nMETHODS\nThe vibrotactile thresholds were measured at the forefinger and glans penis in 73 normal volunteer controls and 96 patients with simple redundant prepuce before and after circumcision by biological vibration measurement instrument, and the changes in the perception sensitivity of the body surface were analyzed.\n\n\nRESULTS\nThe G/F (glans/finger) indexes in the control and the test group were respectively 2.39 +/- 1.72 and 1.97 +/- 0.71, with no significant difference in between (P > 0.05). And those of the test group were 1.97 +/- 0.71, 2.64 +/- 1.38, 3.09 +/-1.46 and 2.97 +/- 1.20 respectively before and 1, 2 and 3 months after circumcision, with significant difference between pre- and post-operation (P < 0.05).\n\n\nCONCLUSION\nThere is a statistic difference in the glans penis vibration perception threshold between normal men and patients with simple redundant prepuce. The glans penis perception sensitivity decreases after circumcision.", "title": "" }, { "docid": "aa46e7ffcff4bdb6e3cff97b741d7884", "text": "A fake news detection system aims to assist users in detecting and filtering out varieties of potentially deceptive news. The prediction of the chances that a particular news item is intentionally deceptive is based on the analysis of previously seen truthful and deceptive news. A scarcity of deceptive news, available as corpora for predictive modeling, is a major stumbling block in this field of natural language processing (NLP) and deception detection. This paper discusses three types of fake news, each in contrast to genuine serious reporting, and weighs their pros and cons as a corpus for text analytics and predictive modeling. Filtering, vetting, and verifying online information continues to be essential in library and information science (LIS), as the lines between traditional news and online information are blurring.", "title": "" }, { "docid": "83355e7d2db67e42ec86f81909cfe8c1", "text": "everal protocols for routing and forwarding in Wireless Mesh Networks (WMN) have been proposed, such as AODV, OLSR or B.A.T.M.A.N. However, providing support for e.g. flow-based routing where flows of one source take different paths through the network is hard to implement in a unified way using traditional routing protocols. OpenFlow is an emerging technology which makes network elements such as routers or switches programmable via a standardized interface. By using virtualization and flow-based routing, OpenFlow enables a rapid deployment of novel packet forwarding and routing algorithms, focusing on fixed networks. We propose an architecture that integrates OpenFlow with WMNs and provides such flow-based routing and forwarding capabilities. To demonstrate the feasibility of our OpenFlow based approach, we have implemented a simple solution to solve the problem of client mobility in a WMN which handles the fast migration of client addresses (e.g. IP addresses) between Mesh Access Points and the interaction with re-routing without the need for tunneling. 
Measurements from a real mesh testbed (KAUMesh) demonstrate the feasibility of our approach based on the evaluation of forwarding performance, control traffic and rule activation time.", "title": "" }, { "docid": "2b71f722dad9301c20658656d3949a05", "text": "This paper presents a wideband GaN Doherty power amplifier (DPA) in the 2.6 GHz band with a 3 stages Wilkinson splitter where the impedance inverter is based on a tapered line instead of a quarter wave transformer. From the comparison with the same amplifier using a typical quarter-wave transformer as impedance inverter, and keeping both designs the same bandwidth (1.6 to 3.3 GHz, 65% of the fractional bandwidth), the Power Added Efficiency (PAE) is enhanced up to 50% (from 40.46% to 61.17% at 35 dBm input power) in the DPA with a tapered line. Besides, the transducer gain remains practically unaltered.", "title": "" }, { "docid": "4dc8b11b9123c6a25dcf4765d77cb6ca", "text": "Accurate and reliable information about land use and land cover is essential for change detection and monitoring of the specified area. It is also useful in the updating the geographical information about the area. Over the past decade, a significant amount of research has been conducted concerning the application of different classifier and image fusion technique in this area. In this paper, introductions to the land use and land cover classification techniques are given and the results from a number of different techniques are compared. It has been found that, in general fusion technique perform better than either conventional classifier or supervised/unsupervised classification.", "title": "" }, { "docid": "c9a8587ea80bc4c444dcfe98844c5049", "text": "Dealing with multiple labels is a supervised learning problem of increasing importance. However, in some tasks, certain learning algorithms produce a confidence score vector for each label that needs to be classified as relevant or irrelevant. More importantly, multi-label models are learnt in training conditions called operating conditions, which most likely change in other contexts. In this work, we explore the existing thresholding methods of multi-label classification by considering that label costs are operating conditions. This paper provides an empirical comparative study of these approaches by calculating the empirical loss over range of operating conditions. It also contributes two new methods in multilabel classification that have been used in binary classification: score-driven and one optimal.", "title": "" }, { "docid": "147fe3608cef38d9775c7b0e2bc24bf6", "text": "With the advent of online social networks, recommender systems have became crucial for the success of many online applications/services due to their significance role in tailoring these applications to user-specific needs or preferences. Despite their increasing popularity, in general, recommender systems suffer from data sparsity and cold-start problems. To alleviate these issues, in recent years, there has been an upsurge of interest in exploiting social information such as trust relations among users along with the rating data to improve the performance of recommender systems. The main motivation for exploiting trust information in the recommendation process stems from the observation that the ideas we are exposed to and the choices we make are significantly influenced by our social context. However, in large user communities, in addition to trust relations, distrust relations also exist between users. 
For instance, in Epinions, the concepts of personal “web of trust” and personal “block list” allow users to categorize their friends based on the quality of reviews into trusted and distrusted friends, respectively. Hence, it will be interesting to incorporate this new source of information in recommendation as well. In contrast to the incorporation of trust information in recommendation which is thriving, the potential of explicitly incorporating distrust relations is almost unexplored. In this article, we propose a matrix factorization-based model for recommendation in social rating networks that properly incorporates both trust and distrust relationships aiming to improve the quality of recommendations and mitigate the data sparsity and cold-start users issues. Through experiments on the Epinions dataset, we show that our new algorithm outperforms its standard trust-enhanced or distrust-enhanced counterparts with respect to accuracy, thereby demonstrating the positive effect that incorporation of explicit distrust information can have on recommender systems.", "title": "" }, { "docid": "4b1ba99581d537fcbbe291d74b8f23f3", "text": "To put the concept of lean software development in context, it's useful to point out similarities and differences with agile software development. Agile development methods have generally expected system architecture and interaction design to occur outside the development team, or to occur in very small increments within the team. Because of this, agile practices often prove to be insufficient in addressing issues of solution design, user interaction design, and high-level system architecture. Increasingly, agile development practices are being thought of as good ways to organize software development, but insufficient ways to address design. Because design is fundamentally iterative and development is fundamentally iterative, the two disciplines suffer if they are not carefully integrated with each other. Because lean development lays out a set of principles that demand a whole-product, complete life-cycle, cross-functional approach, it's the more likely candidate to guide the combination of design, development, deployment, and validation into a single feedback loop focused on the discovery and delivery of value.", "title": "" }, { "docid": "89dcd15d3f7e2f538af4a2654f144dfb", "text": "E-waste comprises discarded electronic appliances, of which computers and mobile telephones are disproportionately abundant because of their short lifespan. The current global production of E-waste is estimated to be 20-25 million tonnes per year, with most E-waste being produced in Europe, the United States and Australasia. China, Eastern Europe and Latin America will become major E-waste producers in the next ten years. Miniaturisation and the development of more efficient cloud computing networks, where computing services are delivered over the internet from remote locations, may offset the increase in E-waste production from global economic growth and the development of pervasive new technologies. E-waste contains valuable metals (Cu, platinum group) as well as potential environmental contaminants, especially Pb, Sb, Hg, Cd, Ni, polybrominated diphenyl ethers (PBDEs), and polychlorinated biphenyls (PCBs). Burning E-waste may generate dioxins, furans, polycyclic aromatic hydrocarbons (PAHs), polyhalogenated aromatic hydrocarbons (PHAHs), and hydrogen chloride. 
The chemical composition of E-waste changes with the development of new technologies and pressure from environmental organisations on electronics companies to find alternatives to environmentally damaging materials. Most E-waste is disposed in landfills. Effective reprocessing technology, which recovers the valuable materials with minimal environmental impact, is expensive. Consequently, although illegal under the Basel Convention, rich countries export an unknown quantity of E-waste to poor countries, where recycling techniques include burning and dissolution in strong acids with few measures to protect human health and the environment. Such reprocessing initially results in extreme localised contamination followed by migration of the contaminants into receiving waters and food chains. E-waste workers suffer negative health effects through skin contact and inhalation, while the wider community are exposed to the contaminants through smoke, dust, drinking water and food. There is evidence that E-waste associated contaminants may be present in some agricultural or manufactured products for export.", "title": "" }, { "docid": "a696fd5e0328b27d8d952bdadfd6f58c", "text": "Aiming at the problem of low speed of 3D reconstruction of indoor scenes with monocular vision, the color images and depth images of indoor scenes based on ASUS Xtion monocular vision sensor were used for 3D reconstruction. The image feature extraction using the ORB feature detection algorithm, and compared the efficiency of several kinds of classic feature detection algorithm in image matching, Ransac algorithm and ICP algorithm are used to point cloud fusion. Through experiments, a fast 3D reconstruction method for indoor, simple and small-scale static environment is realized. Have good accuracy, robustness, real-time and flexibility.", "title": "" }, { "docid": "e4c1917080ca47fb8d5eb519dbb1d576", "text": "Traditional approaches to community detection, as studied by physicists, sociologists, and more recently computer scientists, aim at simply partitioning the social network graph. However, with the advent of online social networking sites, richer data has become available: beyond the link information, each user in the network is annotated with additional information, for example, demographics, shopping behavior, or interests. In this context, it is therefore important to develop mining methods which can take advantage of all available information. In the case of community detection, this means finding good communities (a set of nodes cohesive in the social graph) which are associated with good descriptions in terms of user information (node attributes).\n Having good descriptions associated to our models make them understandable by domain experts and thus more useful in real-world applications. Another requirement dictated by real-world applications, is to develop methods that can use, when available, any domain-specific background knowledge. In the case of community detection the background knowledge could be a vague description of the communities sought in a specific application, or some prototypical nodes (e.g., good customers in the past), that represent what the analyst is looking for (a community of similar users).\n Towards this goal, in this article, we define and study the problem of finding a diverse set of cohesive communities with concise descriptions. 
We propose an effective algorithm that alternates between two phases: a hill-climbing phase producing (possibly overlapping) communities, and a description induction phase which uses techniques from supervised pattern set mining. Our framework has the nice feature of being able to build well-described cohesive communities starting from any given description or seed set of nodes, which makes it very flexible and easily applicable in real-world applications.\n Our experimental evaluation confirms that the proposed method discovers cohesive communities with concise descriptions in realistic and large online social networks such as Delicious, Flickr, and LastFM.", "title": "" }, { "docid": "d7cd6978cfb8ef53567c3aab3c71d274", "text": "s computing technology increasingly becomes part of our daily activities, we are required to consider what is the future of computing and how will it change our lives? To address this question, we are interested in developing technologies that would allow for ubiquitous sensing and recognition of daily activities in an environment. Such environments will be aware of the activities performed within it and will be capable of supporting these activities without increasing the cognitive load on the users in the space. Toward this end, we are prototyping different types of smart and aware spaces, each supporting a different part of our daily life and each varying in function and detail. Our most significant effort in this direction is the building of the \" Aware Home \" at Georgia Tech. In this article, we outline the research issues we are pursuing toward the building of such smart and aware environments , and especially the Aware Home. We are interested in developing an infrastructure for ubiquitous sensing and recognition of activities in environments. Such sensing will be transparent to everyday activities, while providing the embedded computing infrastructure with an awareness of what is happening in a space. We expect such a ubiquitous sensing infrastructure to support different environments, with varying needs and complexities. These sensors can be mobile or static, configuring their sensing to suit the task at hand while sharing relevant information with other available sensors. This config-urable sensor-net will provide high-end sensory data about the status of the environment, its inhabitants, and the ongoing activities in the environment. To achieve this contextual knowledge of the space that is being sensed and to model the environment and the people within it requires methods for both low-level and high-level signal processing and interpretation. We are also building such signal-understanding methods to process the sensory data captured from these sensors and to model and recognize the space and activities in them. A significant aspect of building an aware environment is to explore easily accessible and more pervasive computing services than are available via traditional desktop computing. Computing and sensing in such environments must be reliable , persistent (always remains on), easy to interact with, and transparent (the user does not know it is there and does not need to search for it). The environment must be aware of the users it is interacting with and be capable of unencumbered and intelligent interaction. …", "title": "" }, { "docid": "4f57356f2431778c1cf6bd4d2119d91e", "text": "This chapter discusses the construction of kernel functions between labeled graphs. 
We provide a unified account of a family of kernels called label sequence kernels that are defined via label sequences generated by graph traversal. For cyclic graphs, dynamic programming techniques cannot simply be applied, because the kernel is based on an infinite dimensional feature space. We show that the kernel computation boils down to obtaining the stationary state of a discrete-time linear system, which is efficiently performed by solving simultaneous linear equations. Promising empirical results are presented in classification of chemical compounds.", "title": "" }, { "docid": "fa2c86d4c0716580415fce8db324fd04", "text": "One of the key elements in describing a software development method is the roles that are assigned to the members of the software team. This article describes our experience in assigning roles to students who are involved in the development of software projects, working in Extreme Programming teams. This experience, which is based on 25 such projects, teaches us that a personal role for each teammate increases personal responsibility while maintaining the essence of the software development method. In this paper we discuss ways in which different software development methods address the place of roles in a software development team. We also share our experience in refining role specifications and suggest a way to achieve and measure progress by using the perspective of the different roles.", "title": "" }, { "docid": "b9aa32d69dc7bcb02ff940285f87c5f0", "text": "In this paper, we propose a novel robust and pragmatic feature selection approach. Unlike those sparse learning based feature selection methods which tackle the approximate problem by imposing sparsity regularization in the objective function, the proposed method only has one `2,1-norm loss term with an explicit `2,0-Norm equality constraint. An efficient algorithm based on augmented Lagrangian method will be derived to solve the above constrained optimization problem to find out the stable local solution. Extensive experiments on four biological datasets show that although our proposed model is not a convex problem, it outperforms the approximate convex counterparts and state-ofart feature selection methods evaluated in terms of classification accuracy by two popular classifiers. What is more, since the regularization parameter of our method has the explicit meaning, i.e. the number of feature selected, it avoids the burden of tuning the parameter, making it a pragmatic feature selection method.", "title": "" }, { "docid": "595cb7698c38b9f5b189ded9d270fe69", "text": "Sentiment Analysis can help to extract knowledge related to opinions and emotions from user generated text information. It can be applied in medical field for patients monitoring purposes. With the availability of large datasets, deep learning algorithms have become a state of the art also for sentiment analysis. However, deep models have the drawback of not being non human-interpretable, raising various problems related to model’s interpretability. Very few work have been proposed to build models that explain their decision making process and actions. In this work, we review the current sentiment analysis approaches and existing explainable systems. 
Moreover, we present a critical review of explainable sentiment analysis models and discussed the insight of applying explainable sentiment analysis in the medical field.", "title": "" }, { "docid": "2ff3238a25fd7055517a2596e5e0cd7c", "text": "Previous approaches for scene text detection have already achieved promising performances across various benchmarks. However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2fps at 720p resolution.", "title": "" }, { "docid": "7793d5173ac66b6c851234a896d7c3ea", "text": "OBJECTIVE\nThe aim of this trial was to investigate the effectiveness of a worksite intervention using kettlebell training to improve musculoskeletal and cardiovascular health.\n\n\nMETHODS\nThis single-blind randomized controlled trial involved 40 adults from occupations with a high prevalence of reported musculoskeletal pain symptoms (mean age 44 years, body mass index 23 kg/m², 85% women, with pain intensity of the neck/shoulders 3.5 and of the low back 2.8 on a scale of 0-10). A blinded assessor took measures at baseline and follow-up. Participants were randomly assigned to training--consisting of ballistic full-body kettlebell exercise 3 times per week for 8 weeks--or a control group. The main outcome measures were pain intensity of the neck/shoulders and low back, isometric muscle strength, and aerobic fitness.\n\n\nRESULTS\nCompared with the control group, pain intensity of the neck/shoulders decreased 2.1 points [95% confidence interval (95% CI) -3.7- -0.4] and pain intensity of the low back decreased 1.4 points (95% CI -2.7- -0.02) in the training group. Compared with the control group, the training group increased muscle strength of the trunk extensors (P<0.001), but not of the trunk flexors and shoulders. Aerobic fitness remained unchanged.\n\n\nCONCLUSIONS\nWorksite intervention using kettlebell training reduces pain in the neck/shoulders and low back and improves muscle strength of the low back among adults from occupations with a high prevalence of reported musculoskeletal pain symptoms. This type of training does not appear to improve aerobic fitness.", "title": "" }, { "docid": "401a2589d97d147757505a7675998929", "text": "This paper reviews the historic of ChaLearn Looking at People (LAP) events. We started in 2011 (with the release of the first Kinect device) to run challenges related to human action/activity and gesture recognition. Since then we have regularly organized events in a series of competitions covering all aspects of visual analysis of humans. 
So far we have organized more than 10 international challenges and events in this field. This paper reviews associated events, and introduces the ChaLearn LAP platform where public resources (including code, data and preprints of papers) related to the organized events are available. We also provide a discussion on perspectives of ChaLearn LAP activities.", "title": "" }, { "docid": "c426832d19409bd842ca98ffef212cb5", "text": "Cybersecurity is among the highest priorities in industries, academia and governments. Cyber-threats information sharing among different organizations has the potential to maximize vulnerabilities discovery at a minimum cost. Cyber-threats information sharing has several advantages. First, it diminishes the chance that an attacker exploits the same vulnerability to launch multiple attacks in different organizations. Second, it reduces the likelihood an attacker can compromise an organization and collect data that will help him launch an attack on other organizations. Cyberspace has numerous interconnections and critical infrastructure owners are dependent on each other's service. This well-known problem of cyber interdependency is aggravated in a public cloud computing platform. The collaborative effort of organizations in developing a countermeasure for a cyber-breach reduces each firm's cost of investment in cyber defense. Despite its multiple advantages, there are costs and risks associated with cyber-threats information sharing. When a firm shares its vulnerabilities with others there is a risk that these vulnerabilities are leaked to the public (or to attackers) resulting in loss of reputation, market share and revenue. Therefore, in this strategic environment the firms committed to share cyber-threats information might not truthfully share information due to their own self-interests. Moreover, some firms acting selfishly may rationally limit their cybersecurity investment and rely on information shared by others to protect themselves. This can result in under investment in cybersecurity if all participants adopt the same strategy. This paper will use game theory to investigate when multiple self-interested firms can invest in vulnerability discovery and share their cyber-threat information. We will apply our algorithm to a public cloud computing platform as one of the fastest growing segments of the cyberspace.", "title": "" } ]
scidocsrr
4a84d186f4fb143ab359152afa4fad95
Personality and IT security: An application of the five-factor model
[ { "docid": "7067fbd4d551320c9054b2b258ea4e8f", "text": "Until the era of the information society, information was a concern mainly for organizations whose line of business demanded a high degree of security. However, the growing use of information technology is affecting the status of information security so that it is gradually becoming an area that plays an important role in our everyday lives. As a result, information security issues should now be regarded on a par with other security issues. Using this assertion as the point of departure, this paper outlines the dimensions of information security awareness, namely its organizational, general public, socio-political, computer ethical and institutional education dimensions, along with the categories (or target groups) within each dimension.", "title": "" }, { "docid": "6a1845e3fd4bfe1f3cf2e96b5bfddb72", "text": "The likelihood that the firm’s information systems are insufficiently protected against certain kinds of damage or loss is known as “systems risk.” Risk can be managed or reduced when managers are aware of the full range of controls available and implement the most effective controls. Unfortunately, they often lack this knowledge and their subsequent actions to cope with systems risk are less effective than they might otherwise be. This is one viable explanation for why losses from computer abuse and computer disasters today are uncomfortably large and still so potentially devastating after many years of attempting to deal with the problem. Results of comparative qualitative studies in two information services Fortune 500 firms identify an approach that can effectively deal with the problem. This theory-based security program includes: (1) use of a security risk planning model, (2) education/training in security awareness, and (3) Countermeasure Matrix analysis.", "title": "" } ]
[ { "docid": "74a35a3ca34a8f07c773de642175094c", "text": "We deal with the problem of ranking news events on a daily basis for large news corpora, an essential building block for news aggregation. News ranking has been addressed in the literature before but with individual news articles as the unit of ranking. However, estimating event importance accurately requires models to quantify current day event importance as well as its significance in the historical context. Consequently, in this paper we show that a cluster of news articles representing an event is a better unit of ranking as it provides an improved estimation of popularity, source diversity and authority cues. In addition, events facilitate quantifying their historical significance by linking them with long-running topics and recent chain of events. Our main contribution in this paper is to provide effective models for improved news event ranking.\n To this end, we propose novel event mining and feature generation approaches for improving estimates of event importance. Finally, we conduct extensive evaluation of our approaches on two large real-world news corpora each of which span for more than a year with a large volume of up to tens of thousands of daily news articles. Our evaluations are large-scale and based on a clean human curated ground-truth from Wikipedia Current Events Portal. Experimental comparison with a state-of-the-art news ranking technique based on language models demonstrates the effectiveness of our approach.", "title": "" }, { "docid": "42e4f07ccb9673b32d7c2368cc013eac", "text": "This paper proposes a framework to aid video analysts in detecting suspicious activity within the tremendous amounts of video data that exists in today's world of omnipresent surveillance video. Ideas and techniques for closing the semantic gap between low-level machine readable features of video data and high-level events seen by a human observer are discussed. An evaluation of the event classification and diction technique is presented and future an experiment to refine this technique is proposed. These experiments are used as a lead to a discussion on the most optimal machine learning algorithm to learn the event representation scheme proposed in this paper.", "title": "" }, { "docid": "189cc09c72686ae7282eef04c1b365f1", "text": "With the rapid growth of the internet as well as increasingly more accessible mobile devices, the amount of information being generated each day is enormous. We have many popular websites such as Yelp, TripAdvisor, Grubhub etc. that offer user ratings and reviews for different restaurants in the world. In most cases, though, the user is just interested in a small subset of the available information, enough to get a general overview of the restaurant and its popular dishes. In this paper, we present a way to mine user reviews to suggest popular dishes for each restaurant. Specifically, we propose a method that extracts and categorize dishes from Yelp restaurant reviews, and then ranks them to recommend the most popular dishes.", "title": "" }, { "docid": "0a59f6399a4e1afe1ce635a8edbf4b14", "text": "With the impact of climate change in India, majority of the agricultural crops are being badly affected interms of their performance over a period of last two decades. Predicting the crop yield well ahead of its harvest would help the policy makers and farmers for taking appropriate measures for marketing and storage. 
Such predictions will also help the associated industries for planning the logistics of their business. Several methods of predicting and modeling crop yields have been developed in the past with varying rate of success, as these don't take into account characteristics of the weather, and are mostly empirical. In the present study a software tool named `Crop Advisor' has been developed as an user friendly web page for predicting the influence of climatic parameters on the crop yields. C4.5 algorithm is used to find out the most influencing climatic parameter on the crop yields of selected crops in selected districts of Madhya Pradesh. This software provides an indication of relative influence of different climatic parameters on the crop yield, other agro-input parameters responsible for crop yield are not considered in this tool, since, application of these input parameters varies with individual fields in space and time.", "title": "" }, { "docid": "49b0842c9b7e6627b12faa1b821d4c19", "text": "Deep neural networks have shown striking progress and obtained state-of-the-art results in many AI research fields in the recent years. However, it is often unsatisfying to not know why they predict what they do. In this paper, we address the problem of interpreting Visual Question Answering (VQA) models. Specifically, we are interested in finding what part of the input (pixels in images or words in questions) the VQA model focuses on while answering the question. To tackle this problem, we use two visualization techniques – guided backpropagation and occlusion – to find important words in the question and important regions in the image. We then present qualitative and quantitative analyses of these importance maps. We found that even without explicit attention mechanisms, VQA models may sometimes be implicitly attending to relevant regions in the image, and often to appropriate words in the question.", "title": "" }, { "docid": "b4fe12100bae66ad460f5b9987332cf1", "text": "We present a deep-learning framework for real-time multiple spatio-temporal (S/T) action localisation and classification. Current state-of-the-art approaches work offline, and are too slow to be useful in real-world settings. To overcome their limitations we introduce two major developments. Firstly, we adopt real-time SSD (Single Shot Multi-Box Detector) CNNs to regress and classify detection boxes in each video frame potentially containing an action of interest. Secondly, we design an original and efficient online algorithm to incrementally construct and label ‘action tubes’ from the SSD frame level detections. As a result, our system is not only capable of performing S/T detection in real time, but can also perform early action prediction in an online fashion. We achieve new state-of-the-art results in both S/T action localisation and early action prediction on the challenging UCF101-24 and J-HMDB-21 benchmarks, even when compared to the top offline competitors. To the best of our knowledge, ours is the first real-time (up to 40fps) system able to perform online S/T action localisation on the untrimmed videos of UCF101-24.", "title": "" }, { "docid": "47aee90be18e5f2b906d97c67f6016e7", "text": "Embedded VLC (Visible Light Communication) has attracted significant research attention in recent years. A reliable and robust VLC system can become one of the IoT communication technologies for indoor environment. VLC could become a wireless technology complementary to existing RF-based technology but with no RF interference. 
However, existing low cost LED based VLC platforms have limited throughput and reliability. In this work, we introduce Purple VLC: a new embedded VLC platform that can achieve 100 kbps aggregate throughput at a distance of 6 meters, which is 6-7x improvement over state-of-the-art. Our design combines I/O offloading in computation, concurrent communication with polarized light, and full-duplexing to offer more than 99% link reliability at a distance of 6 meters.", "title": "" }, { "docid": "8e3090a0fed3e599ac8c9b5430535671", "text": "A wide variety of predictive analytics techniques have been developed in statistics, machine learning and data mining; however, many of these algorithms take a black-box approach in which data is input and future predictions are output with no insight into what goes on during the process. Unfortunately, such a closed system approach often leaves little room for injecting domain expertise and can result in frustration from analysts when results seem spurious or confusing. In order to allow for more human-centric approaches, the visualization community has begun developing methods to enable users to incorporate expert knowledge into the prediction process at all stages, including data cleaning, feature selection, model building and model validation. This paper surveys current progress and trends in predictive visual analytics, identifies the common framework in which predictive visual analytics systems operate, and develops a summarization of the predictive analytics workflow.", "title": "" }, { "docid": "f0e143229e788ab03637e72cfb0bf1d8", "text": "Solid waste management is a key aspect of the environmental management of establishments belonging to the hospitality sector. In this study, we reviewed literature in this area, examining the current status of waste management for the hospitality sector, in general, with a focus on food waste management in particular. We specifically examined the for-profit subdivision of the hospitality sector, comprising primarily of hotels and restaurants. An account is given of the causes of the different types of waste encountered in this sector and what strategies may be used to reduce them. These strategies are further highlighted in terms of initiatives and practices which are already being implemented around the world to facilitate sustainable waste management. We also recommended a general waste management procedure to be followed by properties of the hospitality sector and described how waste mapping, an innovative yet simple strategy, can significantly reduce the waste generation of a hotel. Generally, we found that not many scholarly publications are available in this area of research. More studies need to be carried out on the implementation of sustainable waste management for the hospitality industry in different parts of the world and the challenges and opportunities involved.", "title": "" }, { "docid": "305f385c343a89e566aa13634964992d", "text": "Trend-following (TF) strategies use fixed trading mechanism in order to take advantages from the long-term market moves without regards to the past price performance.In contrast with most prediction tools that stemmed from soft computing such as neural networks to predict a future trend, TF just rides on the current trend pattern to decide on buying or selling. While TF is widely applied in currency markets with a good track record for major currency pairs [1], it is doubtful that if TF can be applied in stock market. 
In this paper a new TF model that features both strategies of evaluating the trend by static and adaptive rules, is created from simulations and later verified on Hong Kong Hang Seng future indices. The model assesses trend profitability from the statistical features of the return distribution of the asset under consideration. The results and examples facilitate some insights on the merits of using the trend following model.", "title": "" }, { "docid": "d761b2718cfcabe37b72768962492844", "text": "In the most recent years, wireless communication networks have been facing a rapidly increasing demand for mobile traffic along with the evolvement of applications that require data rates of several 10s of Gbit/s. In order to enable the transmission of such high data rates, two approaches are possible in principle. The first one is aiming at systems operating with moderate bandwidths at 60 GHz, for example, where 7 GHz spectrum is dedicated to mobile services worldwide. However, in order to reach the targeted date rates, systems with high spectral efficiencies beyond 10 bit/s/Hz have to be developed, which will be very challenging. A second approach adopts moderate spectral efficiencies and requires ultra high bandwidths beyond 20 GHz. Such an amount of unregulated spectrum can be identified only in the THz frequency range, i.e. beyond 300 GHz. Systems operated at those frequencies are referred to as THz communication systems. The technology enabling small integrated transceivers with highly directive, steerable antennas becomes the key challenges at THz frequencies in face of the very high path losses. This paper gives an overview over THz communications, summarizing current research projects, spectrum regulations and ongoing standardization activities.", "title": "" }, { "docid": "b5ca5d8ee536160c293ca52a2f3c4db2", "text": "We present a neural network based shiftreduce CCG parser, the first neural-network based parser for CCG. We also study the impact of neural network based tagging models, and greedy versus beam-search parsing, by using a structured neural network model. Our greedy parser obtains a labeled F-score of 83.27%, the best reported result for greedy CCG parsing in the literature (an improvement of 2.5% over a perceptron based greedy parser) and is more than three times faster. With a beam, our structured neural network model gives a labeled F-score of 85.57% which is 0.6% better than the perceptron based counterpart.", "title": "" }, { "docid": "02c56bf47e680b456b88e7800cb61f8d", "text": "Deep Neural Networks (DNNs) have demonstrated state-of-the-art performance on a broad range of tasks involving natural language, speech, image, and video processing, and are deployed in many real world applications. However, DNNs impose significant computational challenges owing to the complexity of the networks and the amount of data they process, both of which are projected to grow in the future. To improve the efficiency of DNNs, we propose ScaleDeep, a dense, scalable server architecture, whose processing, memory and interconnect subsystems are specialized to leverage the compute and communication characteristics of DNNs. While several DNN accelerator designs have been proposed in recent years, the key difference is that ScaleDeep primarily targets DNN training, as opposed to only inference or evaluation. 
The key architectural features from which ScaleDeep derives its efficiency are: (i) heterogeneous processing tiles and chips to match the wide diversity in computational characteristics (FLOPs and Bytes/FLOP ratio) that manifest at different levels of granularity in DNNs, (ii) a memory hierarchy and 3-tiered interconnect topology that is suited to the memory access and communication patterns in DNNs, (iii) a low-overhead synchronization mechanism based on hardware data-flow trackers, and (iv) methods to map DNNs to the proposed architecture that minimize data movement and improve core utilization through nested pipelining. We have developed a compiler to allow any DNN topology to be programmed onto ScaleDeep, and a detailed architectural simulator to estimate performance and energy. The simulator incorporates timing and power models of ScaleDeep's components based on synthesis to Intel's 14nm technology. We evaluate an embodiment of ScaleDeep with 7032 processing tiles that operates at 600 MHz and has a peak performance of 680 TFLOPs (single precision) and 1.35 PFLOPs (half-precision) at 1.4KW. Across 11 state-of-the-art DNNs containing 0.65M-14.9M neurons and 6.8M-145.9M weights, including winners from 5 years of the ImageNet competition, ScaleDeep demonstrates 6x-28x speedup at iso-power over the state-of-the-art performance on GPUs.", "title": "" }, { "docid": "0576c4553dbfc2bbbe0e0d88afb890b3", "text": "This review covers the toxicology of mercury and its compounds. Special attention is paid to those forms of mercury of current public health concern. Human exposure to the vapor of metallic mercury dates back to antiquity but continues today in occupational settings and from dental amalgam. Health risks from methylmercury in edible tissues of fish have been the subject of several large epidemiological investigations and continue to be the subject of intense debate. Ethylmercury in the form of a preservative, thimerosal, added to certain vaccines, is the most recent form of mercury that has become a public health concern. The review leads to general discussion of evolutionary aspects of mercury, protective and toxic mechanisms, and ends on a note that mercury is still an \"element of mystery.\"", "title": "" }, { "docid": "3bee9a2d5f9e328bb07c3c76c80612fa", "text": "In this paper, we construct a complexity-based morphospace wherein one can study systems-level properties of conscious and intelligent systems based on information-theoretic measures. The axes of this space labels three distinct complexity types, necessary to classify conscious machines, namely, autonomous, cognitive and social complexity. In particular, we use this morphospace to compare biologically conscious agents ranging from bacteria, bees, C. elegans, primates and humans with artificially intelligence systems such as deep networks, multi-agent systems, social robots, AI applications such as Siri and computational systems as Watson. Given recent proposals to synthesize consciousness, a generic complexitybased conceptualization provides a useful framework for identifying defining features of distinct classes of conscious and synthetic systems. Based on current clinical scales of consciousness that measure cognitive awareness and wakefulness, this article takes a perspective on how contemporary artificially intelligent machines and synthetically engineered life forms would measure on these scales. It turns out that awareness and wakefulness can be associated to computational and autonomous complexity respectively. 
Subsequently, building on insights from cognitive robotics, we examine the function that consciousness serves, and argue the role of consciousness as an evolutionary game-theoretic strategy. This makes the case for a third type of complexity necessary for describing consciousness, namely, social complexity. Having identified these complexity types, allows for a representation of both, biological and synthetic systems in a common morphospace. A consequence of this classification is a taxonomy of possible conscious machines. In particular, we identify four types of consciousness, based on embodiment: (i) biological consciousness, (ii) synthetic consciousness, (iii) group consciousness (resulting from group interactions), and (iv) simulated consciousness (embodied by virtual agents within a simulated reality). This taxonomy helps in the investigation of comparative signatures of consciousness across domains, in order to highlight design principles necessary to engineer conscious machines. This is particularly relevant in the light of recent developments at the crossroads of cognitive neuroscience, biomedical engineering, artificial intelligence and biomimetics.", "title": "" }, { "docid": "baa14d5bf6e457487d3630f34b3818d1", "text": "This paper is focused on modelling and control of nonlinear dynamical system Ball & Plate in language Matlab/Simulink. PID/PSD controller is used in closed loop feedback control structure for the purpose of control. The verification of designed PID control algorithms, the same as nonlinear model of dynamical system, is performed with functional blocks of Simulink environment. This paper includes testing of designed PID control algorithms on real model Ball & Plate using multifunction I/O card MF 614, which communicates with PC by the functions of Real Time Toolbox. Visualization of the simulation results is realized by internet applications, which use Matlab Web Server.", "title": "" }, { "docid": "8054bf47593fa139cb9e4c14e336818e", "text": "This paper provides a framework for evaluating healthcare software from a usability perspective. The framework is based on a review of both the healthcare software literature and the general literature on software usability and evaluation. The need for such a framework arises from the proliferation of software packages in the healthcare field, and from an historical focus on the technical and functional aspects, rather than on the usability, of these packages. Healthcare managers are generally unfamiliar with usability concepts, even though usability differences among software can play a significant role in the acceptance and effectiveness of systems. Six major areas of usability are described, and specific criteria which can be used in the software evaluation process are also presented.", "title": "" }, { "docid": "2139c4ffeb8b20e333c1e06b462760ff", "text": "BACKGROUND\nDental esthetics has become a popular topic among all disciplines in dentistry. When a makeover is planned for the esthetic appearance of a patient's teeth, the clinician must have a logical diagnostic approach that results in the appropriate treatment plan. With some patients, the restorative dentist cannot accomplish the correction alone but may require the assistance of other dental disciplines.\n\n\nAPPROACH\nThis article describes an interdisciplinary approach to the diagnosis and management of anterior dental esthetics. 
The authors practice different disciplines in dentistry: restorative care, orthodontics and periodontics. However, for more than 20 years, this team has participated in an interdisciplinary dental study group that focuses on a wide variety of dental problems. One such area has been the analysis of anterior dental esthetic problems requiring interdisciplinary correction. This article will describe a unique approach to interdisciplinary dental diagnosis, beginning with esthetics but encompassing structure, function and biology to achieve an optimal result.\n\n\nCLINICAL IMPLICATIONS\nIf a clinician uses an esthetically based approach to the diagnosis of anterior dental problems, then the outcome of the esthetic treatment plan will be enhanced without sacrificing the structural, functional and biological aspects of the patient's dentition.", "title": "" }, { "docid": "b484eff8a6a0de41089ba43993aeadd4", "text": "Adversarial perturbations can pose a serious threat for deploying machine learning systems. Recent works have shown existence of image-agnostic perturbations that can fool classifiers over most natural images. Existing methods present optimization approaches that solve for a fooling objective with an imperceptibility constraint to craft the perturbations. However, for a given classifier, they generate one perturbation at a time, which is a single instance from the manifold of adversarial perturbations. Also, in order to build robust models, it is essential to explore the manifold of adversarial perturbations. In this paper, we propose for the first time, a generative approach to model the distribution of adversarial perturbations. The architecture of the proposed model is inspired from that of GANs and is trained using fooling and diversity objectives. Our trained generator network attempts to capture the distribution of adversarial perturbations for a given classifier and readily generates a wide variety of such perturbations. Our experimental evaluation demonstrates that perturbations crafted by our model (i) achieve state-of-the-art fooling rates, (ii) exhibit wide variety and (iii) deliver excellent cross model generalizability. Our work can be deemed as an important step in the process of inferring about the complex manifolds of adversarial perturbations.", "title": "" } ]
scidocsrr
2e39740cf430036f4760824a1abd7b6d
A Local Density Based Spatial Clustering Algorithm with Noise
[ { "docid": "44c0237251d54d6ccccd883bf14c6ff6", "text": "In this paper, we propose a new method for indexing large amounts of point and spatial data in highdimensional space. An analysis shows that index structures such as the R*-tree are not adequate for indexing high-dimensional data sets. The major problem of R-tree-based index structures is the overlap of the bounding boxes in the directory, which increases with growing dimension. To avoid this problem, we introduce a new organization of the directory which uses a split algorithm minimizing overlap and additionally utilizes the concept of supernodes. The basic idea of overlap-minimizing split and supernodes is to keep the directory as hierarchical as possible, and at the same time to avoid splits in the directory that would result in high overlap. Our experiments show that for high-dimensional data, the X-tree outperforms the well-known R*-tree and the TV-tree by up to two orders of magnitude.", "title": "" } ]
[ { "docid": "d552b6beeea587bc014a4c31cabee121", "text": "Recent successes of neural networks in solving combinatorial problems and games like Go, Poker and others inspire further attempts to use deep learning approaches in discrete domains. In the field of automated planning, the most popular approach is informed forward search driven by a heuristic function which estimates the quality of encountered states. Designing a powerful and easily-computable heuristics however is still a challenging problem on many domains. In this paper, we use machine learning to construct such heuristic automatically. We train a neural network to predict a minimal number of moves required to solve a given instance of Rubik’s cube. We then use the trained network as a heuristic distance estimator with a standard forward-search algorithm and compare the results with other heuristics. Our experiments show that the learning approach is competitive with state-of-the-art and might be the best choice in some use-case scenarios.", "title": "" }, { "docid": "638518ce174b79f61f7d9f6ad71bc56c", "text": "In this paper, synthesis and design techniques of dual- and triple-passband filters with Chebyshev and quasi-elliptic symmetric frequency responses are proposed and demonstrated for the first time on the basis of substrate integrated waveguide technology. The inverter coupled resonator section is first investigated, and then a dual-passband Chebyshev filter, a triple-passband Chebyshev filter, and a dual-passband quasi-elliptic filter, which consist of the inverter coupled resonator sections, are synthesized from the generalized low-pass prototypes having Chebyshev or quasi-elliptic responses, respectively. Subsequently, theses filters with a symmetric response are designed and implemented using the substrate integrated waveguide scheme over the -band frequency range. The inverter coupled resonator sections composed of side-by-side horizontally oriented substrate integrated waveguide cavities are coupled, in turn, by post-wall irises. 50-Omega microstrip lines are used to directly excite the filters. Measured results are presented and compared to those simulated by Ansoft's High Frequency Structure Simulator (HFSS) software package. A good agreement between the simulated and measured results is observed, which has also validated the proposed concept of design and synthesis with the substrate integration technology.", "title": "" }, { "docid": "9f6f22e320b91838c9be8f56d3f0564d", "text": "We present an approach for ontology population from natural language English texts that extracts RDF triples according to FrameBase, a Semantic Web ontology derived from FrameNet. Processing is decoupled in two independently-tunable phases. First, text is processed by several NLP tasks, including Semantic Role Labeling (SRL), whose results are integrated in an RDF graph of mentions, i.e., snippets of text denoting some entity/fact. Then, the mention graph is processed with SPARQL-like rules using a specifically created mapping resource from NomBank/PropBank/FrameNet annotations to FrameBase concepts, producing a knowledge graph whose content is linked to DBpedia and organized around semantic frames, i.e., prototypical descriptions of events and situations. A single RDF/OWL representation is used where each triple is related to the mentions/tools it comes from. We implemented the approach in PIKES, an open source tool that combines two complementary SRL systems and provides a working online demo. 
We evaluated PIKES on a manually annotated gold standard, assessing precision/recall in (i) populating FrameBase ontology, and (ii) extracting semantic frames modeled after standard predicate models, for comparison with state-of-the-art tools for the Semantic Web. We also evaluated (iii) sampled precision and execution times on a large corpus of 110 K Wikipedia-like pages.", "title": "" }, { "docid": "7de050ef4260ad858a620f9aa773b5a7", "text": "We present DBToaster, a novel query compilation framework for producing high performance compiled query executors that incrementally and continuously answer standing aggregate queries using in-memory views. DBToaster targets applications that require efficient main-memory processing of standing queries (views) fed by high-volume data streams, recursively compiling view maintenance (VM) queries into simple C++ functions for evaluating database updates (deltas). While today’s VM algorithms consider the impact of single deltas on view queries to produce maintenance queries, we recursively consider deltas of maintenance queries and compile to thoroughly transform queries into code. Recursive compilation successively elides certain scans and joins, and eliminates significant query plan interpreter overheads. In this demonstration, we walk through our compilation algorithm, and show the significant performance advantages of our compiled executors over other query processors. We are able to demonstrate 1-3 orders of magnitude improvements in processing times for a financial application and a data warehouse loading application, both implemented across a wide range of database systems, including PostgreSQL, HSQLDB, a commercial DBMS ’A’, the Stanford STREAM engine, and a commercial stream processor ’B’.", "title": "" }, { "docid": "d1d1b85b0675c59f01c61c6f144ee8a7", "text": "We propose a novel adaptive test of goodness-of-fit, with computational cost linear in the number of samples. We learn the test features that best indicate the differences between observed samples and a reference model, by minimizing the false negative rate. These features are constructed via Stein’s method, meaning that it is not necessary to compute the normalising constant of the model. We analyse the asymptotic Bahadur efficiency of the new test, and prove that under a mean-shift alternative, our test always has greater relative efficiency than a previous linear-time kernel test, regardless of the choice of parameters for that test. In experiments, the performance of our method exceeds that of the earlier linear-time test, and matches or exceeds the power of a quadratic-time kernel test. In high dimensions and where model structure may be exploited, our goodness of fit test performs far better than a quadratic-time two-sample test based on the Maximum Mean Discrepancy, with samples drawn from the model.", "title": "" }, { "docid": "25dcc8e71b878bfed01e95160d9b82ef", "text": "Wireless Sensor Networks (WSN) has been a focus for research for several years. WSN enables novel and attractive solutions for information gathering across the spectrum of endeavour including transportation, business, health-care, industrial automation, and environmental monitoring. Despite these advances, the exponentially increasing data extracted from WSN is not getting adequate use due to the lack of expertise, time and money with which the data might be better explored and stored for future use. 
The next generation of WSN will benefit when sensor data is added to blogs, virtual communities, and social network applications. This transformation of data derived from sensor networks into a valuable resource for information hungry applications will benefit from techniques being developed for the emerging Cloud Computing technologies. Traditional High Performance Computing approaches may be replaced or find a place in data manipulation prior to the data being moved into the Cloud. In this paper, a novel framework is proposed to integrate the Cloud Computing model with WSN. Deployed WSN will be connected to the proposed infrastructure. Users request will be served via three service layers (IaaS, PaaS, SaaS) either from the archive, archive is made by collecting data periodically from WSN to Data Centres (DC), or by generating live query to corresponding sensor network.", "title": "" }, { "docid": "4d11eca5601f5128801a8159a154593a", "text": "Polymorphic malware belong to the class of host based threats which defy signature based detection mechanisms. Threat actors use various code obfuscation methods to hide the code details of the polymorphic malware and each dynamic iteration of the malware bears different and new signatures therefore makes its detection harder by signature based antimalware programs. Sandbox based detection systems perform syntactic analysis of the binary files to find known patterns from the un-encrypted segment of the malware file. Anomaly based detection systems can detect polymorphic threats but generate enormous false alarms. In this work, authors present a novel cognitive framework using semantic features to detect the presence of polymorphic malware inside a Microsoft Windows host using a process tree based temporal directed graph. Fractal analysis is performed to find cognitively distinguishable patterns of the malicious processes containing polymorphic malware executables. The main contributions of this paper are; the presentation of a graph theoretic approach for semantic characterization of polymorphism in the operating system's process tree, and the cognitive feature extraction of the polymorphic behavior for detection over a temporal process space.", "title": "" }, { "docid": "330438e58f75c21605cde6c4df1c8802", "text": "Visual surveillance from low-altitude airborne platforms has been widely addressed in recent years. Moving vehicle detection is an important component of such a system, which is a very challenging task due to illumination variance and scene complexity. Therefore, a boosting Histogram Orientation Gradients (boosting HOG) feature is proposed in this paper. This feature is not sensitive to illumination change and shows better performance in characterizing object shape and appearance. Each of the boosting HOG feature is an output of an adaboost classifier, which is trained using all bins upon a cell in traditional HOG features. All boosting HOG features are combined to establish the final feature vector to train a linear SVM classifier for vehicle classification. Compared with classical approaches, the proposed method achieved better performance in higher detection rate, lower false positive rate and faster detection speed.", "title": "" }, { "docid": "ebaf73ec27127016f3327e6a0b88abff", "text": "A hospital is a health care organization providing patient treatment by expert physicians, surgeons and equipments. 
A report from a health care accreditation group says that miscommunication between patients and health care providers is the reason for the gap in providing emergency medical care to people in need. In developing countries, illiteracy is the major key root for deaths resulting from uncertain diseases constituting a serious public health problem. Mentally affected, differently abled and unconscious patients can’t communicate about their medical history to the medical practitioners. Also, Medical practitioners can’t edit or view DICOM images instantly. Our aim is to provide palm vein pattern recognition based medical record retrieval system, using cloud computing for the above mentioned people. Distributed computing technology is coming in the new forms as Grid computing and Cloud computing. These new forms are assured to bring Information Technology (IT) as a service. In this paper, we have described how these new forms of distributed computing will be helpful for modern health care industries. Cloud Computing is germinating its benefit to industrial sectors especially in medical scenarios. In Cloud Computing, IT-related capabilities and resources are provided as services, via the distributed computing on-demand. This paper is concerned with sprouting software as a service (SaaS) by means of Cloud computing with an aim to bring emergency health care sector in an umbrella with physical secured patient records. In framing the emergency healthcare treatment, the crucial thing considered necessary to decide about patients is their previous health conduct records. Thus a ubiquitous access to appropriate records is essential. Palm vein pattern recognition promises a secured patient record access. Likewise our paper reveals an efficient means to view, edit or transfer the DICOM images instantly which was a challenging task for medical practitioners in the past years. We have developed two services for health care. 1. Cloud based Palm vein recognition system 2. Distributed Medical image processing tools for medical practitioners.", "title": "" }, { "docid": "519241b84a8a18cae31a35a291d3bce1", "text": "Recent work in neural machine translation has shown promising performance, but the most effective architectures do not scale naturally to large vocabulary sizes. We propose and compare three variable-length encoding schemes that represent a large vocabulary corpus using a much smaller vocabulary with no loss in information. Common words are unaffected by our encoding, but rare words are encoded using a sequence of two pseudo-words. Our method is simple and effective: it requires no complete dictionaries, learning procedures, increased training time, changes to the model, or new parameters. Compared to a baseline that replaces all rare words with an unknown word symbol, our best variable-length encoding strategy improves WMT English-French translation performance by up to 1.7 BLEU.", "title": "" }, { "docid": "55a758993adaebafee431732f671730d", "text": "BACKGROUND\nEpitheliogenesis imperfecta in horses was first recognized at the beginning of the 20th century when it was proposed that the disease could have a genetic cause and an autosomal recessive inheritance pattern. Electron microscopy studies confirmed that the lesions were characterized by a defect in the lamina propria and the disease was therefore reclassified as epidermolysis bullosa. 
Molecular studies targeted two mutations affecting genes involved in dermal-epidermal junction: an insertion in LAMC2 in Belgians and other draft breeds and one large deletion in LAMA3 in American Saddlebred.\n\n\nCASE PRESENTATION\nA mechanobullous disease was suspected in a newborn, Italian draft horse foal, which presented with multifocal to coalescing erosions and ulceration on the distal extremities. Histological examination of skin biopsies revealed a subepidermal cleft formation and transmission electron microscopy demonstrated that the lamina densa of the basement membrane remained attached to the dermis. According to clinical, histological and ultrastructural findings, a diagnosis of junctional epidermolysis bullosa (JEB) was made. Genetic tests confirmed the presence of 1368insC in LAMC2 in the foal and its relatives.\n\n\nCONCLUSION\nThis is the first report of JEB in Italy. The disease was characterized by typical macroscopic, histologic and ultrastructural findings. Genetic tests confirmed the presence of the 1368insC in LAMC2 in this case: further investigations are required to assess if the mutation could be present at a low frequency in the Italian draft horse population. Atypical breeding practices are responsible in this case and played a role as odds enhancer for unfavourable alleles. Identification of carriers is fundamental in order to prevent economic loss for the horse industry.", "title": "" }, { "docid": "7b4567b9f32795b267f2fb2d39bbee51", "text": "BACKGROUND\nWearable and mobile devices that capture multimodal data have the potential to identify risk factors for high stress and poor mental health and to provide information to improve health and well-being.\n\n\nOBJECTIVE\nWe developed new tools that provide objective physiological and behavioral measures using wearable sensors and mobile phones, together with methods that improve their data integrity. The aim of this study was to examine, using machine learning, how accurately these measures could identify conditions of self-reported high stress and poor mental health and which of the underlying modalities and measures were most accurate in identifying those conditions.\n\n\nMETHODS\nWe designed and conducted the 1-month SNAPSHOT study that investigated how daily behaviors and social networks influence self-reported stress, mood, and other health or well-being-related factors. We collected over 145,000 hours of data from 201 college students (age: 18-25 years, male:female=1.8:1) at one university, all recruited within self-identified social groups. Each student filled out standardized pre- and postquestionnaires on stress and mental health; during the month, each student completed twice-daily electronic diaries (e-diaries), wore two wrist-based sensors that recorded continuous physical activity and autonomic physiology, and installed an app on their mobile phone that recorded phone usage and geolocation patterns. We developed tools to make data collection more efficient, including data-check systems for sensor and mobile phone data and an e-diary administrative module for study investigators to locate possible errors in the e-diaries and communicate with participants to correct their entries promptly, which reduced the time taken to clean e-diary data by 69%. 
We constructed features and applied machine learning to the multimodal data to identify factors associated with self-reported poststudy stress and mental health, including behaviors that can be possibly modified by the individual to improve these measures.\n\n\nRESULTS\nWe identified the physiological sensor, phone, mobility, and modifiable behavior features that were best predictors for stress and mental health classification. In general, wearable sensor features showed better classification performance than mobile phone or modifiable behavior features. Wearable sensor features, including skin conductance and temperature, reached 78.3% (148/189) accuracy for classifying students into high or low stress groups and 87% (41/47) accuracy for classifying high or low mental health groups. Modifiable behavior features, including number of naps, studying duration, calls, mobility patterns, and phone-screen-on time, reached 73.5% (139/189) accuracy for stress classification and 79% (37/47) accuracy for mental health classification.\n\n\nCONCLUSIONS\nNew semiautomated tools improved the efficiency of long-term ambulatory data collection from wearable and mobile devices. Applying machine learning to the resulting data revealed a set of both objective features and modifiable behavioral features that could classify self-reported high or low stress and mental health groups in a college student population better than previous studies and showed new insights into digital phenotyping.", "title": "" }, { "docid": "5ef7a618db00daa44eb6596d65f29e67", "text": "Mobile phones are becoming de facto pervasive devices for people's daily use. This demonstration illustrates a new interaction, Tilt & Touch, to enable a smart phone to be a 3D controller. It exploits capacitive touchscreen and built-in MEMS motion sensors. When people want to navigate in a virtual reality environment on a large display, they can tilt the phone for viewpoint transforming, touch the phone screen for avatar moving, and pinch screen for viewing camera zooming. The virtual objects in the virtual reality environment can be rotated accordingly by tilting the phone.", "title": "" }, { "docid": "62b6c1caae1ff1e957a5377692898299", "text": "We present a cognitively plausible novel framework capable of learning the grounding in visual semantics and the grammar of natural language commands given to a robot in a table top environment. The input to the system consists of video clips of a manually controlled robot arm, paired with natural language commands describing the action. No prior knowledge is assumed about the meaning of words, or the structure of the language, except that there are different classes of words (corresponding to observable actions, spatial relations, and objects and their observable properties). The learning process automatically clusters the continuous perceptual spaces into concepts corresponding to linguistic input. A novel relational graph representation is used to build connections between language and vision. As well as the grounding of language to perception, the system also induces a set of probabilistic grammar rules. The knowledge learned is used to parse new commands involving previously unseen objects.", "title": "" }, { "docid": "2512c057299a86d3e461a15b67377944", "text": "Compressive sensing (CS) is an alternative to Shan-non/Nyquist sampling for the acquisition of sparse or compressible signals. 
Instead of taking periodic samples, CS measures inner products with M random vectors, where M is much smaller than the number of Nyquist-rate samples. The implications of CS are promising for many applications and enable the design of new kinds of analog-to-digital converters, imaging systems, and sensor networks. In this paper, we propose and study a wideband compressive radio receiver (WCRR) architecture that can efficiently acquire and track FM and other narrowband signals that live within a wide frequency bandwidth. The receiver operates below the Nyquist rate and has much lower complexity than either a traditional sampling system or CS recovery system. Our methods differ from most standard approaches to the problem of CS recovery in that we do not assume that the signals of interest are confined to a discrete set of frequencies, and we do not rely on traditional recovery methods such as l1-minimization. Instead, we develop a simple detection system that identifies the support of the narrowband FM signals and then applies compressive filtering techniques based on discrete prolate spheroidal sequences to cancel interference and isolate the signals. Lastly, a compressive phase-locked loop (PLL) directly recovers the FM message signals.", "title": "" }, { "docid": "e3f392ea43d435e08dc8996902fb6349", "text": "In nanopore sequencing devices, electrolytic current signals are sensitive to base modifications, such as 5-methylcytosine (5-mC). Here we quantified the strength of this effect for the Oxford Nanopore Technologies MinION sequencer. By using synthetically methylated DNA, we were able to train a hidden Markov model to distinguish 5-mC from unmethylated cytosine. We applied our method to sequence the methylome of human DNA, without requiring special steps for library preparation.", "title": "" }, { "docid": "75224d6b143cb7987339ad864d9a91d6", "text": "Videos contain very rich semantics and are intrinsically multimodal. In this paper, we study the challenging task of classifying videos according to their high-level semantics such as human actions or complex events. Although extensive efforts have been paid to study this problem, most existing works combined multiple features using simple fusion strategies and neglected the exploration of inter-class semantic relationships. In this paper, we propose a novel unified framework that jointly learns feature relationships and exploits the class relationships for improved video classification performance. Specifically, these two types of relationships are learned and utilized by rigorously imposing regularizations in a deep neural network (DNN). Such a regularized DNN can be efficiently launched using a GPU implementation with an affordable training cost. Through arming the DNN with better capability of exploring both the inter-feature and the inter-class relationships, the proposed regularized DNN is more suitable for identifying video semantics. With extensive experimental evaluations, we demonstrate that the proposed framework exhibits superior performance over several state-of-the-art approaches. 
On the well-known Hollywood2 and Columbia Consumer Video benchmarks, we obtain to-date the best reported results: 65.7% and 70.6% respectively in terms of mean average precision.", "title": "" }, { "docid": "93a3895a03edcb50af74db901cb16b90", "text": "OBJECT\nBecause lumbar magnetic resonance (MR) imaging fails to identify a treatable cause of chronic sciatica in nearly 1 million patients annually, the authors conducted MR neurography and interventional MR imaging in 239 consecutive patients with sciatica in whom standard diagnosis and treatment failed to effect improvement.\n\n\nMETHODS\nAfter performing MR neurography and interventional MR imaging, the final rediagnoses included the following: piriformis syndrome (67.8%), distal foraminal nerve root entrapment (6%), ischial tunnel syndrome (4.7%), discogenic pain with referred leg pain (3.4%), pudendal nerve entrapment with referred pain (3%), distal sciatic entrapment (2.1%), sciatic tumor (1.7%), lumbosacral plexus entrapment (1.3%), unappreciated lateral disc herniation (1.3%), nerve root injury due to spinal surgery (1.3%), inadequate spinal nerve root decompression (0.8%), lumbar stenosis (0.8%), sacroiliac joint inflammation (0.8%), lumbosacral plexus tumor (0.4%), sacral fracture (0.4%), and no diagnosis (4.2%). Open MR-guided Marcaine injection into the piriformis muscle produced the following results: no response (15.7%), relief of greater than 8 months (14.9%), relief lasting 2 to 4 months with continuing relief after second injection (7.5%), relief for 2 to 4 months with subsequent recurrence (36.6%), and relief for 1 to 14 days with full recurrence (25.4%). Piriformis surgery (62 operations; 3-cm incision, transgluteal approach, 55% outpatient; 40% with local or epidural anesthesia) resulted in excellent outcome in 58.5%, good outcome in 22.6%, limited benefit in 13.2%, no benefit in 3.8%, and worsened symptoms in 1.9%.\n\n\nCONCLUSIONS\nThis Class A quality evaluation of MR neurography's diagnostic efficacy revealed that piriformis muscle asymmetry and sciatic nerve hyperintensity at the sciatic notch exhibited a 93% specificity and 64% sensitivity in distinguishing patients with piriformis syndrome from those without who had similar symptoms (p < 0.01). Evaluation of the nerve beyond the proximal foramen provided eight additional diagnostic categories affecting 96% of these patients. More than 80% of the population good or excellent functional outcome was achieved.", "title": "" }, { "docid": "52d2ff16f6974af4643a15440ae09fec", "text": "The adoption of Course Management Systems (CMSs) for web-based instruction continues to increase in today’s higher education. A CMS is a software program or integrated platform that contains a series of web-based tools to support a number of activities and course management procedures (Severson, 2004). Examples of Course Management Systems are Blackboard, WebCT, eCollege, Moodle, Desire2Learn, Angel, etc. An argument for the adoption of elearning environments using CMSs is the flexibility of such environments when reaching out to potential learners in remote areas where brick and mortar institutions are non-existent. It is also believed that e-learning environments can have potential added learning benefits and can improve students’ and educators’ self-regulation skills, in particular their metacognitive skills. 
In spite of this potential to improve learning by means of using a CMS for the delivery of e-learning, the features and functionalities that have been built into these systems are often underutilized. As a consequence, the created learning environments in CMSs do not adequately scaffold learners to improve their selfregulation skills. In order to support the improvement of both the learners’ subject matter knowledge and learning strategy application, the e-learning environments within CMSs should be designed to address learners’ diversity in terms of learning styles, prior knowledge, culture, and self-regulation skills. Self-regulative learners are learners who can demonstrate ‘personal initiative, perseverance and adaptive skill in pursuing learning’ (Zimmerman, 2002). Self-regulation requires adequate monitoring strategies and metacognitive skills. The created e-learning environments should encourage the application of learners’ metacognitive skills by prompting learners to plan, attend to relevant content, and monitor and evaluate their learning. This position paper sets out to inform policy makers, educators, researchers, and others of the importance of a metacognitive e-learning approach when designing instruction using Course Management Systems. Such a metacognitive approach will improve the utilization of CMSs to support learners on their path to self-regulation. We argue that a powerful CMS incorporates features and functionalities that can provide extensive scaffolding to learners and support them in becoming self-regulated learners. Finally, we believe that extensive training and support is essential if educators are expected to develop and implement CMSs as powerful learning tools.", "title": "" }, { "docid": "105e7c3ee917db6aeb51feb6c93a0396", "text": "Efficient beam alignment is a crucial component in millimeter wave systems with analog beamforming, especially in fast-changing vehicular settings. This paper proposes to use the vehicle's position (e.g., available via GPS) to query a multipath fingerprint database, which provides prior knowledge of potential pointing directions for reliable beam alignment. The approach is the inverse of fingerprinting localization, where the measured multipath signature is compared to the fingerprint database to retrieve the most likely position. The power loss probability is introduced as a metric to quantify misalignment accuracy and is used for optimizing candidate beam selection. Two candidate beam selection methods are developed, where one is a heuristic while the other minimizes the misalignment probability. The proposed beam alignment is evaluated using realistic channels generated from a commercial ray-tracing simulator. Using the generated channels, an extensive investigation is provided, which includes the required measurement sample size to build an effective fingerprint, the impact of measurement noise, the sensitivity to changes in traffic density, and beam alignment overhead comparison with IEEE 802.11ad as the baseline. Using the concept of beam coherence time, which is the duration between two consecutive beam alignments, and parameters of IEEE 802.11ad, the overhead is compared in the mobility context. The results show that while the proposed approach provides increasing rates with larger antenna arrays, IEEE 802.11ad has decreasing rates due to the higher beam training overhead that eats up a large portion of the beam coherence time, which becomes shorter with increasing mobility.", "title": "" } ]
scidocsrr
4cf5e23bfa399b15551e78f8fdee0a3b
Multimodal semi-automated affect detection from conversational cues, gross body language, and facial features
[ { "docid": "6f0283efa932663c83cc2c63d19fd6cf", "text": "Most research that explores the emotional state of users of spoken dialog systems does not fully utilize the contextual nature that the dialog structure provides. This paper reports results of machine learning experiments designed to automatically classify the emotional state of user turns using a corpus of 5,690 dialogs collected with the “How May I Help You” spoken dialog system. We show that augmenting standard lexical and prosodic features with contextual features that exploit the structure of spoken dialog and track user state increases classification accuracy by 2.6%.", "title": "" } ]
[ { "docid": "cfb7a8e268662a4e442dc33c8978585b", "text": "Air Traffic Control (ATC) plays a crucial role in the modern air transportation system. As a decentralized system, every control sector in the ATC network system needs to use all sorts of available information to manage local air traffic in a safe, smooth and cost-efficient way. A key issue is: how each individual ATC sector should use global traffic information to make local ATC decisions, such that the global air traffic, not just the local, can be improved. This paper reports a simulation study on ATC strategies aiming to address the above issue. The coming-in traffic to sectors is the focus, and the ATC strategy means how to define and apply various local ATC rules, such as first-come-first-served rule, to the coming-in traffic according to the global traffic information. A simplified ATC network model is set up and a software simulation system is then developed. The simulation results reveal that, even for a same set of ATC rules, a bad strategy of applying them can cause heavy traffic congestion, while a good strategy can significantly reduce delays, improve safety, and increase efficiency of using airspace.", "title": "" }, { "docid": "7b5be6623ad250bea3b84c86c6fb0000", "text": "HTTP video streaming, employed by most of the video-sharing websites, allows users to control the video playback using, for example, pausing and switching the bit rate. These user-viewing activities can be used to mitigate the temporal structure impairments of the video quality. On the other hand, other activities, such as mouse movement, do not help reduce the impairment level. In this paper, we have performed subjective experiments to analyze user-viewing activities and correlate them with network path performance and user quality of experience. The results show that network measurement alone may miss important information about user dissatisfaction with the video quality. Moreover, video impairments can trigger user-viewing activities, notably pausing and reducing the screen size. By including the pause events into the prediction model, we can increase its explanatory power.", "title": "" }, { "docid": "5945081c099c883d238dca2a1dfc821e", "text": "Simulations using IPCC (Intergovernmental Panel on Climate Change)-class climate models are subject to fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5 % of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We applied support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicted model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures were determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations were the major sources of the failures. 
This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.", "title": "" }, { "docid": "4f3b56706fe6ac8f277fe6df5d2c7936", "text": "Berle and Means (1932) claimed that “modern” corporations in the United States had diffuse ownership. Their claim characterizes regulated electric utilities until the mid-1990s. But following the 1992 EPACT deregulation, block ownership in utilities increased sharply, relative to a matched sample of non-utilities, which already had large blocks by the 90s. The post-EPACT blocks in utilities were not long-term investments by monitors but short duration speculative investments. With the financial crisis of 2008, large active investors are replaced by asset managers. In sum, dispersed ownership arose through the regulatory protections introduced in the 1930s and disappeared following their repeal. 1 We owe special thanks to Nicholas Beauchamp, Katrina Evtimova, Daniel Kangmin Ko, Umberto Mignozzetti, Jin Nie, Peter Ryan, Becci Weiss, and Wei Xiong for superb research assistance. We are also grateful to Jas Sekhon and John Henderson for advice on matching. We thank Colin Mayer and Wei Jiang for their suggestions at early stages of our project. We are grateful to the following research centers at Columbia University for their financial support: the Center for Global Economic Governance, the Chazen Institute, the Institute for Social and Economic Research and Policy, and the Richman Center. 2", "title": "" }, { "docid": "964f4f8c14432153d6001d961a1b5294", "text": "Although there are numerous search engines in the Web environment, no one could claim producing reliable results in all conditions. This problem is becoming more serious considering the exponential growth of the number of Web resources. In the response to these challenges, the meta-search engines are introduced to enhance the search process by devoting some outstanding search engines as their information resources. In recent years, some approaches are proposed to handle the result combination problem which is the fundamental problem in the meta-search environment. In this paper, a new merging/re-ranking method is introduced which uses the characteristics of the Web co-citation graph that is constructed from search engines and returned lists. The information extracted from the co-citation graph, is combined and enriched by the users’ click-through data as their implicit feedback in an adaptive framework. Experimental results show a noticeable improvement against the basic method as well as some well-known meta-search engines.", "title": "" }, { "docid": "d75f9c632d197040c7f6d2939b19c215", "text": "OBJECTIVE\nTo understand belief in a specific scientific claim by studying the pattern of citations among papers stating it.\n\n\nDESIGN\nA complete citation network was constructed from all PubMed indexed English literature papers addressing the belief that beta amyloid, a protein accumulated in the brain in Alzheimer's disease, is produced by and injures skeletal muscle of patients with inclusion body myositis. Social network theory and graph theory were used to analyse this network.\n\n\nMAIN OUTCOME MEASURES\nCitation bias, amplification, and invention, and their effects on determining authority.\n\n\nRESULTS\nThe network contained 242 papers and 675 citations addressing the belief, with 220,553 citation paths supporting it. 
Unfounded authority was established by citation bias against papers that refuted or weakened the belief; amplification, the marked expansion of the belief system by papers presenting no data addressing it; and forms of invention such as the conversion of hypothesis into fact through citation alone. Extension of this network into text within grants funded by the National Institutes of Health and obtained through the Freedom of Information Act showed the same phenomena present and sometimes used to justify requests for funding.\n\n\nCONCLUSION\nCitation is both an impartial scholarly method and a powerful form of social communication. Through distortions in its social use that include bias, amplification, and invention, citation can be used to generate information cascades resulting in unfounded authority of claims. Construction and analysis of a claim specific citation network may clarify the nature of a published belief system and expose distorted methods of social citation.", "title": "" }, { "docid": "7ca1c9096c6176cb841ae7f0e7262cb7", "text": "“Industry 4.0” is recognized as the future of industrial production in which concepts as Smart Factory and Decentralized Decision Making are fundamental. This paper proposes a novel strategy to support decentralized decision, whilst identifying opportunities and challenges of Industry 4.0 contextualizing the potential that represents industrial digitalization and how technological advances can contribute for a new perspective on manufacturing production. It is analysed a set of barriers to the full implementation of Industry 4.0 vision, identifying areas in which decision support is vital. Then, for each of the identified areas, the authors propose a strategy, characterizing it together with the level of complexity that is involved in the different processes. The strategies proposed are derived from the needs of two of Industry 4.0 main characteristics: horizontal integration and vertical integration. For each case, decision approaches are proposed concerning the type of decision required (strategic, tactical, operational and real-time). Validation results are provided together with a discussion on the main challenges that might be an obstacle for a successful decision strategy.", "title": "" }, { "docid": "44bbc67f44f4f516db97b317ae16a22a", "text": "Although the number of occupational therapists working in mental health has dwindled, the number of people who need our services has not. In our tendency to cling to a medical model of service provision, we have allowed the scope and content of our services to be limited to what has been supported within this model. A social model that stresses functional adaptation within the community, exemplified in psychosocial rehabilitation, offers a promising alternative. A strongly proactive stance is needed if occupational therapists are to participate fully. Occupational therapy can survive without mental health specialists, but a large and deserving population could ultimately be deprived of a valuable service.", "title": "" }, { "docid": "7d017a5a6116a08cc9009a2f009af120", "text": "Route Designer, version 1.0, is a new retrosynthetic analysis package that generates complete synthetic routes for target molecules starting from readily available starting materials. Rules describing retrosynthetic transformations are automatically generated from reaction databases, which ensure that the rules can be easily updated to reflect the latest reaction literature. 
These rules are used to carry out an exhaustive retrosynthetic analysis of the target molecule, in which heuristics are used to mitigate the combinatorial explosion. Proposed routes are prioritized by an empirical rating algorithm to present a diverse profile of the most promising solutions. The program runs on a server with a web-based user interface. An overview of the system is presented together with examples that illustrate Route Designer's utility.", "title": "" }, { "docid": "c39063eadab37a16013076644a73216f", "text": "Abundantly expressed in fetal tissues and adult muscle, the developmentally regulated H19 long noncoding RNA (lncRNA) has been implicated in human genetic disorders and cancer. However, how H19 acts to regulate gene function has remained enigmatic, despite the recent implication of its encoded miR-675 in limiting placental growth. We noted that vertebrate H19 harbors both canonical and noncanonical binding sites for the let-7 family of microRNAs, which plays important roles in development, cancer, and metabolism. Using H19 knockdown and overexpression, combined with in vivo crosslinking and genome-wide transcriptome analysis, we demonstrate that H19 modulates let-7 availability by acting as a molecular sponge. The physiological significance of this interaction is highlighted in cultures in which H19 depletion causes precocious muscle differentiation, a phenotype recapitulated by let-7 overexpression. Our results reveal an unexpected mode of action of H19 and identify this lncRNA as an important regulator of the major let-7 family of microRNAs.", "title": "" }, { "docid": "0a3bb33d5cff66346a967092202737ab", "text": "An Li-ion battery charger based on a charge-control buck regulator operating at 2.2 MHz is implemented in 180 nm CMOS technology. The novelty of the proposed charge-control converter consists of regulating the average output current by only sensing a portion of the inductor current and using an adaptive reference voltage. By adopting this approach, the charger average output current is set to a constant value of 900 mA regardless of the battery voltage variation. In constant-voltage (CV) mode, a feedback loop is established in addition to the preexisting current control loop, preserving the smoothness of the output voltage at the transition from constant-current (CC) to CV mode. A small-signal model has been developed to analyze the system stability and subharmonic oscillations at low current levels. Transistor-level simulations of the proposed switching charger are presented. The output voltage ranges from 2.1 to 4.2 V, and the power efficiency at 900 mA has been measured to be 86% for an input voltage of 10 V. The accuracy of the output current using the proposed sensing technique is 9.4% at 10 V.", "title": "" }, { "docid": "78a0898f35113547cdc3adb567ad7afb", "text": "Phishing is a form of online identity theft. Phishers use social engineering to steal victims' personal identity data and financial account credentials. Social engineering schemes use spoofed e-mails to lure unsuspecting victims into counterfeit websites designed to trick recipients into divulging financial data such as credit card numbers, account usernames, passwords and social security numbers. This is called a deceptive phishing attack. In this paper, a thorough overview of a deceptive phishing attack and its countermeasure techniques, which is called anti-phishing, is presented. 
Firstly, technologies used by phishers and the definition, classification and future works of deceptive phishing attacks are discussed. Following with the existing anti-phishing techniques in literatures and research-stage technologies are shown, and a thorough analysis which includes the advantages and shortcomings of countermeasures is given. At last, we show the research of why people fall for phishing attack.", "title": "" }, { "docid": "cae269a1eee20846aa2ea83cbf1d0ecc", "text": "Metformin has utility in cancer prevention and treatment, though the mechanisms for these effects remain elusive. Through genetic screening in C. elegans, we uncover two metformin response elements: the nuclear pore complex (NPC) and acyl-CoA dehydrogenase family member-10 (ACAD10). We demonstrate that biguanides inhibit growth by inhibiting mitochondrial respiratory capacity, which restrains transit of the RagA-RagC GTPase heterodimer through the NPC. Nuclear exclusion renders RagC incapable of gaining the GDP-bound state necessary to stimulate mTORC1. Biguanide-induced inactivation of mTORC1 subsequently inhibits growth through transcriptional induction of ACAD10. This ancient metformin response pathway is conserved from worms to humans. Both restricted nuclear pore transit and upregulation of ACAD10 are required for biguanides to reduce viability in melanoma and pancreatic cancer cells, and to extend C. elegans lifespan. This pathway provides a unified mechanism by which metformin kills cancer cells and extends lifespan, and illuminates potential cancer targets. PAPERCLIP.", "title": "" }, { "docid": "a4343ae3aa7d793e5c8483550c04a623", "text": "The availability of massive data and computing power allowing for effective data driven neural approaches is having a major impact on machine learning and information retrieval research, but these models have a basic problem with efficiency. Current neural ranking models are implemented as multistage rankers: for efficiency reasons, the neural model only re-ranks the top ranked documents retrieved by a first-stage efficient ranker in response to a given query. Neural ranking models learn dense representations causing essentially every query term to match every document term, making it highly inefficient or intractable to rank the whole collection. The reliance on a first stage ranker creates a dual problem: First, the interaction and combination effects are not well understood. Second, the first stage ranker serves as a \"gate-keeper\" or filter, effectively blocking the potential of neural models to uncover new relevant documents. In this work, we propose a standalone neural ranking model (SNRM) by introducing a sparsity property to learn a latent sparse representation for each query and document. This representation captures the semantic relationship between the query and documents, but is also sparse enough to enable constructing an inverted index for the whole collection. We parameterize the sparsity of the model to yield a retrieval model as efficient as conventional term based models. Our model gains in efficiency without loss of effectiveness: it not only outperforms the existing term matching baselines, but also performs similarly to the recent re-ranking based neural models with dense representations. Our model can also take advantage of pseudo-relevance feedback for further improvements. 
More generally, our results demonstrate the importance of sparsity in neural IR models and show that dense representations can be pruned effectively, giving new insights about essential semantic features and their distributions.", "title": "" }, { "docid": "e97f74244a032204e49d9306032f09a7", "text": "For the discovery of biomarkers in the retinal vasculature it is essential to classify vessels into arteries and veins. We automatically classify retinal vessels as arteries or veins based on colour features using a Gaussian Mixture Model, an Expectation-Maximization (GMM-EM) unsupervised classifier, and a quadrant-pairwise approach. Classification is performed on illumination-corrected images. 406 vessels from 35 images were processed resulting in 92% correct classification (when unlabelled vessels are not taken into account) as compared to 87.6%, 90.08%, and 88.28% reported in [12] [14] and [15]. The classifier results were compared against two trained human graders to establish performance parameters to validate the success of classification method. The proposed system results in specificity of (0.8978, 0.9591) and precision (positive predicted value) of (0.9045, 0.9408) as compared to specificity of (0.8920, 0.7918) and precision of (0.8802, 0.8118) for (arteries, veins) respectively as reported in [13]. The classification accuracy was found to be 0.8719 and 0.8547 for veins and arteries, respectively.", "title": "" }, { "docid": "6d8239638a5581958071f4fb78f0596b", "text": "This article presents the formal semantics of a large subset of the C language called Clight. Clight includes pointer arithmetic, struct and union types, C loops and structured switch statements. Clight is the source language of the CompCert verified compiler. The formal semantics of Clight is a big-step operational semantics that observes both terminating and diverging executions and produces traces of input/output events. The formal semantics of Clight is mechanized using the Coq proof assistant. In addition to the semantics of Clight, this article describes its integration in the CompCert verified compiler and several ways by which the semantics was validated.", "title": "" }, { "docid": "d5233cdbe0044f2296be6136f459edcf", "text": "Road detection is one of the key issues of scene understanding for Advanced Driving Assistance Systems (ADAS). Recent approaches has addressed this issue through the use of different kinds of sensors, features and algorithms. KITTI-ROAD benchmark has provided an open-access dataset and standard evaluation mean for road area detection. In this paper, we propose an improved road detection algorithm that provides a pixel-level confidence map. The proposed approach is inspired from our former work based on road feature extraction using illuminant intrinsic image and plane extraction from v-disparity map segmentation. In the former research, detection results of road area are represented by binary map. The novelty of this improved algorithm is to introduce likelihood theory to build a confidence map of road detection. Such a strategy copes better with ambiguous environments, compared to a simple binary map. Evaluations and comparisons of both, binary map and confidence map, have been done using the KITTI-ROAD benchmark.", "title": "" }, { "docid": "2b745b41b0495ab7adad321080ce2228", "text": "In any teaching and learning setting, there are some variables that play a highly significant role in both teachers’ and learners’ performance. 
Two of these influential psychological domains in educational context include self-efficacy and burnout. This study is conducted to investigate the relationship between the self-efficacy of Iranian teachers of English and their reports of burnout. The data was collected through application of two questionnaires. The Maslach Burnout Inventory (MBI; Maslach& Jackson 1981, 1986) and Teacher Efficacy Scales (Woolfolk& Hoy, 1990) were administered to ten university teachers. After obtaining the raw data, the SPSS software (version 16) was used to change the data into numerical interpretable forms. In order to determine the relationship between self-efficacy and teachers’ burnout, correlational analysis was employed. The results showed that participants’ self-efficacy has a reverse relationship with their burnout.", "title": "" }, { "docid": "1d6b58df486d618341cea965724a7da9", "text": "The focus on human capital as a driver of economic growth for developing countries has led to undue attention on school attainment. Developing countries have made considerable progress in closing the gap with developed countries in terms of school attainment, but recent research has underscored the importance of cognitive skills for economic growth. This result shifts attention to issues of school quality, and there developing countries have been much less successful in closing the gaps with developed countries. Without improving school quality, developing countries will find it difficult to improve their long run economic performance. JEL Classification: I2, O4, H4 Highlights: ! ! Improvements in long run growth are closely related to the level of cognitive skills of the population. ! ! Development policy has inappropriately emphasized school attainment as opposed to educational achievement, or cognitive skills. ! ! Developing countries, while improving in school attainment, have not improved in quality terms. ! ! School policy in developing countries should consider enhancing both basic and advanced skills.", "title": "" }, { "docid": "af40c4fe439738a72ee6b476aeb75f82", "text": "Object tracking is still a critical and challenging problem with many applications in computer vision. For this challenge, more and more researchers pay attention to applying deep learning to get powerful feature for better tracking accuracy. In this paper, a novel triplet loss is proposed to extract expressive deep feature for object tracking by adding it into Siamese network framework instead of pairwise loss for training. Without adding any inputs, our approach is able to utilize more elements for training to achieve more powerful feature via the combination of original samples. Furthermore, we propose a theoretical analysis by combining comparison of gradients and back-propagation, to prove the effectiveness of our method. In experiments, we apply the proposed triplet loss for three real-time trackers based on Siamese network. And the results on several popular tracking benchmarks show our variants operate at almost the same frame-rate with baseline trackers and achieve superior tracking performance than them, as well as the comparable accuracy with recent state-of-the-art real-time trackers.", "title": "" } ]
scidocsrr
6b5a82698a4a09e4b2ebd303091d8a6e
A Bayesian game approach for intrusion detection in wireless ad hoc networks
[ { "docid": "bd5e127cc3454bbf8a89c3f7d66fd624", "text": "Mobile ad hoc networking (MANET) has become an exciting and important technology in recent years because of the rapid proliferation of wireless devices. MANETs are highly vulnerable to attacks due to the open medium, dynamically changing network topology, cooperative algorithms, lack of centralized monitoring and management point, and lack of a clear line of defense. In this paper, we report our progress in developing intrusion detection (ID) capabilities for MANET. Building on our prior work on anomaly detection, we investigate how to improve the anomaly detection approach to provide more details on attack types and sources. For several well-known attacks, we can apply a simple rule to identify the attack type when an anomaly is reported. In some cases, these rules can also help identify the attackers. We address the run-time resource constraint problem using a cluster-based detection scheme where periodically a node is elected as the ID agent for a cluster. Compared with the scheme where each node is its own ID agent, this scheme is much more efficient while maintaining the same level of effectiveness. We have conducted extensive experiments using the ns-2 and MobiEmu environments to validate our research.", "title": "" }, { "docid": "9db9902c0e9d5fc24714554625a04c7a", "text": "Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these “Sybil attacks” is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil attacks are always possible except under extreme and unrealistic assumptions of resource parity and coordination among entities.", "title": "" }, { "docid": "ef018fb8fdfbd0de2797cb6328dbb38a", "text": "As mobile ad hoc network applications are deployed, security emerges as a central requirement. In this paper, we introduce the wormhole attack, a severe attack in ad hoc networks that is particularly challenging to defend against. The wormhole attack is possible even if the attacker has not compromised any hosts, and even if all communication provides authenticity and confidentiality. In the wormhole attack, an attacker records packets (or bits) at one location in the network, tunnels them (possibly selectively) to another location, and retransmits them there into the network. The wormhole attack can form a serious threat in wireless networks, especially against many ad hoc network routing protocols and location-based wireless security systems. For example, most existing ad hoc network routing protocols, without some mechanism to defend against the wormhole attack, would be unable to find routes longer than one or two hops, severely disrupting communication. We present a new, general mechanism, called packet leashes, for detecting and thus defending against wormhole attacks, and we present a specific protocol, called TIK, that implements leashes.", "title": "" } ]
[ { "docid": "4538c5874872a0081593407d09e4c6fa", "text": "PatternAttribution is a recent method, introduced in the vision domain, that explains classifications of deep neural networks. We demonstrate that it also generates meaningful interpretations in the language domain.", "title": "" }, { "docid": "4c8412dca4cbc9f65d29fffa95dee288", "text": "This paper deals with fundamental change processes in socio-technical systems. It offers a typology of changes based on a multi-level perspective of innovation. Three types of change processes are identified: reproduction, transformation and transition. ‘Reproduction’ refers to incremental change along existing trajectories. ‘Transformation’ refers to a change in the direction of trajectories, related to a change in rules that guide innovative action. ‘Transition’ refers to a discontinuous shift to a new trajectory and system. Using the multi-level perspective, the underlying mechanisms of these change processes are identified. The transformation and transition processes are empirically illustrated by two contrasting case studies: the hygienic transition from cesspools to integrated sewer systems (1870–1930) and the transformation in waste management (1960–2000) in the Netherlands. r 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4a89f20c4b892203be71e3534b32449c", "text": "This paper draws together knowledge from a variety of fields to propose that innovation management can be viewed as a form of organisational capability. Excellent companies invest and nurture this capability, from which they execute effective innovation processes, leading to innovations in new product, services and processes, and superior business performance results. An extensive review of the literature on innovation management, along with a case study of Cisco Systems, develops a conceptual model of the firm as an innovation engine. This new operating model sees substantial investment in innovation capability as the primary engine for wealth creation, rather than the possession of physical assets. Building on the dynamic capabilities literature, an “innovation capability” construct is proposed with seven elements. These are vision and strategy, harnessing the competence base, organisational intelligence, creativity and idea management, organisational structures and systems, culture and climate, and management of technology.", "title": "" }, { "docid": "82b559efc5d3cd552a6322ff63007825", "text": "OBJECTIVE\nThe purpose was to add to the body of knowledge regarding the impact of interruption on acute care nurses' cognitive workload, total task completion times, nurse frustration, and medication administration error while programming a patient-controlled analgesia (PCA) pump.\n\n\nBACKGROUND\nData support that the severity of medication administration error increases with the number of interruptions, which is especially critical during the administration of high-risk medications. Bar code technology, interruption-free zones, and medication safety vests have been shown to decrease administration-related errors. However, there are few published data regarding the impact of number of interruptions on nurses' clinical performance during PCA programming.\n\n\nMETHOD\nNine acute care nurses completed three PCA pump programming tasks in a simulation laboratory. Programming tasks were completed under three conditions where the number of interruptions varied between two, four, and six. 
Outcome measures included cognitive workload (six NASA Task Load Index [NASA-TLX] subscales), total task completion time (seconds), nurse frustration (NASA-TLX Subscale 6), and PCA medication administration error (incorrect final programming).\n\n\nRESULTS\nIncreases in the number of interruptions were associated with significant increases in total task completion time ( p = .003). We also found increases in nurses' cognitive workload, nurse frustration, and PCA pump programming errors, but these increases were not statistically significant.\n\n\nAPPLICATIONS\nComplex technology use permeates the acute care nursing practice environment. These results add new knowledge on nurses' clinical performance during PCA pump programming and high-risk medication administration.", "title": "" }, { "docid": "4dc8b11b9123c6a25dcf4765d77cb6ca", "text": "Accurate and reliable information about land use and land cover is essential for change detection and monitoring of the specified area. It is also useful in the updating the geographical information about the area. Over the past decade, a significant amount of research has been conducted concerning the application of different classifier and image fusion technique in this area. In this paper, introductions to the land use and land cover classification techniques are given and the results from a number of different techniques are compared. It has been found that, in general fusion technique perform better than either conventional classifier or supervised/unsupervised classification.", "title": "" }, { "docid": "96aced9d0a24431f303eec8b5293c93f", "text": "The discourse properties of text have long been recognized as critical to language technology, and over the past 40 years, our understanding of and ability to exploit the discourse properties of text has grown in many ways. This essay briefly recounts these developments, the technology they employ, the applications they support, and the new challenges that each subsequent development has raised. We conclude with the challenges faced by our current understanding of discourse, and the applications that meeting these challenges will promote. 1 Why bother with discourse? Research in Natural Language Processing (NLP) has long benefitted from the fact that text can often be treated as simply a bag of words or a bag of sentences. But not always: Position often matters — e.g., It is well-known that the first one or two sentences in a news report usually comprise its best extractive summary. Order often matters – e.g., very different events are conveyed depending on how clauses and sentences are ordered. (1) a. I said the magic words, and a genie appeared. b. A genie appeared, and I said the magic words. Adjacency often matters — e.g., attributed material may span a sequence of adjacent sentences, and contrasts are visible through sentence juxtaposition. Context always matters — e.g., All languages achieve economy through minimal expressions that can only convey intended meaning when understood in context. Position, order, adjacency and context are intrinsic features of discourse, and research on discourse processing attempts to solve the challenges posed by context-bound expressions and the discourse structures that give rise, when linearized, to position, order and adjacency. But challenges are not why Language Technology (LT) researchers should care about discourse: Rather, discourse can enable LT to overcome known obstacles to better performance. 
Consider automated summarization and machine translation: Humans regularly judge output quality in terms that include referential clarity and coherence. Systems can only improve here by paying attention to discourse — i.e., to linguistic features above the level of ngrams and single sentences. (In fact, we predict that as soon as cheap — i.e., non-manual – methods are found for reliably assessing these features — for example, using proxies like those suggested in (Pitler et al., 2010) — they will supplant, or at least complement today’s common metrics, Bleu and Rouge that say little about what matters to human text understanding (Callison-Burch et al., 2006).) Consider also work on automated text simplification: One way that human editors simplify text is by re-expressing a long complex sentence as a discourse sequence of simple sentences. Researchers should be able to automate this through understanding the various ways that information is conveyed in discourse. Other examples of LT applications already benefitting from recognizing and applying discourse-level information include automated assessment of student essays (Burstein and Chodorow, 2010); summarization (Thione et al., 2004), infor-", "title": "" }, { "docid": "e637dc1aee0632f61a29c8609187a98b", "text": "Scene coordinate regression has become an essential part of current camera re-localization methods. Different versions, such as regression forests and deep learning methods, have been successfully applied to estimate the corresponding camera pose given a single input image. In this work, we propose to regress the scene coordinates pixel-wise for a given RGB image by using deep learning. Compared to the recent methods, which usually employ RANSAC to obtain a robust pose estimate from the established point correspondences, we propose to regress confidences of these correspondences, which allows us to immediately discard erroneous predictions and improve the initial pose estimates. Finally, the resulting confidences can be used to score initial pose hypothesis and aid in pose refinement, offering a generalized solution to solve this task.", "title": "" }, { "docid": "413df06d6ba695aa5baa13ea0913c6e6", "text": "Time stamping is a technique used to prove the existence of certain digital data prior to a specific point in time. With the recent development of electronic commerce, time stamping is now widely recognized as an important technique used to ensure the integrity of digital data for a long time period. Various time stamping schemes and services have been proposed. When one uses a certain time stamping service, he should confirm in advance that its security level sufficiently meets his security requirements. However, time stamping schemes are generally so complicated that it is not easy to evaluate their security levels accurately. It is important for users to have a good grasp of current studies of time stamping schemes and to make use of such studies to select an appropriate time stamping service. Une and Matsumoto [2000], [2001a], [2001b] and [2002] have proposed a method of classifying time stamping schemes and evaluating their security systematically. Their papers have clarified the objectives, functions and entities involved in time stamping schemes and have discussed the conditions sufficient to detect the alteration of a time stamp in each scheme. This paper explains existing problems regarding the security evaluation of time stamping schemes and the results of Une and Matsumoto [2000], [2001a], [2001b] and [2002]. 
It also applies their results to some existing time stamping schemes and indicates possible directions of further research into time stamping schemes.", "title": "" }, { "docid": "e48dae70582d949a60a5f6b5b05117a7", "text": "Background: Multiple-Valued Logic (MVL) is the non-binary-valued system, in which more than two levels of information content are available, i.e., L>2. In modern technologies, the dual level binary logic circuits have normally been used. However, these suffer from several significant issues such as the interconnection considerations including the parasitics, area and power dissipation. The MVL circuits have been proved to be consisting of reduced circuitry and increased efficiency in terms of higher utilization of the circuit resources through multiple levels of voltage. Innumerable algorithms have been developed for designing such MVL circuits. Extended form is one of the algebraic techniques used in designing these MVL circuits. Voltage mode design has also been employed for constructing various types of MVL circuits. Novelty: This paper proposes a novel MVLTRANS inverter, designed using conventional CMOS and pass transistor logic based MVLPTL inverter. Binary to MVL Converter/Encoder and MVL to binary Decoder/Converter are also presented in the paper. In addition to the proposed decoder circuit, a 4-bit novel MVL Binary decoder circuit is also proposed. Tools Used: All these circuits are designed, implemented and verified using Cadence® Virtuoso tools using 180 nm technology library.", "title": "" }, { "docid": "68a90df0f3de170d64d3245c8b316460", "text": "In this paper, we propose a new framework for training vision-based agent for First-Person Shooter (FPS) Game, in particular Doom. Our framework combines the state-of-the-art reinforcement learning approach (Asynchronous Advantage Actor-Critic (A3C) model [Mnih et al. (2016)]) with curriculum learning. Our model is simple in design and only uses game states from the AI side, rather than using opponents’ information [Lample & Chaplot (2016)]. On a known map, our agent won 10 out of the 11 attended games and the champion of Track1 in ViZDoom AI Competition 2016 by a large margin, 35% higher score than the second place.", "title": "" }, { "docid": "9e93c2ecfd268f36d0da9e43ab63baa8", "text": "We present new and review existing algorithms for the numerical integration of multivariate functions defined over d-dimensional cubes using several variants of the sparse grid method first introduced by Smolyak [49]. In this approach, multivariate quadrature formulas are constructed using combinations of tensor products of suitable one-dimensional formulas. The computing cost is almost independent of the dimension of the problem if the function under consideration has bounded mixed derivatives. We suggest the usage of extended Gauss (Patterson) quadrature formulas as the one‐dimensional basis of the construction and show their superiority in comparison to previously used sparse grid approaches based on the trapezoidal, Clenshaw–Curtis and Gauss rules in several numerical experiments and applications. For the computation of path integrals further improvements can be obtained by combining generalized Smolyak quadrature with the Brownian bridge construction.", "title": "" }, { "docid": "81291c707a102fac24a9d5ab0665238d", "text": "CAN bus is ISO international standard serial communication protocol. It is one of the most widely used fieldbus in the world. It has become the standard bus of embedded industrial control LAN. 
Ethernet is the most common communication protocol standard that is applied in the existing LAN. Networked industrial control usually adopts fieldbus and Ethernet network, thus the protocol conversion problems of the heterogeneous network composed of Ethernet and CAN bus has become one of the research hotspots in the technology of the industrial control network. STM32F103RC ARM microprocessor was used in the design of the Ethernet-CAN protocol conversion module, the simplified TCP/IP communication protocol uIP protocol was adopted to improve the efficiency of the protocol conversion and guarantee the stability of the system communication. The results of the experiments show that the designed module can realize high-speed and transparent protocol conversion.", "title": "" }, { "docid": "4a97f2f6bcd9ea1c1cd1bd925529fa4f", "text": "OBJECTIVE\nArousal (AR) from sleep is associated with an autonomic reflex activation raising blood pressure and heart rate (HR). Recent studies indicate that sleep deprivation may affect the autonomic system, contributing to high vascular risk. Since in sleep disorders a sleep fragmentation and a partial sleep deprivation occurs, it could be suggested that the cardiovascular effects observed at AR from sleep might be physiologically affected when associated with sleep deprivation. The aim of the study was to examine the effect of sleep deprivation on cardiac arousal response in healthy subjects.\n\n\nMETHODS\nSeven healthy male subjects participated in a 64 h sleep deprivation protocol. Arousals were classified into four groups, i.e. >3<6 s, >6<10 s, >10<15 s and >15 s, according to their duration. Pre-AR HR values were measured during 10 beats preceding the AR onset, and the event-related HR fluctuations were calculated during the 20 beats following AR onset. As an index of cardiac activation, the ratio of highest HR in the post-AR period over the lowest recorded before AR (HR ratio) was calculated.\n\n\nRESULTS\nFor AR lasting less than 10 s, the occurrence of AR induces typical HR oscillations in a bimodal pattern, tachycardia followed by bradycardia. For AR lasting more than 10 s, i.e. awakenings, the pattern was unimodal with a more marked and sustained HR rise. The HR response was consistently similar across nights, during NREM and REM sleep, without difference between conditions.\n\n\nCONCLUSIONS\nOverall, total sleep deprivation appeared to have no substantial effect on cardiac response to spontaneous arousals and awakenings from sleep in healthy subjects. Further studies are needed to clarify the role of chronic sleep deprivation on cardiovascular risk in patients with sleep disorders.\n\n\nSIGNIFICANCE\nIn healthy subjects acute prolonged sleep deprivation does not affect the cardiac response to arousal.", "title": "" }, { "docid": "f5e14c4bf03acb092abc4b00d913e6f3", "text": "In incoherent Direct Sequence Optical Code Division Multiple Access system (DSOCDMA), the Multiple Access Interference (MAI) is one of the main limitations. To mitigate the MAI, many types of codes can be used to remove the contributions from users. In this paper, we study two types of unipolar codes used in DS-OCDMA system incoherent which are optical orthogonal codes (OOC) and the prime code (PC). We developed the characteristics of these codes i,e factors correlations, and the theoretical upper bound of the probability of error. 
The simulation results showed that PC codes have better performance than OOC codes.", "title": "" }, { "docid": "4deb101ba94ef958cfe84610f2abccc4", "text": "Iris recognition is considered to be the most reliable and accurate biometric identification system available. Iris recognition system captures an image of an individual’s eye, the iris in the image is then meant for the further segmentation and normalization for extracting its feature. The performance of iris recognition systems depends on the process of segmentation. Segmentation is used for the localization of the correct iris region in the particular portion of an eye and it should be done accurately and correctly to remove the eyelids, eyelashes, reflection and pupil noises present in iris region. In our paper we are using Daughman’s Algorithm segmentation method for Iris Recognition. Iris images are selected from the CASIA Database, then the iris and pupil boundary are detected from rest of the eye image, removing the noises. The segmented iris region was normalized to minimize the dimensional inconsistencies between iris regions by using Daugman’s Rubber Sheet Model. Then the features of the iris were encoded by convolving the normalized iris region with 1D Log-Gabor filters and phase quantizing the output in order to produce a bit-wise biometric template. The Hamming distance was chosen as a matching metric, which gave the measure of how many bits disagreed between the templates of the iris. Index Terms —Daughman’s Algorithm, Daugman’s Rubber Sheet Model, Hamming Distance, Iris Recognition, segmentation.", "title": "" }, { "docid": "a816ad26a49e0cf90dadc4db6dcba6d4", "text": "Despite the recent advances of deep reinforcement learning (DRL), agents trained by DRL tend to be brittle and sensitive to the training environment, especially in the multi-agent scenarios. In the multi-agent setting, a DRL agent’s policy can easily get stuck in a poor local optima w.r.t. its training partners – the learned policy may be only locally optimal to other agents’ current policies. In this paper, we focus on the problem of training robust DRL agents with continuous actions in the multi-agent learning setting so that the trained agents can still generalize when its opponents’ policies alter. To tackle this problem, we proposed a new algorithm, MiniMax Multi-agent Deep Deterministic Policy Gradient (M3DDPG) with the following contributions: (1) we introduce a minimax extension of the popular multi-agent deep deterministic policy gradient algorithm (MADDPG), for robust policy learning; (2) since the continuous action space leads to computational intractability in our minimax learning objective, we propose Multi-Agent Adversarial Learning (MAAL) to efficiently solve our proposed formulation. We empirically evaluate our M3DDPG algorithm in four mixed cooperative and competitive multi-agent environments and the agents trained by our method significantly outperforms existing baselines.", "title": "" }, { "docid": "d99302511e2eb17ce875d480d1bb78fc", "text": "Emojis allow us to describe objects, situations and even feelings with small images, providing a visual and quick way to communicate. In this paper, we analyse emojis used in Twitter with distributional semantic models. We retrieve 10 millions tweets posted by USA users, and we build several skip gram word embedding models by mapping in the same vectorial space both words and emojis. We test our models with semantic similarity experiments, comparing the output of our models with human assessment. 
We also carry out an exhaustive qualitative evaluation, showing interesting results.", "title": "" }, { "docid": "5158b5da8a561799402cb1ef3baa3390", "text": "We study the segmental recurrent neural network for end-to-end acoustic modelling. This model connects the segmental conditional random field (CRF) with a recurrent neural network (RNN) used for feature extraction. Compared to most previous CRF-based acoustic models, it does not rely on an external system to provide features or segmentation boundaries. Instead, this model marginalises out all the possible segmentations, and features are extracted from the RNN trained together with the segmental CRF. In essence, this model is self-contained and can be trained end-to-end. In this paper, we discuss practical training and decoding issues as well as the method to speed up the training in the context of speech recognition. We performed experiments on the TIMIT dataset. We achieved 17.3% phone error rate (PER) from the first-pass decoding — the best reported result using CRFs, despite the fact that we only used a zeroth-order CRF and without using any language model.", "title": "" }, { "docid": "1e638842d245472a0d8365b7da27b20a", "text": "How similar are the experiences of social rejection and physical pain? Extant research suggests that a network of brain regions that support the affective but not the sensory components of physical pain underlie both experiences. Here we demonstrate that when rejection is powerfully elicited--by having people who recently experienced an unwanted break-up view a photograph of their ex-partner as they think about being rejected--areas that support the sensory components of physical pain (secondary somatosensory cortex; dorsal posterior insula) become active. We demonstrate the overlap between social rejection and physical pain in these areas by comparing both conditions in the same individuals using functional MRI. We further demonstrate the specificity of the secondary somatosensory cortex and dorsal posterior insula activity to physical pain by comparing activated locations in our study with a database of over 500 published studies. Activation in these regions was highly diagnostic of physical pain, with positive predictive values up to 88%. These results give new meaning to the idea that rejection \"hurts.\" They demonstrate that rejection and physical pain are similar not only in that they are both distressing--they share a common somatosensory representation as well.", "title": "" }, { "docid": "04aa6c7ede8d418297e498d7a163f996", "text": "Dual active bridge (DAB) converters have been popular in high voltage, low and medium power DC-DC applications, as well as an intermediate high frequency link in solid state transformers. In this paper, a multilevel DAB (ML-DAB) has been proposed in which two active bridges produce two-level (2L)-5L, 5L-2L and 3L-5L voltage waveforms across the high frequency transformer. The proposed ML-DAB has the advantage of being used in high step-up/down converters, which deal with higher voltages, as compared to conventional two-level DABs. A three-level neutral point diode clamped (NPC) topology has been used in the high voltage bridge, which enables the semiconductor switches to be operated within a higher voltage range without the need for cascaded bridges or multiple two-level DAB converters. A symmetric modulation scheme, based on the least number of angular parameters rather than the duty-ratio, has been proposed for a different combination of bridge voltages. 
This ML-DAB is also suitable for maximum power point tracking (MPPT) control in photovoltaic applications. Steady-state analysis of the converter with symmetric phase-shift modulation is presented and verified using simulation and hardware experiments.", "title": "" } ]
scidocsrr
9d845ed970be20cea4601f39ba44ce41
Attacks on privacy and deFinetti's theorem
[ { "docid": "4daad9b24e477160999f350043125116", "text": "Recent research studied the problem of publishing microdata without revealing sensitive information, leading to the privacy preserving paradigms of k-anonymity and `-diversity. k-anonymity protects against the identification of an individual’s record. `-diversity, in addition, safeguards against the association of an individual with specific sensitive information. However, existing approaches suffer from at least one of the following drawbacks: (i) The information loss metrics are counter-intuitive and fail to capture data inaccuracies inflicted for the sake of privacy. (ii) `-diversity is solved by techniques developed for the simpler k-anonymity problem, which introduces unnecessary inaccuracies. (iii) The anonymization process is inefficient in terms of computation and I/O cost. In this paper we propose a framework for efficient privacy preservation that addresses these deficiencies. First, we focus on one-dimensional (i.e., single attribute) quasiidentifiers, and study the properties of optimal solutions for k-anonymity and `-diversity, based on meaningful information loss metrics. Guided by these properties, we develop efficient heuristics to solve the one-dimensional problems in linear time. Finally, we generalize our solutions to multi-dimensional quasi-identifiers using space-mapping techniques. Extensive experimental evaluation shows that our techniques clearly outperform the state-of-the-art, in terms of execution time and information loss.", "title": "" } ]
[ { "docid": "c4c3a0bccbf4e093750e1ef356d2f09c", "text": "We propose to enhance the RNN decoder in a neural machine translator (NMT) with external memory, as a natural but powerful extension to the state in the decoding RNN. This memory-enhanced RNN decoder is called MEMDEC. At each time during decoding, MEMDEC will read from this memory and write to this memory once, both with content-based addressing. Unlike the unbounded memory in previous work(Bahdanau et al., 2014) to store the representation of source sentence, the memory in MEMDEC is a matrix with predetermined size designed to better capture the information important for the decoding process at each time step. Our empirical study on Chinese-English translation shows that it can improve by 4.8 BLEU upon Groundhog and 5.3 BLEU upon on Moses, yielding the best performance achieved with the same training set.", "title": "" }, { "docid": "9933a9487c104907b3057c0280c18e0f", "text": "An improved rat-race coupler is described which, in the ideal case, has a pair of ports with infinite isolation between them at all frequencies. This improved rat-race coupler is useful in such applications as the design of balanced mixers with high LO to RF isolation.", "title": "" }, { "docid": "e14420212ec11882cc71a57fd68cbb08", "text": "Organizational ambidexterity refers to the ability of an organization to both explore and exploit—to compete in mature technologies and markets where efficiency, control, and incremental improvement are prized and to also compete in new technologies and markets where flexibility, autonomy, and experimentation are needed. In the past 15 years there has been an explosion of interest and research on this topic. We briefly review the current state of the research, highlighting what we know and don’t know about the topic. We close with a point of view on promising areas for ongoing research.", "title": "" }, { "docid": "b9e6d6d2625a713e8fa7491bc1b24223", "text": "Percutaneous radiofrequency ablation (RFA) is becoming a standard minimally invasive clinical procedure for the treatment of liver tumors. However, planning the applicator placement such that the malignant tissue is completely destroyed, is a demanding task that requires considerable experience. In this work, we present a fast GPU-based real-time approximation of the ablation zone incorporating the cooling effect of liver vessels. Weighted distance fields of varying RF applicator types are derived from complex numerical simulations to allow a fast estimation of the ablation zone. Furthermore, the heat-sink effect of the cooling blood flow close to the applicator's electrode is estimated by means of a preprocessed thermal equilibrium representation of the liver parenchyma and blood vessels. Utilizing the graphics card, the weighted distance field incorporating the cooling blood flow is calculated using a modular shader framework, which facilitates the real-time visualization of the ablation zone in projected slice views and in volume rendering. The proposed methods are integrated in our software assistant prototype for planning RFA therapy. The software allows the physician to interactively place virtual RF applicator models. 
The real-time visualization of the corresponding approximated ablation zone facilitates interactive evaluation of the tumor coverage in order to optimize the applicator's placement such that all cancer cells are destroyed by the ablation.", "title": "" }, { "docid": "f8984d660f39c66b3bd484ec766fa509", "text": "The present paper focuses on Cyber Security Awareness Campaigns, and aims to identify key factors regarding security which may lead them to failing to appropriately change people’s behaviour. Past and current efforts to improve information-security practices and promote a sustainable society have not had the desired impact. It is important therefore to critically reflect on the challenges involved in improving information-security behaviours for citizens, consumers and employees. In particular, our work considers these challenges from a Psychology perspective, as we believe that understanding how people perceive risks is critical to creating effective awareness campaigns. Changing behaviour requires more than providing information about risks and reactive behaviours – firstly, people must be able to understand and apply the advice, and secondly, they must be motivated and willing to do so – and the latter requires changes to attitudes and intentions. These antecedents of behaviour change are identified in several psychological models of behaviour. We review the suitability of persuasion techniques, including the widely used ‘fear appeals’. From this range of literature, we extract essential components for an awareness campaign as well as factors which can lead to a campaign’s success or failure. Finally, we present examples of existing awareness campaigns in different cultures (the UK and Africa) and reflect on these.", "title": "" }, { "docid": "dd8d88981fdbe20556407a025e3a4c3d", "text": "In this letter, a wideband dual-band magneto-electric dipole antenna with improved feeding structure is presented. A U-shaped electric dipole antenna is employed to generate the dual resonant frequencies. A folded shorted patch that works as a magnetic dipole antenna is assembled vertically to the ground plane. The antenna is excited by a novel feeding line with a polygonal structure that is designed to improve the impedance matching. Simulated and measured results show that the dual operation bands with bandwidths of 72% from 1.48 to 3.15 GHz and 21% from 4.67 to 5.78 GHz for |S11| <; -10 dB were achieved. Stable and symmetric unidirectional radiation patterns, low cross-polarization level, low back-radiation, and an antenna gain ranging from 6.5 to 9.1 dBi were obtained over the dual operating bands.", "title": "" }, { "docid": "7471de488292f4b4e5e62a432e11d719", "text": "This handbook with exercises reveals in formalisms, hitherto mainly used for hardware and software design and verification, unexpected mathematical beauty. The lambda calculus forms a prototype universal programming language, which in its untyped version is related to Lisp, and was treated in the first author’s classic The Lambda Calculus (1984). The formalism has since been extended with types and used in functional programming (Haskell, Clean) and proof assistants (Coq, Isabelle, HOL), used in designing and verifying IT products and mathematical proofs. In this book, the authors focus on three classes of typing for lambda terms: simple types, recursive types and intersection types. It is in these three formalisms of terms and types that the unexpected mathematical beauty is revealed. 
The treatment is authoritative and comprehensive, complemented by an exhaustive bibliography, and numerous exercises are provided to deepen the readers’ understanding and increase their confidence using types.", "title": "" }, { "docid": "b1ffdb1e3f069b78458a2b464293d97a", "text": "We consider the detection of activities from non-cooperating individuals with features obtained on the radio frequency channel. Since environmental changes impact the transmission channel between devices, the detection of this alteration can be used to classify environmental situations. We identify relevant features to detect activities of non-actively transmitting subjects. In particular, we distinguish with high accuracy an empty environment or a walking, lying, crawling or standing person, in case-studies of an active, device-free activity recognition system with software defined radios. We distinguish between two cases in which the transmitter is either under the control of the system or ambient. For activity detection the application of one-stage and two-stage classifiers is considered. Apart from the discrimination of the above activities, we can show that a detected activity can also be localized simultaneously within an area of less than 1 meter radius.", "title": "" }, { "docid": "b695906441d6435cdb3e348a4f2c94f6", "text": "Wearable sensors have recently seen a large increase in both research and commercialization. However, success in wearable sensors has been a mix of both progress and setbacks. Most of commercial progress has been in smart adaptation of existing mechanical, electrical and optical methods of measuring the body. This adaptation has involved innovations in how to miniaturize sensing technologies, how to make them conformal and flexible, and in the development of companion software that increases the value of the measured data. However, chemical sensing modalities have experienced greater challenges in commercial adoption, especially for non-invasive chemical sensors. There have also been significant challenges in making significant fundamental improvements to existing mechanical, electrical, and optical sensing modalities, especially in improving their specificity of detection. Many of these challenges can be understood by appreciating the body's surface (skin) as more of an information barrier than as an information source. With a deeper understanding of the fundamental challenges faced for wearable sensors and of the state-of-the-art for wearable sensor technology, the roadmap becomes clearer for creating the next generation of innovations and breakthroughs.", "title": "" }, { "docid": "b3bb84322c28a9d0493d9b8a626666e4", "text": "Underwater images often suffer from color distortion and low contrast, because light is scattered and absorbed when traveling through water. Such images with different color tones can be shot in various lighting conditions, making restoration and enhancement difficult. We propose a depth estimation method for underwater scenes based on image blurriness and light absorption, which can be used in the image formation model (IFM) to restore and enhance underwater images. Previous IFM-based image restoration methods estimate scene depth based on the dark channel prior or the maximum intensity prior. These are frequently invalidated by the lighting conditions in underwater images, leading to poor restoration results. The proposed method estimates underwater scene depth more accurately. 
Experimental results on restoring real and synthesized underwater images demonstrate that the proposed method outperforms other IFM-based underwater image restoration methods.", "title": "" }, { "docid": "7ba61c8c5eba7d8140c84b3e7cbc851a", "text": "One of the aims of modern First-Person Shooter (FPS) design is to provide an immersive experience to the player. This paper examines the role of sound in enabling such immersion and argues that, even in ‘realism’ FPS games, it may be achieved sonically through a focus on caricature rather than realism. The paper utilizes and develops previous work in which both a conceptual framework for the design and analysis of run and gun FPS sound is developed and the notion of the relationship between player and FPS soundscape as an acoustic ecology is put forward (Grimshaw and Schott 2007a; Grimshaw and Schott 2007b). Some problems of sound practice and sound reproduction in the game are highlighted and a conceptual solution is proposed.", "title": "" }, { "docid": "2439ce82bb2008fb0495f8a0ad6553fc", "text": "This paper presents a switched state-space modeling approach for a switched-capacitor power amplifier. In contrast to state of the art behavioral models for nonlinear devices like power amplifiers, the state-space representation allows a straightforward inclusion of the nonidealities of the applied input sources. Hence, adding noise on a power supply or phase distortions on the carrier signal do not require a redesign of the mathematical model. The derived state-space model (SSM), which can be efficiently implemented in any numerical simulation tool, allows a significant reduction of the required simulation run-time (14x speedup factor) with respect to standard Cadence Spectre simulations. The derived state-space model (SSM) has been implemented in MATLAB/Simulink and its results have been verified by comparison with Cadence Spectre simulations.", "title": "" }, { "docid": "eb5208a4793fa5c5723b20da0421af26", "text": "High-level synthesis promises a significant shortening of the FPGA design cycle when compared with design entry using register transfer level (RTL) languages. Recent evaluations report that C-to-RTL flows can produce results with a quality close to hand-crafted designs [1]. Algorithms which use dynamic, pointer-based data structures, which are common in software, remain difficult to implement well. In this paper, we describe a comparative case study using Xilinx Vivado HLS as an exemplary state-of-the-art high-level synthesis tool. Our test cases are two alternative algorithms for the same compute-intensive machine learning technique (clustering) with significantly different computational properties. We compare a data-flow centric implementation to a recursive tree traversal implementation which incorporates complex data-dependent control flow and makes use of pointer-linked data structures and dynamic memory allocation. The outcome of this case study is twofold: We confirm similar performance between the hand-written and automatically generated RTL designs for the first test case. The second case reveals a degradation in latency by a factor greater than 30× if the source code is not altered prior to high-level synthesis. We identify the reasons for this shortcoming and present code transformations that narrow the performance gap to a factor of four. 
We generalise our source-to-source transformations whose automation motivates research directions to improve high-level synthesis of dynamic data structures in the future.", "title": "" }, { "docid": "36da9ac0c2a111a7bfed3b9f1df845e2", "text": "This paper has the purpose of describing a new approach on firmware update on automotive ECUs. The firmware update process for certain automotive ECUs requires excessive time through the CAN bus. By using the delta flashing concept, firmware update time can be greatly reduced by reducing the quantity of the data that is transmitted over the network.", "title": "" }, { "docid": "7f56cb986ec4a6022883595ff0d8faa5", "text": "Fully convolutional deep neural networks have been asserted to be fast and precise frameworks with great potential in image segmentation. One of the major challenges in training such networks raises when the data are unbalanced, which is common in many medical imaging applications, such as lesion segmentation, where lesion class voxels are often much lower in numbers than non-lesion voxels. A trained network with unbalanced data may make predictions with high precision and low recall, being severely biased toward the non-lesion class which is particularly undesired in most medical applications where false negatives are actually more important than false positives. Various methods have been proposed to address this problem, including two-step training, sample re-weighting, balanced sampling, and more recently, similarity loss functions and focal loss. In this paper, we fully trained convolutional deep neural networks using an asymmetric similarity loss function to mitigate the issue of data imbalance and achieve much better tradeoff between precision and recall. To this end, we developed a 3D fully convolutional densely connected network (FC-DenseNet) with large overlapping image patches as input and an asymmetric similarity loss layer based on Tversky index (using $F_\\beta $ scores). We used large overlapping image patches as inputs for intrinsic and extrinsic data augmentation, a patch selection algorithm, and a patch prediction fusion strategy using B-spline weighted soft voting to account for the uncertainty of prediction in patch borders. We applied this method to multiple sclerosis (MS) lesion segmentation based on two different datasets of MSSEG 2016 and ISBI longitudinal MS lesion segmentation challenge, where we achieved average Dice similarity coefficients of 69.9% and 65.74%, respectively, achieving top performance in both the challenges. We compared the performance of our network trained with $F_\\beta $ loss, focal loss, and generalized Dice loss functions. Through September 2018, our network trained with focal loss ranked first according to the ISBI challenge overall score and resulted in the lowest reported lesion false positive rate among all submitted methods. Our network trained with the asymmetric similarity loss led to the lowest surface distance and the best lesion true positive rate that is arguably the most important performance metric in a clinical decision support system for lesion detection. The asymmetric similarity loss function based on $F_\\beta $ scores allows training networks that make a better balance between precision and recall in highly unbalanced image segmentation. 
We achieved superior performance in MS lesion segmentation using a patch-wise 3D FC-DenseNet with a patch prediction fusion strategy, trained with asymmetric similarity loss functions.", "title": "" }, { "docid": "e8f86dad01a7e3bd25bdabdc7a3d7136", "text": "In this paper, a wideband monopole antenna with high gain characteristics has been proposed. Number of slits was introduced at the far radiating edge to transform it to multiple monopole radiators. Partial ground plane has been used to widen the bandwidth while by inserting suitable slits at the radiating edges return loss and bandwidth has been improved. The proposed antenna provides high gain up to 13.2dB and the achieved impedance bandwidth is wider than an earlier reported design. FR4 Epoxy with dielectric constant 4.4 and loss tangent 0.02 has been used as substrate material. Antenna has been simulated using HFSS (High Frequency Structure Simulator) as a 3D electromagnetic field simulator, based on finite element method. A good settlement has been found between simulated and measured results. The proposed design is suitable for GSM (890-960MHz), GPS (L1:1575.42MHz, L2:1227.60MHz, L3:1381.05MHz, L4:1379.913MHz, L5:1176.45MHz), DCS (1710-1880MHz), PCS (1850-1990MHz), UMTS(1920-2170MHz), Wi-Fi/WLAN/Hiper LAN/IEEE 802.11 2.4GHz (2412-2484MHz), 3.6GHz (3657.5-3690.0MHz) and 4.9/5.0GHz (4915-5825MHz), Bluetooth (2400-2484MHz), WiMAX 2.3GHz (2.3-2.5GHz), 2.5GHz (2500-2690 MHz), 3.3GHz, 3.5GHz (3400-3600MHz) and 5.8GHz (5.6-5.9GHz) & LTE applications.", "title": "" }, { "docid": "bbdc213c082fd0573add260e99447f2d", "text": "Received: May 17, 2015. Received in revised form: October 15, 2015. Accepted: October 25, 2015. Although construction has been known as a highly complex application field for autonomous robotic systems, recent advances in this field offer great hope for using robotic capabilities to develop automated construction. Today, space research agencies seek to build infrastructures without human intervention, and construction companies look to robots with the potential to improve construction quality, efficiency, and safety, not to mention flexibility in architectural design. However, unlike production robots used, for instance, in automotive industries, autonomous robots should be designed with special consideration for challenges such as the complexity of the cluttered and dynamic working space, human-robot interactions and inaccuracy in positioning due to the nature of mobile systems and the lack of affordable and precise self-positioning solutions. This paper briefly reviews state-ofthe-art research into automated construction by autonomous mobile robots. We address and classify the relevant studies in terms of applications, materials, and robotic systems. We also identify ongoing challenges and discuss about future robotic requirements for automated construction.", "title": "" }, { "docid": "78cfd752153b96de918d6ebf4d6654cd", "text": "Machine learning is an integral technology many people utilize in all areas of human life. It is pervasive in modern living worldwide, and has multiple usages. One application is image classification, embraced across many spheres of influence such as business, finance, medicine, etc. to enhance produces, causes, efficiency, etc. This need for more accurate, detail-oriented classification increases the need for modifications, adaptations, and innovations to Deep Learning Algorithms. 
This article used Convolutional Neural Networks (CNN) to classify scenes in the CIFAR-10 database, and detect emotions in the KDEF database. The proposed method converted the data to the wavelet domain to attain greater accuracy and comparable efficiency to the spatial domain processing. By dividing image data into subbands, important feature learning occurred over differing low to high frequencies. The combination of the learned low and high frequency features, and processing the fused feature mapping resulted in an advance in the detection accuracy. Comparing the proposed methods to spatial domain CNN and Stacked Denoising Autoencoder (SDA), experimental findings revealed a substantial increase in accuracy.", "title": "" }, { "docid": "6fc1e0aa7525ea71176919b0ff8e7129", "text": "The hallmark of medial temporal lobe amnesia is a loss of episodic memory such that patients fail to remember new events that are set in an autobiographical context (an episode). A further symptom is a loss of recognition memory. The relationship between these two features has recently become contentious. Here, we focus on the central issue in this dispute — the relative contributions of the hippocampus and the perirhinal cortex to recognition memory. A resolution is vital not only for uncovering the neural substrates of these key aspects of memory, but also for understanding the processes disrupted in medial temporal lobe amnesia and the validity of animal models of this syndrome.", "title": "" }, { "docid": "7e2f657115b3c9163a7fe9b34d95a314", "text": "Even though several youth fatal suicides have been linked with school victimization, there is lack of evidence on whether cyberbullying victimization causes students to adopt suicidal behaviors. To investigate this issue, I use exogenous state-year variation in cyberbullying laws and information on high school students from the Youth Risk Behavioral Survey within a bivariate probit framework, and complement these estimates with matching techniques. I find that cyberbullying has a strong impact on all suicidal behaviors: it increases suicidal thoughts by 14.5 percentage points and suicide attempts by 8.7 percentage points. Even if the focus is on statewide fatal suicide rates, cyberbullying still leads to significant increases in suicide mortality, with these effects being stronger for men than for women. Since cyberbullying laws have an effect on limiting cyberbullying, investing in cyberbullying-preventing strategies can improve individual health by decreasing suicide attempts, and increase the aggregate health stock by decreasing suicide rates.", "title": "" } ]
scidocsrr
77f4af938394b1f1eea838a6ad3dffd9
Behind the Article: Recognizing Dialog Acts in Wikipedia Talk Pages
[ { "docid": "c3525081c0f4eec01069dd4bd5ef12ab", "text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.", "title": "" } ]
[ { "docid": "f06eab14e1422ad81e722d28431c6ed3", "text": "This paper explores teacher beliefs that influence the ways Information and Communications Technologies (ICT) are used in learning contexts. Much has been written about the impact of teachers' beliefs and attitudes to ICT as 'barriers' to ICT integration (Ertmer, Ottenbreit-paper takes a closer look at the types of beliefs that influence ICT practices in classrooms and the alignment of these beliefs to current pedagogical reform in Australia. The paper draws on data collected through the initial phase of a research project that involved an Industry Collaborative of four Catholic primary schools (prep-grade 7). Data are drawn from teacher surveys, interviews and document analysis. The results present specific links between ICT beliefs that are informing teachers' practices. ICT beliefs and practices are aligned to reform agenda for digital pedagogies. The findings of this research inform teacher ICT practice and requirements for ICT professional development.", "title": "" }, { "docid": "c4d7c0f4f585e805fd4c68f84c668cd5", "text": "Sparse representation of signals has been the focus of much research in the recent years. A vast majority of existing algorithms deal with vectors, and higher–order data like images are dealt with by vectorization. However, the structure of the data may be lost in the process, leading to a poorer representation and overall performance degradation. In this paper we propose a novel approach for sparse representation of positive definite matrices, where vectorization will destroy the inherent structure of the data. The sparse decomposition of a positive definite matrix is formulated as a convex optimization problem, which falls under the category of determinant maximization (MAXDET) problems [1], for which efficient interior point algorithms exist. Experimental results are shown with simulated examples as well as in real–world computer vision applications, demonstrating the suitability of the new model. This forms the first step toward extending the cornucopia of sparsity-based algorithms to positive definite matrices.", "title": "" }, { "docid": "eb1cdc68f06f9cb238c71cbac494a01a", "text": "This chapter introduces the reader to the various aspects of feature extraction covered in this book. Section 1 reviews definitions and notations and proposes a unified view of the feature extraction problem. Section 2 is an overview of the methods and results presented in the book, emphasizing novel contributions. Section 3 provides the reader with an entry point in the field of feature extraction by showing small revealing examples and describing simple but effective algorithms. Finally, Section 4 introduces a more theoretical formalism and points to directions of research and open problems.", "title": "" }, { "docid": "4fb391446ca62dc2aa52ce905d92b036", "text": "The frequency and intensity of natural disasters has increased significantly in recent decades, and this trend is expected to continue. Hence, understanding and predicting human evacuation behavior and mobility will play a vital role in planning effective humanitarian relief, disaster management, and long-term societal reconstruction. However, existing models are shallow models, and it is difficult to apply them for understanding the “deep knowledge” of human mobility. 
Therefore, in this study, we collect big and heterogeneous data (e.g., GPS records of 1.6 million users over 3 years, data on earthquakes that have occurred in Japan over 4 years, news report data, and transportation network data), and we build an intelligent system, namely, DeepMob, for understanding and predicting human evacuation behavior and mobility following different types of natural disasters. The key component of DeepMob is based on a deep learning architecture that aims to understand the basic laws that govern human behavior and mobility following natural disasters, from big and heterogeneous data. Furthermore, based on the deep learning model, DeepMob can accurately predict or simulate a person’s future evacuation behaviors or evacuation routes under different disaster conditions. Experimental results and validations demonstrate the efficiency and superior performance of our system, and suggest that human mobility following disasters may be predicted and simulated more easily than previously thought.", "title": "" }, { "docid": "ba203abd0bd55fc9d06fe979a604d741", "text": "Graph Convolutional Networks (GCNs) have become a crucial tool on learning representations of graph vertices. The main challenge of adapting GCNs on largescale graphs is the scalability issue that it incurs heavy cost both in computation and memory due to the uncontrollable neighborhood expansion across layers. In this paper, we accelerate the training of GCNs through developing an adaptive layer-wise sampling method. By constructing the network layer by layer in a top-down passway, we sample the lower layer conditioned on the top one, where the sampled neighborhoods are shared by different parent nodes and the over expansion is avoided owing to the fixed-size sampling. More importantly, the proposed sampler is adaptive and applicable for explicit variance reduction, which in turn enhances the training of our method. Furthermore, we propose a novel and economical approach to promote the message passing over distant nodes by applying skip connections. Intensive experiments on several benchmarks verify the effectiveness of our method regarding the classification accuracy while enjoying faster convergence speed.", "title": "" }, { "docid": "41f9b137b7e7a0b1b02c45d1eef216f1", "text": "Personality is an important psychological construct accounting for individual differences in people. Computational personality recognition from online social networks is gaining increased research attention in recent years. However, the majority of existing methodologies mainly focused on human-designed shallow statistical features and didn’t make full use of the rich semantic information in user-generated texts, while those texts are exactly the most direct way for people to translate their internal thoughts and emotions into a form that others can understand. This paper proposes a deep learning-based approach for personality recognition from text posts of online social network users. We first utilize a hierarchical deep neural network composed of our newly designed AttRCNN structure and a variant of the Inception structure to learn the deep semantic features of each user’s text posts. Then we concatenate the deep semantic features with the statistical linguistic features obtained directly from the text posts, and feed them into traditional regression algorithms to predict the real-valued Big Five personality scores. 
Experimental results show that the deep semantic feature vectors learned from our proposed neural network are more effective than the other four kinds of non-trivial baseline features; the approach that utilizes the concatenation of our deep semantic features and the statistical linguistic features as the input of the gradient boosting regression algorithm achieves the lowest average prediction error among all the approaches tested by us.", "title": "" }, { "docid": "371d28cf9be2e7fa95ac26075b1e96ba", "text": "The noun compound – a sequence of nouns which function as a single noun – is very common in English texts. No language processing system should ignore expressions like steel soup pot cover if it wants to be serious about such high-end applications of computational linguistics as question answering, information extraction, text summarization, machine translation – the list goes on. Processing noun compounds, however, is far from trouble-free. For one thing, they can be bracketed in various ways: is it steel soup, steel pot or steel cover? Then there are relations inside a compound, annoyingly not signalled by any words: does pot contain soup or is it for cooking soup? These and many other research challenges are the subject of this special issue. The volume opens with Preslav Nakov’s survey paper on the interpretation of noun compounds. It serves as en excellent, thorough introduction to the whole business of studying noun compounds computationally. Both theoretical and computational linguistics consider various formal definitions of the compound, its creation, its types and properties, its applications, its approximation by paraphrases. The discussion is also illustrated by a range of languages other than English. Next, the problem of bracketing is given a few typical solutions. There follows a detailed look at noun compound semantics, including coarse-grained and very fine-grained inventories of relations among nouns in a compound. Finally, a “capstone” project is presented: textual entailment, a tool which can be immensely helpful in many high-end applications. Diarmuid Ó Séaghdha and Ann Copestake tell us how to interpret compound nouns by classifying their relations with kernel methods. The kernels implement intuitive notions of lexical and relational similarity which are computed using distributional information extracted from large text corpora. The classification is tested at three different levels of specificity. Impressively, in all cases a combination of both lexical and relational information improves upon either source taken alone.", "title": "" }, { "docid": "b4c5ddab0cb3e850273275843d1f264f", "text": "The increase of malware that are exploiting the Internet daily has become a serious threat. The manual heuristic inspection of malware analysis is no longer considered effective and efficient compared against the high spreading rate of malware. Hence, automated behavior-based malware detection using machine learning techniques is considered a profound solution. The behavior of each malware on an emulated (sandbox) environment will be automatically analyzed and will generate behavior reports. These reports will be preprocessed into sparse vector models for further machine learning (classification). The classifiers used in this research are k-Nearest Neighbors (kNN), Naïve Bayes, J48 Decision Tree, Support Vector Machine (SVM), and Multilayer Perceptron Neural Network (MlP). 
Based on the analysis of the tests and experimental results for all five classifiers, the overall best performance was achieved by the J48 decision tree, with a recall of 95.9%, a false positive rate of 2.4%, a precision of 97.3%, and an accuracy of 96.8%. In summary, it can be concluded that a proof of concept based on automatic behavior-based malware analysis and machine learning techniques can detect malware quite effectively and efficiently.", "title": "" }, { "docid": "9d210dc8bc48e4ff9bf72c260f169ada", "text": "We introduce a formal model of teaching in which the teacher is tailored to a particular learner, yet the teaching protocol is designed so that no collusion is possible. Not surprisingly, such a model remedies the non-intuitive aspects of other models in which the teacher must successfully teach any consistent learner. We prove that any class that can be exactly identified by a deterministic polynomial-time algorithm with access to a very rich set of example-based queries is teachable by a computationally unbounded teacher and a polynomial-time learner. In addition, we present other general results relating this model of teaching to various previous results. We also consider the problem of designing teacher/learner pairs in which both the teacher and the learner are polynomial-time algorithms, and describe teacher/learner pairs for the classes of 1-decision lists and Horn sentences.", "title": "" }, { "docid": "8ab53b0100ce36ace61660c9c8e208b4", "text": "A novel current-pumped battery charger (CPBC) is proposed in this paper to improve Li-ion battery charging performance. A complete charging process, consisting of three subprocesses, namely: 1) the bulk current charging process; 2) the pulsed current charging process; and 3) the pulsed float charging process, can be automatically implemented by using the inherent characteristics of a current-pumped phase-locked loop (CPLL). A design example for a 700-mA·h Li-ion battery is built to assess the CPBC’s performance. In comparison with the conventional phase-locked battery charger, the available battery capacity and charging efficiency of the proposed CPBC are improved by about 6.9% and 1.5%, respectively. The experimental results show that a CPLL is well suited to implementing a Li-ion battery pulse charger.", "title": "" }, { "docid": "f6e1cb075098ca407ec6e98073702d90", "text": "In automatic speech recognition (ASR) systems, recurrent neural network language models (RNNLMs) are used to rescore a word lattice or an N-best hypothesis list. Because training is expensive, the RNNLM’s vocabulary accommodates only a small shortlist of the most frequent words. This leads to suboptimal performance if the input speech contains many out-of-shortlist (OOS) words. An effective solution is to increase the shortlist size and retrain the entire network, which is highly inefficient. Therefore, we propose an efficient method to expand the shortlist of a pretrained RNNLM without incurring expensive retraining and without using additional training data. Our method exploits the structure of the RNNLM, which can be decoupled into three parts: the input projection layer, the middle layers, and the output projection layer. Specifically, our method expands the word embedding matrices in the projection layers and keeps the middle layers unchanged. In this approach, the functionality of the pretrained RNNLM is correctly maintained as long as OOS words are properly modeled in the two embedding spaces.
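A minimal sketch of the shortlist-expansion idea described above, assuming a toy LSTM language model in place of the pretrained RNNLM, and assuming the similar in-shortlist word for each OOS word is already known (how such words are chosen is described in the sentences that follow):

```python
# Illustrative sketch only: a toy LSTM language model stands in for the pretrained
# RNNLM; the in-shortlist word ids used to initialise the OOS rows are assumptions.
import torch
import torch.nn as nn

emb_dim, hid_dim, old_vocab = 32, 64, 1000

class RNNLM(nn.Module):
    def __init__(self, vocab):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb_dim)                  # input projection layer
        self.rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)   # middle layers
        self.out = nn.Linear(hid_dim, vocab)                     # output projection layer
    def forward(self, x, state=None):
        h, state = self.rnn(self.emb(x), state)
        return self.out(h), state

model = RNNLM(old_vocab)  # pretend this has already been trained

def expand_vocab(model, similar_ids):
    """Grow both embedding matrices by len(similar_ids) OOS words, copying the
    pretrained rows and initialising each new row from an in-shortlist word;
    the LSTM (middle layers) is left untouched."""
    n_new = len(similar_ids)
    new_emb = nn.Embedding(old_vocab + n_new, emb_dim)
    new_out = nn.Linear(hid_dim, old_vocab + n_new)
    with torch.no_grad():
        new_emb.weight[:old_vocab] = model.emb.weight
        new_out.weight[:old_vocab] = model.out.weight
        new_out.bias[:old_vocab] = model.out.bias
        for i, sim in enumerate(similar_ids):
            new_emb.weight[old_vocab + i] = model.emb.weight[sim]
            new_out.weight[old_vocab + i] = model.out.weight[sim]
            new_out.bias[old_vocab + i] = model.out.bias[sim]
    model.emb, model.out = new_emb, new_out
    return model

# Two hypothetical OOS words, each mapped to one similar in-shortlist word id.
model = expand_vocab(model, similar_ids=[17, 42])
logits, _ = model(torch.randint(0, old_vocab + 2, (1, 5)))
print(logits.shape)  # torch.Size([1, 5, 1002])
```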
We propose to model the OOS words by borrowing linguistic knowledge from appropriate in-shortlist words. Additionally, we propose to generate the list of OOS words used to expand the vocabulary in an unsupervised manner, by automatically extracting them from ASR output.", "title": "" }, { "docid": "3da64db5e0d9474eb2194e73f71e0d6c", "text": "Standard cutaneous innervation maps show strict midline demarcation. Although the authors of these maps accept variability of peripheral nerve distribution, or occasionally even the midline overlap of cutaneous nerves, this concept seems to be neglected by many other anatomists. To support the statement that such transmedian overlap exists, we performed an extensive literature search and found ample evidence, for all regions (head/neck, thorax/abdomen, back, perineum, and genitalia), that peripheral nerves cross the midline or communicate across the midline. This concept has substantial clinical implications, most notably in anesthesia and perineural tumor spread. This article serves as a springboard for future anatomical, clinical, and experimental research.", "title": "" }, { "docid": "4b9df4116960cd3e3300d87e4f97e1e9", "text": "Large data collections required for the training of neural networks often contain sensitive information such as the medical histories of patients, and the privacy of the training data must be preserved. In this paper, we introduce a dropout technique with an elegant Bayesian interpretation, and show that the noise it intrinsically adds, primarily for regularization, can be exploited to obtain a degree of differential privacy. The iterative nature of training neural networks presents a challenge for privacy-preserving estimation, since multiple iterations increase the amount of noise added. We overcome this by using a relaxed notion of differential privacy, called concentrated differential privacy, which provides tighter estimates of the overall privacy loss. We demonstrate the accuracy of our privacy-preserving dropout algorithm on benchmark datasets.", "title": "" }, { "docid": "c612e2ad86429709d4eb567d4717f752", "text": "This brief presents a duty cycle corrector (DCC) that uses a binary search algorithm with a successive approximation register (SAR). The proposed DCC consists of a duty-cycle detector, a duty-cycle adjuster, its controller, and an output buffer. In order to achieve fast duty correction with a small die area, a SAR controller is used as the duty-correction controller. The proposed DCC circuit has been implemented and fabricated in a 0.13-μm CMOS process and occupies 0.048 mm². The measured duty-cycle error for a 50% duty rate is below 1% (or 10 ps) for external input duty-cycle errors within 320 ps. The duty cycle of the output signal is corrected within only 14 cycles. This DCC operates from 312.5 MHz to 1 GHz and dissipates 3.2 mW at 1 GHz.", "title": "" }, { "docid": "d972e23eb49c15488d2159a9137efb07", "text": "One of the main challenges of the solid-state transformer (SST) lies in the implementation of the dc–dc stage. In this paper, a quadruple-active-bridge (QAB) dc–dc converter is investigated for use as a basic module of a modular three-stage SST. Besides high power density and soft-switching operation (also found in other converters), the QAB converter provides a solution with a reduced number of high-frequency transformers, since more bridges are connected to the same multiwinding transformer.
To ensure soft switching over the entire operating range of the QAB converter, the triangular current-mode modulation strategy, previously adopted for the dual-active-bridge converter, is extended to the QAB converter. The theoretical analysis is developed considering balanced (equal power processed by the medium-voltage (MV) cells) and unbalanced (unequal power processed by the MV cells) conditions. In order to validate the theoretical analysis developed in the paper, a 2-kW prototype is built and tested experimentally.", "title": "" }, { "docid": "88dea71422ca32235579e03bf66a3e07", "text": "Compared to truly negative cultures, false-positive blood cultures not only increase laboratory work but also prolong patients’ lengths of stay and the use of broad-spectrum antibiotics, both of which are likely to increase antibiotic resistance and patient morbidity. The increased patient suffering and surplus costs caused by blood culture contamination motivate substantial measures to decrease the rate of contamination, including the use of dedicated phlebotomy teams. The present study evaluated the effect of a simple informational intervention aimed at reducing blood culture contamination at Skåne University Hospital (SUS), Malmö, Sweden, over 3.5 months, focusing on departments that collect many blood cultures. The main outcomes examined were pre- and postintervention contamination rates, analyzed with a multivariate logistic regression model adjusting for relevant determinants of contamination. A total of 51,264 blood culture sets were drawn from 14,826 patients during the study period (January 2006 to December 2009). The blood culture contamination rate decreased from 2.59% preintervention to 2.23% postintervention (odds ratio, 0.86; 95% confidence interval, 0.76 to 0.98). A similar decrease in relevant bacterial isolates was not found postintervention. Contamination rates at three auxiliary hospitals did not decrease during the same period. The effect of the intervention on phlebotomists’ knowledge of blood culture routines was also evaluated, showing a clear increase in the level of knowledge among interviewed phlebotomists postintervention. The present study shows that a relatively simple informational intervention can have a significant effect on the level of contaminated blood cultures, even in a setting with low contamination rates where nurses and auxiliary nurses perform phlebotomies.", "title": "" }, { "docid": "5cfc4911a59193061ab55c2ce5013272", "text": "What can you do with a million images? In this paper, we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks, we are able to find similar scenes that contain image fragments which convincingly complete the image. Our algorithm is entirely data-driven, requiring no annotations or labeling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of image completions, and we allow users to select among them.
We demonstrate the superiority of our algorithm over existing image completion approaches.", "title": "" }, { "docid": "8624bdce9b571418f88f4adb52984462", "text": "Video-based traffic flow monitoring is a fast-emerging field driven by the continuous development of computer vision. This paper presents a survey of state-of-the-art video processing techniques for traffic flow monitoring. First, vehicle detection, the initial step of video processing, is covered; detection methods are classified into background-modeling-based methods and non-background-modeling-based methods. Nighttime detection, in particular, is more challenging due to poor illumination and sensitivity to light. Then, tracking techniques, including 3D model-based, region-based, active-contour-based, and feature-based tracking, are presented. A variety of algorithms, including the MeanShift algorithm, the Kalman filter, and the particle filter, are applied in the tracking process. In addition, shadows and vehicle occlusion complicate vehicle detection, tracking, and related tasks. Building on these video processing techniques, behavior understanding, including traffic incident detection, is then discussed. Finally, key challenges in traffic flow monitoring are outlined.", "title": "" }, { "docid": "de061c5692bf11876c03b9b5e7c944a0", "text": "The purpose of this article is to summarize several change theories and assumptions about the nature of change. The author shows how successful change can be encouraged and facilitated for long-term success. The article compares the characteristics of Lewin’s Three-Step Change Theory, Lippitt’s Phases of Change Theory, Prochaska and DiClemente’s Change Theory, Social Cognitive Theory, and the Theory of Reasoned Action and Planned Behavior to one another. Leading industry experts will need to continually review and provide new information relative to the change process and to our evolving society and culture. There are many change theories, and some of the most widely recognized are briefly summarized in this article. The theories serve as a testimony to the fact that change is a real phenomenon. It can be observed and analyzed through various steps or phases. The theories have been conceptualized to answer the question, “How does successful change happen?” Lewin’s Three-Step Change Theory: Kurt Lewin (1951) introduced the three-step change model. This social scientist views behavior as a dynamic balance of forces working in opposing directions. Driving forces facilitate change because they push employees in the desired direction. Restraining forces hinder change because they push employees in the opposite direction. Therefore, these forces must be analyzed, and Lewin’s three-step model can help shift the balance in the direction of the planned change (http://www.csupomona.edu/~jvgrizzell/best_practices/bctheory.html). According to Lewin, the first step in the process of changing behavior is to unfreeze the existing situation or status quo. The status quo is considered the equilibrium state. Unfreezing is necessary to overcome the strains of individual resistance and group conformity. Unfreezing can be achieved by the use of three methods. First, increase the driving forces that direct behavior away from the existing situation or status quo.
Second, decrease the restraining forces that negatively affect the movement from the existing equilibrium. Third, find a combination of the two methods listed above. Some activities that can assist in the unfreezing step include: motivating participants by preparing them for change, building trust and recognition of the need to change, and actively participating in recognizing problems and brainstorming solutions within a group (Robbins 564-65). Lewin’s second step in the process of changing behavior is movement. In this step, it is necessary to move the target system to a new level of equilibrium. Three actions that can assist in the movement step include: persuading employees to agree that the status quo is not beneficial to them and encouraging them to view the problem from a fresh perspective, working together on a quest for new, relevant information, and connecting the views of the group to well-respected, powerful leaders who also support the change (http://www.csupomona.edu/~jvgrizzell/best_practices/bctheory.html). The third step of Lewin’s three-step change model is refreezing. This step needs to take place after the change has been implemented in order for it to be sustained or “stick” over time. It is highly likely that the change will be short-lived and the employees will revert to their old equilibrium (behaviors) if this step is not taken. Refreezing is the actual integration of the new values into the community’s values and traditions. The purpose of refreezing is to stabilize the new equilibrium resulting from the change by balancing both the driving and restraining forces. One action that can be used to implement Lewin’s third step is to reinforce new patterns and institutionalize them through formal and informal mechanisms, including policies and procedures (Robbins 564-65). Therefore, Lewin’s model illustrates the effects of forces that either promote or inhibit change. Specifically, driving forces promote change while restraining forces oppose change. Hence, change will occur when the combined strength of one force is greater than the combined strength of the opposing set of forces (Robbins 564-65). Lippitt’s Phases of Change Theory: Lippitt, Watson, and Westley (1958) extend Lewin’s Three-Step Change Theory. They created a seven-step theory that focuses more on the role and responsibility of the change agent than on the evolution of the change itself. Information is continuously exchanged throughout the process. The seven steps are:", "title": "" }, { "docid": "3c4f4cda01b73c141c3f9b878ed734a3", "text": "Recent studies in computer vision have shown that, while practically invisible to a human observer, skin color changes due to blood flow can be captured in face videos and, surprisingly, be used to estimate the heart rate (HR). While considerable progress has been made in the last few years, many issues still remain open. In particular, state-of-the-art approaches are not robust enough to operate in natural conditions (e.g., in the presence of spontaneous movements, facial expressions, or illumination changes). In contrast to previous approaches that estimate the HR by processing all the skin pixels inside a fixed region of interest, we introduce a strategy to dynamically select face regions useful for robust HR estimation. Our approach, inspired by recent advances in matrix completion theory, allows us to predict the HR while simultaneously discovering the best regions of the face to be used for estimation.
Thorough experimental evaluation conducted on public benchmarks suggests that the proposed approach significantly outperforms state-of-the-art HR estimation methods in naturalistic conditions.", "title": "" } ]
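A minimal sketch of the remote-photoplethysmography idea underlying the passage above, assuming per-region color traces have already been extracted from the face video; the matrix-completion-based region selection described in the passage is not reproduced here and is replaced by a simple strongest-spectral-peak rule:

```python
# Illustrative sketch only: synthetic per-region colour traces stand in for signals
# extracted from face video; region selection here is a naive spectral-peak rule,
# not the matrix-completion strategy described in the passage.
import numpy as np
from scipy.signal import butter, filtfilt, periodogram

fs = 30.0                      # assumed video frame rate (Hz)
t = np.arange(0, 20, 1 / fs)   # 20 s of frames
true_hr_hz = 72 / 60.0

rng = np.random.default_rng(1)
# Region 0 carries a clean pulse; the others are progressively noisier.
regions = np.stack([
    np.sin(2 * np.pi * true_hr_hz * t) + rng.normal(scale=s, size=t.size)
    for s in (0.3, 1.0, 3.0)
])

# Band-pass filter covering roughly 42-240 bpm.
b, a = butter(3, [0.7 / (fs / 2), 4.0 / (fs / 2)], btype="band")

best_power, best_hr = -np.inf, None
for trace in regions:
    filtered = filtfilt(b, a, trace - trace.mean())
    freqs, power = periodogram(filtered, fs=fs)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak = np.argmax(power[band])
    if power[band][peak] > best_power:
        best_power, best_hr = power[band][peak], freqs[band][peak] * 60
print(f"Estimated HR: {best_hr:.1f} bpm")
```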
scidocsrr